id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2305.10808 | Manifold-Aware Self-Training for Unsupervised Domain Adaptation on
Regressing 6D Object Pose | Domain gap between synthetic and real data in visual regression (e.g. 6D pose
estimation) is bridged in this paper via global feature alignment and local
refinement on the coarse classification of discretized anchor classes in target
space, which imposes a piece-wise target manifold regularization into
domain-invariant representation learning. Specifically, our method incorporates
an explicit self-supervised manifold regularization, revealing consistent
cumulative target dependency across domains, to a self-training scheme (e.g.
the popular Self-Paced Self-Training) to encourage more discriminative
transferable representations of regression tasks. Moreover, learning unified
implicit neural functions to estimate relative direction and distance of
targets to their nearest class bins aims to refine target classification
predictions, which can gain robust performance against inconsistent feature
scaling sensitive to UDA regressors. Experiment results on three public
benchmarks of the challenging 6D pose estimation task can verify the
effectiveness of our method, consistently achieving superior performance to the
state-of-the-art for UDA on 6D pose estimation. | Yichen Zhang, Jiehong Lin, Ke Chen, Zelin Xu, Yaowei Wang, Kui Jia | 2023-05-18T08:42:41Z | http://arxiv.org/abs/2305.10808v2 | # Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose
###### Abstract
The domain gap between synthetic and real data in visual regression (_e.g._ 6D pose estimation) is bridged in this paper via global feature alignment and local refinement on the coarse classification of discretized anchor classes in target space, which imposes a piece-wise target manifold regularization on domain-invariant representation learning. Specifically, our method incorporates an explicit self-supervised manifold regularization, revealing consistent cumulative target dependency across domains, into a self-training scheme (_e.g._ the popular Self-Paced Self-Training) to encourage more discriminative and transferable representations for regression tasks. Moreover, learning unified implicit neural functions to estimate the relative direction and distance of targets to their nearest class bins refines the coarse classification predictions, which yields robust performance against the inconsistent feature scaling that UDA regressors are sensitive to. Experimental results on three public benchmarks of the challenging 6D pose estimation task verify the effectiveness of our method, which consistently achieves performance superior to the state of the art for UDA on 6D pose estimation. Code is available at [https://github.com/Gorilla-Lab-SCUT/MAST](https://github.com/Gorilla-Lab-SCUT/MAST).
## 1 Introduction
The problems of visual regression, such as estimating the 6D pose of object instances (_i.e._ their orientation and translation with respect to the camera optical center) and the configuration of human body parts given RGB images, are widely encountered in numerous fields such as robotics [10, 13], augmented reality [16, 17] and autonomous driving [1, 18], and can typically be addressed by learning a single- or multi-output regression mapping on deep representations of visual observations. Recent regression algorithms have achieved remarkable success in handling inconsistent lighting conditions and heavy occlusions between foreground and contextual objects in uncontrolled and cluttered environments, owing to recent developments in representation learning for visual regression, such as the introduction of self-supervised regularization [14] and powerful network architectures [15].
In those regression tasks, visual observations, _i.e._ RGB images, can be easily acquired in practice or directly collected from the Internet, but manual, noise-free annotation of continuous targets is laborious or even infeasible. As a result, the size of real training data with precise labels is typically limited and hardly scalable, _e.g._ the _eggbox_ and _holepuncher_ training samples in the LineMOD [13] for 6D pose estimation, which increases the difficulty of learning good representations. Image synthesis can be a powerful solution to cope with data sparsity, and can be obtained via photorealistic rendering [14] with CAD models. However, the domain discrepancy between synthetic and real data, _e.g._ appearance differences between CAD models and real objects, scene illumination, and systematic imaging noise, can lead to a collapse of regression performance, which motivates the practical setting of unsupervised domain adaptation on visual regression (UDAVR), _i.e._ samples in the source and target domains do not satisfy the i.i.d. condition.

Figure 1: Illustration of the proposed Manifold-Aware Self-Training (MAST) for UDA on 6D pose estimation. Top: a novel cumulative target correlation (CTC) regularization on representation learning. Middle: the bin with the highest confidence (_i.e._ highlighted with the largest red disk) together with further local refinement (_i.e._ the gray lines) is adopted to generate pseudo pose labels in our MAST. Bottom: the t-SNE visualization of the feature distribution w/ and w/o the proposed CTC, which verifies the effectiveness of introducing target manifolds with a smoother distribution; a comparison of our MAST and its backbone on an example from the Occluded LineMOD dataset is also given.
Different from the widely investigated problem of unsupervised domain adaptation on visual classification (UDAVC) [1, 13, 14], only a few works [3, 15] have explored the vital factors of representation learning for visual regression that differ from classification in the context of UDA. [3] revealed and exploited the sensitivity of domain adaptation regression performance to feature scaling in order to regularize representation learning, which achieves promising results in bridging the domain gap. We argue that the _cumulative dependent nature_ of and the _piece-wise manifolds_ in the target space are two key factors of UDA regression that are missing in existing algorithms. To this end, this paper proposes a **M**anifold-**A**ware **S**elf-**T**raining (MAST) scheme to decompose the problem of learning a domain-invariant regression mapping into a combination of a feature-scaling-robust global coarse classification of discretized target anchors via self-training based feature alignment and a local regression-based refinement that is less sensitive to inconsistent feature scales, as shown in Figure 1.
To exploit the cumulative dependent nature of regression targets, which differs from that of classification, the self-training method (_e.g._ self-paced self-training [14]) originally designed for the UDAVC problem is adapted to the coarse classification over a discretization of the continuous target space, incorporating a novel piece-wise manifold regularization on domain-invariant representation learning, namely a self-supervised cumulative target correlation regularization. Intuitively, appearance ambiguities across domains in representation learning can be mitigated by leveraging consistent target correlation under certain distance metrics in target space (_e.g._ the Euclidean distance in the \(\mathbb{R}^{3}\) translation space). Furthermore, considering the risk of sensitivity to varying feature scaling in the UDAVR problem [3], learning unified local regression functions on the features shared with the classification of discretized target bins (which typically have inconsistent feature scales) achieves superior robustness against large scale variations of transferable representations. Extensive experiments on three popular benchmarks of the challenging UDA on 6D pose estimation confirm the effectiveness of our MAST scheme, which consistently outperforms the state of the art.
The novelties of our paper are summarized as follows.
* This paper proposes a novel and generic manifold-aware self-training scheme for unsupervised domain adaptation on visual regression, which exploits cumulative correlation and piece-wise manifolds in regression target space for domain-invariant representation learning.
* Technically, a novel cumulative target correlation regularization is proposed to regularize the self-training algorithm on coarse classification with latent dependency across regression targets, while local refinement can be achieved via learning implicit functions to estimate residual distance to the nearest anchor within local target manifolds in a unified algorithm.
* Experiment results on multiple public benchmarks of UDA on 6D pose estimation can verify consistent superior performance of our scheme to the state-of-the-art UDA pose regressors.
## 2 Related Works
6D Pose Estimation.The problem of estimating the 6D pose of object instances within an RGB image (optionally with a complementary depth image) is active yet challenging in robotics and computer vision. With the rise of deep learning, recent methods for predicting 6D poses can be divided into two main groups - keypoint-based [17, 18] and regression-based [16, 15, 12]. The former rely on learning a 2D-to-3D correspondence mapping between object keypoints in 3D space and their 2D projections on images, followed by the Perspective-n-Point (PnP) algorithm [12]. Such a correspondence can be obtained by either detecting a limited number of landmarks [12, 13] or pixel-wise voting from a heatmap [14, 15]. The latter focus on deep representation learning for direct pose regression, either with a point-matching loss for optimizing the output pose [16, 12] or with a differentiable PnP paradigm trained in an end-to-end style [15, 14]. Alternatively, the problem can also be formulated as ordinal classification via discretization of the \(SE(3)\) space into class bins [16, 17]. To alleviate representation ambiguities, the estimated 6D pose of objects can be further refined via either iterative refinement with residual learning [17, 18] or simply the Iterative Closest Point algorithm [16], while some works introduced cross-view fusion based refinement [12, 15]. Existing refinement strategies are typically employed as a post-processing step following the main module of 6D pose estimation; some of them, such as [15, 16], can be designed as an end-to-end learning cascade to obtain significant performance gains, but they are not designed for bridging the domain gap and therefore cannot ensure good performance under the UDAVR setting. Alternatively, [15] introduced a scheme combining coarse classification and local regression-based refinement, which is similar to our MAST method. However, the main difference lies in the introduction of the cumulative target correlation regularization in our scheme to encourage domain-invariant pose representations that reveal the dependent nature of regression targets.
Unsupervised Domain Adaptation on Visual Regression.Most regression methods [16, 15] employ annotated real data for model training, but manual annotations on real data are usually laborious and expensive to obtain, or even infeasible. The lack of sufficient annotated real data encourages the practical setting of Simulation-to-Reality (Sim2Real) UDAVR, _i.e._ learning a domain-agnostic representation given annotated synthetic data as the source domain and unlabeled real data as the target domain. A simple yet effective way to narrow the Sim2Real domain gap relies on domain randomization [14, 15], while the recent success of self-supervised learning for UDAVC [16, 17] inspired a number of self-supervised regressors [15, 16] in the context of regression. Self6D [15] and its extension Self6D++ [15] leveraged a differentiable renderer to conduct self-supervised visual and geometrical alignment on visible and amodal object mask predictions. Bao _et al._[1] introduced self-supervised representation learning of relative rotation estimation to adapt a gaze regressor to the target domain. Zhang _et al._[16] utilized a Graph Convolutional Network to model domain-invariant geometric structure among keypoints, which is applied to guide training of the object pose estimator on real images. These algorithms were designed for a single specific task and cannot be directly applied to other visual regression problems. [3] proposed the representation subspace distance (RSD), which is generic to multiple UDAVR problems, but it cannot perform well on challenging tasks with severe representation ambiguities, _e.g._ the 6D pose estimation investigated in this paper (see Table 3). In contrast, the proposed MAST scheme is generic to UDAVR owing to exploiting explicit target correlation, in the form of local manifolds, to regularize deep representation learning to be agnostic to domains.
**Self-Training.** Self-training methods utilize a model trained on labeled data to make predictions on unannotated data as pseudo labels [13] (_i.e._ supervision signals assigned to unlabeled data), and are widely used in semi-supervised learning [13, 14] and UDA [15]. [16] generated pseudo labels from weakly augmented images, which are adopted as supervision for their strongly augmented variants in semi-supervised learning; a similar idea is shared by noisy student training [17]. [3] proposed co-training for domain adaptation, which slowly adds to the training set both the target features and the instances on which the current algorithm is most confident. [16] proposed self-paced self-training (SPST) for unsupervised domain adaptation on classification, which performs self-paced learning [16] via optimization of an objective with latent variables. The representative SPST has inspired a number of follow-up works such as [16] and [3]. Nevertheless, all existing self-training algorithms were designed for classification or segmentation, while self-training for UDAVR remains a promising yet less explored direction.
## 3 Methodology
Given a source domain \(\{\mathcal{I}^{i}_{S},\mathbf{y}^{i}_{S}\}_{i=1}^{N_{S}}\) with \(N_{S}\) labeled samples and a target domain \(\{\mathcal{I}^{i}_{T}\}_{i=1}^{N_{T}}\) with \(N_{T}\) unlabeled samples, tasks of UDAVR aim at learning a domain-invariant regression mapping to a shared continuous label space \(\mathcal{Y}\). In the context of our focused 6D object pose estimation, the source and target domains are often the synthetic and real-world data, respectively, while the shared label space between two domains is the whole learning space of \(SE(3)\).
To deal with the problems of UDAVR introduced in Sec. 1, _e.g._, cumulative dependent nature and piece-wise manifolds in target space, we propose in this paper a manifold-aware self-training scheme, which decomposes the learning of \(SE(3)\) space into a global classification on discretized pose anchors and a local pose refinement for feature scaling robustness, and incorporates a self-supervised manifold regularization to the self-training.
### The Design of Network Architecture
Given a batch of \(B\) object-centric RGB images \(\{\mathcal{I}^{b}\}_{b=1}^{B}\) as input, the proposed scheme is designed to predict 6D object poses \(\{\mathcal{T}^{b}\}_{b=1}^{B}\), with each pose \(\mathcal{T}=[\mathbf{R}|\mathbf{t}]\) represented by a 3D rotation \(\mathbf{R}\in SO(3)\) and a 3D translation \(\mathbf{t}\in\mathbb{R}^{3}\). The whole network architecture is shown in Fig. 2, which consists of three main modules, including a **Feature Extractor**, a **Coarse Classifier** of discretized pose anchors, and a **Fine Regressor of Residual Poses** to the nearest anchor.
More specifically, we employ the same feature extractor as [1] to learn the pose-sensitive feature vectors \(\{\mathbf{f}^{b}\in\mathbb{R}^{C}\}_{b=1}^{B}\) from each frame, which are then fed into the decoupled coarse classifier and fine regressor individually, whose outputs are combined as the final pose predictions. The former learns coarse poses via classification on the discretized pose anchors, while the latter learns residual poses to locally refine the coarse poses of the anchors; both modules share the same input features, achieving superior robustness against inconsistent feature scaling. We will take a single image as an example to detail the two pose estimation modules shortly, and thus omit the superscript \(b\) of the notations for simplicity in the following subsections.
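To make the decoupled design concrete, below is a minimal PyTorch-style sketch of the two heads on top of the shared pose-sensitive features. It is not the authors' released implementation: the hidden width, the two-layer MLP structure and the softmax placement are illustrative assumptions; only the anchor counts (60/20/20/40) follow the implementation details reported later in the paper.

```python
# Minimal PyTorch-style sketch of the decoupled heads (not the authors' code;
# hidden width and the two-layer MLP structure are illustrative assumptions).
import torch
import torch.nn as nn

class CoarseFineHeads(nn.Module):
    def __init__(self, feat_dim=512, n_rot=60, n_vx=20, n_vy=20, n_z=40):
        super().__init__()
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
        # Coarse classifier: one MLP per target, probabilities over anchor bins.
        self.cls = nn.ModuleDict({'R': mlp(n_rot), 'vx': mlp(n_vx),
                                  'vy': mlp(n_vy), 'z': mlp(n_z)})
        # Fine regressor: residuals for every anchor (a 6D rotation per anchor).
        self.reg = nn.ModuleDict({'R': mlp(n_rot * 6), 'vx': mlp(n_vx),
                                  'vy': mlp(n_vy), 'z': mlp(n_z)})

    def forward(self, f):  # f: (B, feat_dim) pose-sensitive features
        scores = {k: torch.softmax(head(f), dim=-1) for k, head in self.cls.items()}
        residuals = {k: head(f) for k, head in self.reg.items()}
        residuals['R'] = residuals['R'].view(f.size(0), -1, 6)   # (B, n_rot, 6)
        return scores, residuals
```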
**Coarse Classification on Discretized Pose Anchors.** Given the pose-sensitive feature \(\mathbf{f}\) of \(\mathcal{I}\), the goal of this module is to globally make coarse predictions of \(\mathbf{R}\) and \(\mathbf{t}\) via classification on their pre-defined anchors, respectively. For the rotation \(\mathbf{R}\), we generate \(N_{\mathbf{R}}\) anchors that are uniformly distributed on the whole \(SO(3)\) space as [11], which are denoted as \(\{\mathbf{R}^{1}_{a},\cdots,\mathbf{R}^{N_{\mathbf{R}}}_{a}\}\). For the translation \(\mathbf{t}\), we factorize it into three individual classification targets, including the two translation components \(v_{x}\) and \(v_{y}\) on the image coordinate system along X-axis and Y-axis, with the remaining component \(z\) along Z-axis; for each classification target \(t\in\{v_{x},v_{y},z\}\), we discretize the range of \([d_{t}^{min},d_{t}^{max}]\) into \(N_{t}\) bins uniformly, and use the bin centers \(\{t_{a}^{1},\cdots,t_{a}^{N_{t}}\}\) as the anchors of \(t\). We implement the classifier as four Multilayer Perceptrons (MLPs) with \(N_{\mathbf{R}},N_{v_{x}},N_{v_{y}},N_{z}\) output neurons, which are collectively denoted as the probabilities \(\mathbf{S_{R}}\in\mathbb{R}^{N_{\mathbf{R}}}\), \(\mathbf{S}_{v_{x}}\in\mathbb{R}^{N_{v_{x}}}\), \(\mathbf{S}_{v_{y}}\in\mathbb{R}^{N_{v_{y}}}\), and \(\mathbf{S}_{z}\in\mathbb{R}^{N_{z}}\) of \(\mathbf{R}\), \(v_{x}\), \(v_{y}\) and \(z\), respectively. Denoting their indexes of maximal probabilities as \(i^{max}_{\mathbf{R}}\), \(i^{max}_{v_{x}}\), \(i^{max}_{v_{y}}\) and \(i^{max}_{z}\), the classifier finally gives out their coarse pose predictions as \(\mathbf{R}_{cls}=\mathbf{R}^{i^{max}_{\mathbf{R}}}_{a}\), \(v_{x,cls}=v^{i^{max}_{v_{x}}}_{x,a}\), \(v_{y,cls}=v^{i^{max}_{v_{y}}}_{y,a}\) and \(z_{cls}=z^{i^{max}_{z}}_{a}\).
Fine Regressor of Residual Poses.This module shares the same input feature \(\mathbf{f}\) as the coarse classifier to make the learning more robust to feature scale variations, and is implemented as four MLPs with \(N_{\mathbf{R}}\times 6,N_{v_{x}},N_{v_{y}},N_{z}\) output neurons to regress the residuals of the pose anchors. We collectively denote the outputs as \(\{\mathbf{R}^{i}_{reg,6D}\}_{i=1}^{N_{\mathbf{R}}}\), \(\{v^{i}_{x,reg}\}_{i=1}^{N_{v_{x}}}\), \(\{v^{i}_{y,reg}\}_{i=1}^{N_{v_{y}}}\), and \(\{z^{i}_{reg}\}_{i=1}^{N_{z}}\); here we use the continuous 6D representations of rotation [11] as the regression target, which can be transformed into rotation matrices \(\{\mathbf{R}^{i}_{reg}\}_{i=1}^{N_{\mathbf{R}}}\). According to probabilities of the classifier, the fine regressor refines the coarse predictions via the residuals \(\mathbf{R}_{reg}=\mathbf{R}^{i^{max}_{\mathbf{R}}}_{reg}\), \(v_{x,reg}=v^{i^{max}_{v_{x}}}_{x,reg}\), \(v_{y,reg}=v^{i^{max}_{v_{y}}}_{y,reg}\), and \(z_{reg}=z^{i^{max}_{z}}_{reg}\).
Combining coarse anchor predictions and their residuals, our proposed network can generate the final object pose \(\mathcal{T}=[\mathbf{R}|\mathbf{t}]\), with \(\mathbf{t}=[x,y,z]\), as follows:
\[\left\{\begin{aligned} \mathbf{R}&=\mathbf{R}_{reg}\cdot\mathbf{R}_{ cls}\\ x&=(v_{x,cls}+v_{x,reg})\cdot z/f_{x}\\ y&=(v_{y,cls}+v_{y,reg})\cdot z/f_{y}\\ z&=z_{cls}+z_{reg}\end{aligned}\right., \tag{1}\]
where \(f_{x}\) and \(f_{y}\) are the focal lengths along X-axis and Y-axis, respectively.
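The following NumPy sketch (variable names are ours, and the rotation residuals are assumed to be already converted from the 6D representation to \(3\times 3\) matrices) illustrates how Eq. (1) combines the argmax anchor of each classifier with the corresponding regressed residual.

```python
# NumPy sketch of Eq. (1): combine the argmax anchors with their regressed
# residuals; rotation residuals are assumed already converted to 3x3 matrices.
import numpy as np

def decode_pose(scores, res_R, res_vx, res_vy, res_z,
                R_anchors, vx_anchors, vy_anchors, z_anchors, fx, fy):
    iR, ix = int(np.argmax(scores['R'])), int(np.argmax(scores['vx']))
    iy, iz = int(np.argmax(scores['vy'])), int(np.argmax(scores['z']))
    R = res_R[iR] @ R_anchors[iR]                    # R = R_reg . R_cls
    z = z_anchors[iz] + res_z[iz]                    # z = z_cls + z_reg
    x = (vx_anchors[ix] + res_vx[ix]) * z / fx       # x = (v_x,cls + v_x,reg) * z / f_x
    y = (vy_anchors[iy] + res_vy[iy]) * z / fy       # y = (v_y,cls + v_y,reg) * z / f_y
    return R, np.array([x, y, z])
```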
### Manifold-Aware Objective
To train our network, we formulate the following manifold-aware objective \(\mathcal{L}\) via combining a **coarse-to-fine pose decomposition loss**\(\mathcal{L}_{pose}\) with a **cumulative target correlation regularization**\(\mathcal{L}_{ctc}\):
\[\mathcal{L}=\mathcal{L}_{pose}+\mathcal{L}_{ctc}, \tag{2}\]
where \(\mathcal{L}_{pose}\) favors for domain-invariant representations in 6D pose estimation across domains, while \(\mathcal{L}_{ctc}\) enforces target manifolds into representation learning.
Coarse-to-fine Pose Decomposition Loss.\(\mathcal{L}_{pose}\) consists of two loss terms \(\mathcal{L}_{cls}\) and \(\mathcal{L}_{reg}\) for the coarse classifier and the fine regressor, respectively, as follows:
\[\mathcal{L}_{pose}=\frac{1}{B}\sum_{b=1}^{B}\mathcal{L}_{cls}^{b}+\mathcal{L}_ {reg}^{b}. \tag{3}\]
For simplicity, we introduce \(\mathcal{L}_{pose}\) on single input, and thus omit the batch index \(b\) accordingly.
For the coarse classifier, given the ground truth pose \(\tilde{\mathcal{T}}=[\tilde{\mathbf{R}}|\tilde{\mathbf{t}}]\), with \(\tilde{\mathbf{t}}=[\tilde{x},\tilde{y},\tilde{z}]\) (and \(\tilde{v}_{x},\tilde{v}_{y}\)), we first adopt a sparse scoring strategy to assign the labels for \(\mathbf{S_{R}}\), \(\mathbf{S}_{v_{x}}\), \(\mathbf{S}_{v_{y}}\) and \(\mathbf{S}_{z}\), resulting in \(\tilde{\mathbf{S}}_{\mathbf{R}}\), \(\tilde{\mathbf{S}}_{v_{x}}\), \(\tilde{\mathbf{S}}_{v_{y}}\) and \(\tilde{\mathbf{S}}_{z}\), respectively, with each element \(\tilde{s}^{i}_{t}\) (\(t\in\{\mathbf{R},v_{x},v_{y},z\}\)) assigned as follows:
\[\tilde{s}^{i}_{t}=\left\{\begin{aligned} \theta_{t,1},& \quad i\in\text{NN}_{1}(\tilde{t})\\ \theta_{t,2},&\quad i\in\text{NN}_{k_{t}}(\tilde{t}) \backslash\text{NN}_{1}(\tilde{t})\\ 0,&\quad Otherwise\end{aligned}\right., \tag{4}\]
where \(\theta_{t,1}\gg\theta_{t,2}\), and \(\theta_{t,1}+(k_{t}-1)\theta_{t,2}=1\). \(\text{NN}_{k_{t}}(\tilde{t})\) denotes the set of indexes of the \(k_{t}\) nearest anchors of \(\tilde{t}\).1 With the assigned labels, we use the cross-entropy loss \(\mathcal{H}\) on top of the classifier as follows:
Footnote 1: We use the geodesic distance [1] to measure the distance of two rotations \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\) as \(\arccos(\frac{\mathrm{trace}(\mathbf{R}_{1}\mathbf{R}_{2}^{T})-1}{2})\), and use the difference value to measure that of two scalars.
\[\mathcal{L}_{cls}=\sum_{t\in\{\mathbf{R},v_{x},v_{y},z\}}\mathcal{H}(\mathbf{S}_{t}, \tilde{\mathbf{S}}_{t}). \tag{5}\]
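A small sketch of the sparse scoring strategy of Eq. (4) for a scalar target such as \(v_x\), \(v_y\) or \(z\); the \(\theta\) values and \(k\) shown match the translation setting reported in the implementation details, while for rotation the distance would be the geodesic distance of Footnote 1 rather than the absolute difference used here.

```python
# Sketch of the sparse scoring strategy of Eq. (4) for a scalar target (v_x, v_y or z).
import numpy as np

def sparse_labels(anchors, gt, k=7, theta1=0.55, theta2=0.075):
    """Soft classification target over the anchor bins for ground-truth value gt."""
    assert abs(theta1 + (k - 1) * theta2 - 1.0) < 1e-6   # constraint from Eq. (4)
    nearest = np.argsort(np.abs(np.asarray(anchors) - gt))[:k]
    s = np.zeros(len(anchors))
    s[nearest] = theta2          # second-tier score for the k nearest anchors
    s[nearest[0]] = theta1       # highest score for the single nearest anchor
    return s
```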
Figure 2: The pipeline of our manifold-aware self-training scheme.

For the fine regressor, we make individual predictions on each anchor of \(t\in\{\mathbf{R},(v_{x},v_{y}),z\}\) by combining the paired classification and regression results, and supervise the predictions of their top \(K\) nearest anchors of \(\tilde{t}\) as follows:
\[\begin{split}\mathcal{L}_{reg}=&\sum_{i\in\text{NN}_{k_ {\text{R}}}(\tilde{\mathbf{R}})}\mathcal{D}(\mathcal{T}_{\mathbf{R}^{i}},\tilde{ \mathcal{T}})+\sum_{i\in\text{NN}_{k_{z}}(\tilde{z})}\mathcal{D}(\mathcal{T}_{z ^{i}},\tilde{\mathcal{T}})\\ &+\sum_{i\in\text{NN}_{k_{v_{x}v_{y}}}(\tilde{v}_{x}\tilde{v}_{y} )}\mathcal{D}(\mathcal{T}_{v_{x}^{i}v_{y}^{i}},\tilde{\mathcal{T}}),\end{split} \tag{6}\]
where \(t^{i}\) denotes the prediction of the anchor \(i\) of \(t\), and \(\mathcal{T}_{t^{i}}\) denotes the object pose computed by \(t^{i}\) and other ground truths \(\{\tilde{\mathbf{R}},(\tilde{v}_{x},\tilde{v}_{y}),\tilde{z}\}\backslash\tilde{t}\). \(\mathcal{D}(\cdot,\cdot)\) is the \(L_{1}\) distance between the point sets transformed by two object poses from the same object point cloud \(\mathcal{O}\), as follows:
\[\mathcal{D}(\mathcal{T},\tilde{\mathcal{T}})=\frac{1}{|\mathcal{O}|}\sum_{x\in \mathcal{O}}\|\mathcal{T}x-\tilde{\mathcal{T}}x\|_{1}. \tag{7}\]
Following [1], we combine the supervision of \(v_{x}\) and \(v_{y}\) for convenience in (6), and also employ the same strategy to handle object symmetries by finding the closest ground truth rotation to the predicted one.
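For reference, a NumPy sketch of the point-matching \(L_1\) distance of Eq. (7) between a sampled object point cloud under two poses; the symmetry handling mentioned above (searching the closest ground-truth rotation) is omitted.

```python
# NumPy sketch of the point-matching L1 distance of Eq. (7).
import numpy as np

def transform(points, R, t):
    return points @ R.T + t                          # apply pose [R|t] to an Nx3 point set

def point_matching_l1(points, R_pred, t_pred, R_gt, t_gt):
    diff = transform(points, R_pred, t_pred) - transform(points, R_gt, t_gt)
    return np.abs(diff).sum(axis=1).mean()           # mean L1 norm over the object points
```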
Cumulative Target Correlation Regularization.For regression tasks, continuous targets preserve a latent cumulative dependency [10]. When we discretize the continuously changing targets into discrete labels for classification, an assumption of independence across targets is adopted, which is invalid when regressing continuous targets. As a result, each class cannot seek support from samples of correlated classes, which can significantly reduce performance, especially for sparse and imbalanced data distributions. To better cope with this problem, we propose to regularize the features by an explicit relation in the regression target space.
Given the pose-sensitive feature vectors \(\{\mathbf{f}^{b}\in\mathbb{R}^{C}\}_{b=1}^{B}\) of a mini-batch of inputs \(\{\mathcal{I}^{b}\}_{b=1}^{B}\), we first build the feature correlation graph \(\mathcal{G}\in\mathbb{R}^{B\times B}\) across the data batch via feature cosine similarities, with the element \(g^{ij}\) indexed by \((i,j)\) computed as follows:
\[g^{ij}=\frac{<\mathbf{f}^{i},\mathbf{f}^{j}>}{||\mathbf{f}^{i}||_{2}\cdot||\mathbf{f}^{j}||_{2 }}, \tag{8}\]
where \(<\cdot,\cdot>\) denotes inner product. We then build the ground truth \(\tilde{\mathcal{G}}\) based on a pre-computed correlation graph \(\tilde{\mathcal{G}}_{0}\in\mathbb{R}^{N\times N}\) with \(N\) pose classes; assuming the classes of \(\mathcal{I}_{i}\) and \(\mathcal{I}_{j}\) are \(n_{i}\) and \(n_{j}\), respectively, we assign the value of \(\tilde{g}^{ij}\in\tilde{\mathcal{G}}\) as that of \(\tilde{g}_{0}^{n_{i}n_{j}}\). Finally, the proposed target correlation regularizer can be simply written as the squared \(L_{2}\) distance between \(\mathcal{G}\) and \(\tilde{\mathcal{G}}\):
\[\mathcal{L}_{ctc}=\|\mathcal{G}-\tilde{\mathcal{G}}\|_{2}^{2}. \tag{9}\]
There are multiple choices for building the pose-related correlation graph \(\tilde{\mathcal{G}}_{0}\); here we introduce a simple but effective one, which utilizes the similarity of depth components of translations along Z-axis to initialize \(\tilde{\mathcal{G}}_{0}\), with \(N=N_{z}\). Specifically, for the anchors \(\{z_{a}^{1},\cdots,z_{a}^{N}\}\) of \(z\), we map them linearly to the angles \(\{\phi^{1},\cdots,\phi^{N}\}\) as follows:
\[\phi^{n}=\frac{z_{a}^{n}}{z^{max}-z^{min}}\cdot\frac{\pi}{2}, \tag{10}\]
and the element \(\tilde{g}_{0}^{n_{i}n_{j}}\) of \(\tilde{\mathcal{G}}_{0}\) indexed by \((n_{i},n_{j})\) can be defined as the cosine of difference between the angles:
\[\tilde{g}_{0}^{n_{i}n_{j}}=\cos(|\phi^{n_{i}}-\phi^{n_{j}}|). \tag{11}\]
When \(z_{a}^{n_{i}}\) and \(z_{a}^{n_{j}}\) are close, the difference of their corresponding angles is small, and thus the correlation value of \(\tilde{g}_{0}^{n_{i}n_{j}}\) will be large. The reason for choosing \(z\) is that the learning of this component is very challenging in 6D pose estimation without depth information. Experimental results in Sec. 4.2 also verify the effectiveness of our regularization.
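The CTC regularizer of Eqs. (8)-(11) reduces to a few matrix operations per batch, as in the illustrative NumPy sketch below (input and variable names are ours, not the authors').

```python
# NumPy sketch of the CTC regularizer, Eqs. (8)-(11).
import numpy as np

def ctc_loss(feats, z_classes, z_anchors, z_min, z_max):
    """feats: (B, C) features; z_classes: (B,) depth-bin index of each sample."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    G = f @ f.T                                                      # Eq. (8): cosine-similarity graph
    phi = np.asarray(z_anchors) / (z_max - z_min) * (np.pi / 2)     # Eq. (10)
    G0 = np.cos(np.abs(phi[:, None] - phi[None, :]))                # Eq. (11)
    idx = np.asarray(z_classes)
    G_gt = G0[np.ix_(idx, idx)]                                     # ground-truth graph for this batch
    return np.sum((G - G_gt) ** 2)                                  # Eq. (9): squared L2 distance
```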
### Manifold-Aware Self-training
To reduce the Sim2Real domain gap, we design a manifold-aware self-training scheme for unsupervisedly adapting the pose estimator, which adaptively incorporates our proposed manifold-aware training objective in (2) with Self-Paced Self-Training [22] to select target samples in an easy-to-hard manner. More specifically, we first train a teacher model \(\mathcal{M}_{T}\) on the labeled synthetic data (source domain) as a pseudo-label annotator for the unlabeled real-world data (target domain), and select the training samples from the real data with pseudo labels for the learning of a student model \(\mathcal{M}_{S}\). Both teacher and student models share the same networks introduced in Sec. 3.1, and are trained by solving the problems of \(\min_{\mathcal{M}_{T}}\mathcal{L}\) and \(\min_{\mathcal{M}_{S}}\mathcal{L}\), respectively.
The core of sample selection on the target domain lies in the quality of the pseudo labels. For visual classification tasks, the categorical probabilities are usually used as the measure of quality, while for visual regression tasks, _e.g._, object pose estimation in this paper, direct usage of the typical mean square error (MSE) can be less effective due to the lack of directional constraints for adaptation. From a geometric viewpoint, all points on the surface of a ball have the same MSE distance to its center, yet there exist regions of the object surface that are optimal for domain adaptation, which the MSE metric cannot distinguish. Owing to the decomposition of object pose estimation into coarse classification and fine regression in our MAST scheme, we can flexibly exploit the classification scores to indicate the quality of pseudo labels, since the coarse classification points out the overall direction of pose estimation. In practice, we use the probabilities \(\mathbf{S}_{z}\) as confidence scores because UDA on classification performs more stably and robustly, and set a threshold \(\tau\) to select the samples with scores larger than \(\tau\) for training \(\mathcal{M}_{S}\); a larger score indicates a higher-quality pseudo label. Following [22], the threshold \(\tau\) is gradually decreased during training, realizing learning in an easy-to-hard manner and making \(\mathcal{M}_{S}\) generalize to harder target samples.
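A minimal sketch of this selection step is given below: samples whose depth-classification confidence exceeds the current threshold \(\tau\) are kept, and \(\tau\) is lowered between rounds. The concrete schedule values are illustrative assumptions, not the paper's.

```python
# Sketch of the confidence-based selection used in the easy-to-hard self-training loop.
import numpy as np

def select_pseudo_labelled(conf_z, samples, tau):
    """Keep target samples whose maximal depth-classification probability exceeds tau."""
    keep = np.asarray(conf_z) > tau
    return [s for s, k in zip(samples, keep) if k]

def tau_schedule(round_idx, tau_start=0.9, step=0.05, tau_min=0.5):
    """Gradually lowered threshold; the concrete numbers here are only illustrative."""
    return max(tau_min, tau_start - step * round_idx)
```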
## 4 Experiments
Datasets and Settings.The LineMOD dataset [16] provides individual videos of 13 texture-less objects, which are recorded in cluttered scenes with challenging lighting variations. For each object, we follow [1] to use randomly sampled \(15\%\) of the sequence as the real-world training data of the target domain, and the remaining images are set aside for testing. The Occluded LineMOD dataset [1] is a subset
of the LineMOD with 8 different objects, which is formed by the images with severe object occlusions and self-occlusions. We follow [21] to split the training and test sets. The HomebrewedDB dataset [17] provides newly captured test images of three objects in the LineMOD, including bwise, driller and phone. Following Self-6D [21], the second sequence of HomebrewedDB is used to test our models trained on the LineMOD, in order to evaluate the robustness of our method to different variations, _e.g._, scene layouts and camera intrinsics. In the experiments, the above three real-world datasets are considered as the target domains, all of which share the same synthetic source domain. We employ the publicly available synthetic data provided by the BOP challenge [1] as the source data, which contains 50k images generated by physically-based rendering (PBR) [1].
Evaluation Metrics.Following [21], we employ the Average Distance of model points (ADD) [11] as the evaluation metric of the 6D poses for asymmetric objects, which measures the average deviation of the model point set \(\mathcal{O}\) transformed by the estimated pose \(\mathcal{T}=[\mathbf{R}|\mathbf{t}]\) and that transformed by the ground-truth pose \(\tilde{\mathcal{T}}=[\tilde{\mathbf{R}}|\tilde{\mathbf{t}}]\):
\[\mathcal{D}_{\text{ADD}}(\mathcal{T},\tilde{\mathcal{T}})=\frac{1}{|\mathcal{ O}|}\sum_{\mathbf{x}\in\mathcal{O}}\|(\mathbf{R}\mathbf{x}+\mathbf{t})-(\tilde{\mathbf{R}}\mathbf{x}+ \tilde{\mathbf{t}})\|_{2}. \tag{12}\]
For symmetric objects, we employ the metric of Average Distance of the closest points (ADD-S) [1]:
\[\mathcal{D}_{\text{ADD-S}}(\mathcal{T},\tilde{\mathcal{T}})=\frac{1}{|\mathcal{ O}|}\sum_{\mathbf{x}_{1}\in\mathcal{O}}\min_{\mathbf{x}_{2}\in\mathcal{O}}\|(\mathbf{R}\mathbf{x}_{1}+ \mathbf{t})-(\tilde{\mathbf{R}}\mathbf{x}_{2}+\tilde{\mathbf{t}})\|_{2}. \tag{13}\]
Combining (12) and (13), we report the Average Recall (\(\%\)) of ADD(-S) less than \(10\%\) of the object's diameter on all the three datasets.
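For completeness, a NumPy sketch of the ADD and ADD-S metrics of Eqs. (12)-(13) and of the recall criterion (ADD(-S) below \(10\%\) of the object diameter).

```python
# NumPy sketch of the ADD / ADD-S metrics, Eqs. (12)-(13), and the recall criterion.
import numpy as np

def add_metric(points, R, t, R_gt, t_gt):
    d = (points @ R.T + t) - (points @ R_gt.T + t_gt)
    return np.linalg.norm(d, axis=1).mean()

def adds_metric(points, R, t, R_gt, t_gt):
    p_est = points @ R.T + t
    p_gt = points @ R_gt.T + t_gt
    # Pairwise distances; for each estimated point take the closest ground-truth point.
    dists = np.linalg.norm(p_est[:, None, :] - p_gt[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def recall_add(add_values, diameter):
    """Average Recall: fraction of estimates with ADD(-S) below 10% of the diameter."""
    return float(np.mean(np.asarray(add_values) < 0.1 * diameter))
```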
Implementation Details.For object detection, we use Mask R-CNN [1] trained purely on synthetic PBR images to generate the object bounding boxes for the target real data. For pose estimation, we set the numbers of anchors as \(N_{\mathbf{R}}=60,N_{v_{x}}=N_{v_{y}}=20,N_{z}=40\), and set
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multicolumn{9}{c|}{Occluded LineMOD} & \multicolumn{4}{c}{HomebrewedDB} \\
\cline{2-14}
 & Ape & Can & Cat & Drill & Duck & _Eggbox_ & _Glue_ & Holep & Mean & Bwise & Drill & Phone & Mean \\
\hline
\multicolumn{14}{c}{Data: syn (w/ GT)} \\
APODD [17] & 2.3 & 4.0 & 1.2 & 7.2 & 10.5 & 4.4 & 12.9 & 7.5 & 6.3 & 52.9 & 37.8 & 7.3 & 32.7 \\
CDPN [11] & 20.0 & 15.1 & 16.4 & 22.2 & 5.0 & 36.1 & 27.9 & 24.0 & 20.8 & -- & -- & -- & -- \\
SD-Pose [11] & 21.5 & 56.7 & 17.0 & 44.4 & 27.6 & 42.8 & 45.2 & 21.6 & 34.6 & -- & -- & -- & -- \\
SSDGD+Ref. [16] & -- & -- & -- & -- & -- & -- & -- & -- & -- & 82.0 & 22.9 & 24.9 & 43.3 \\
Self6D++ [21] & 44.0 & **83.9** & **49.1** & **88.5** & 15.0 & **33.9** & **75.0** & 34.0 & 52.9 & 7.1 & 2.2 & 0.1 & 3.1 \\
MAR (ours) & **44.9** & 78.4 & 40.3 & 73.5 & **47.9** & 26.9 & 72.1 & **58.0** & **55.3** & **92.6** & **91.5** & **80.0** & **88.0** \\
\hline
\multicolumn{14}{c}{Data: syn (w/ GT) + real (w/o GT)} \\
DSCC-PoseNet [21] & 13.9 & 15.1 & 19.4 & 40.5 & 6.9 & 38.9 & 24.0 & 16.3 & 21.9 & 72.9 & 40.6 & 18.5 & 44.0 \\
Sock _et al._ [21] & 12.0 & 27.5 & 12.0 & 20.5 & 23.0 & 25.1 & 27.0 & 35.0 & 22.8 & 57.3 & 46.6 & 41.5 & 52.0 \\
Zhang _et al._ [21] & -- & -- & -- & -- & -- & -- & -- & -- & 33.7 & -- & -- & -- & 63.8 \\
Self6D++ [21] & **57.7** & **95.0** & **52.6** & **90.5** & 26.7 & 45.0 & **87.1** & 23.5 & 59.8 & 56.1 & **97.7** & **85.1** & 79.6 \\
MAST (ours) & 47.6 & 82.9 & 45.4 & 75.0 & **53.7** & **48.2** & 75.3 & **63.0** & **61.4** & **93.8** & 91.5 & 81.8 & **89.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparative evaluation on the Occluded LineMOD and HomebrewedDB datasets w.r.t. the Average Recall (%) of the ADD(-S). Symmetric object classes are in italic. "MAR" (manifold-aware regression) denotes our method without self-training.
Table 1: Comparative evaluation on the LineMOD dataset w.r.t. the Average Recall (%) of ADD(-S). Symmetric object classes are in italic. "MAR" (manifold-aware regression) denotes our method without self-training.
the ranges of \(v_{x}\), \(v_{y}\) and \(z\) as \([d^{min}_{v_{x}},d^{max}_{v_{x}}]=[d^{min}_{v_{y}},d^{max}_{v_{y}}]=[-200,200]\), and \([d^{min}_{z},d^{max}_{z}]=[0.0,2.0]\), respectively. To train our network, we choose \(\theta^{\mathbf{R}}_{1}=0.7,\theta^{\mathbf{R}}_{2}=0.1\) and \(k_{\mathbf{R}}=4\) for rotation in (4), and also set \(\theta^{v_{x}}_{1}=\theta^{v_{y}}_{1}=\theta^{z}_{1}=0.55,\theta^{v_{x}}_{2}=\theta^{v_{y}}_{2}=\theta^{z}_{2}=0.075\), and \(k_{v_{x}}=k_{v_{y}}=k_{z}=7\) for translation. Following the popular setting [22], we train individual networks for all the objects with the Adam optimizer [13]. The teacher model \(\mathcal{M}_{T}\) is first pre-trained on the synthetic images of all objects, and then fine-tuned on each single object, while the parameters of the student model \(\mathcal{M}_{S}\) are initialized as those of \(\mathcal{M}_{T}\); their initial learning rates are \(3\times 10^{-4}\) and \(3\times 10^{-5}\), respectively. The training batch size is set as \(B=32\). We also include the same data augmentation as [1] during training.
### Comparative Evaluation
We compare our method with the existing ones on three benchmarks for 6D object pose estimation with RGB images.
On the LineMOD, we conduct experiments under three settings of training data, including 1) labeled synthetic data, 2) labeled synthetic and real data, and 3) labeled synthetic data and unlabeled real data. Results under the first two settings serve as the lower and upper bounds for those under the last setting. We report quantitative results of competing methods in Table 1, where our method outperforms its competitors by large margins under all the settings, _e.g._, with respective improvements of \(4.7\%\), \(3.5\%\) and \(1.6\%\) over the state-of-the-art Self6D++ [23]. On the Occluded LineMOD and the HomebrewedDB, results are shown in Table 2, where our method consistently performs better than the existing ones on both datasets, demonstrating the superior robustness of our method against occlusion and its generalization to new scenes and cameras.
### Ablation Studies and Analyses
#### 4.2.1 Fine Regression.
We decompose the problem of UDA on estimating object poses into a coarse classification on discretized anchors and a residual regression. As shown in Table 3, for the models trained purely on synthetic data, the pose decomposition design realizes \(4.0\%\) and \(5.7\%\) improvements on the LineMOD and the Occluded LineMOD, respectively, compared to direct regression of object poses, since the global classification eases the learning difficulty while providing feature-scaling robustness, and the local regression achieves pose refinement.
#### 4.2.2 Effects of Cumulative Target Correlation Regularization.
As shown in Table 3, \(\mathcal{L}_{ctc}\) consistently improves the results under different settings across different datasets, _e.g._, a \(5.6\%\) improvement on the Occluded LineMOD for the model trained on synthetic data, which demonstrates the effectiveness of \(\mathcal{L}_{ctc}\) in mining latent correlation across regression targets. We also visualize the feature distribution of an example via t-SNE [20] in Fig. 1, where, with \(\mathcal{L}_{ctc}\), features assigned to different pose anchors preserve the smooth and continuously changing nature of regression targets in the feature space.
#### 4.2.3 Effects of Manifold-Aware Self-Training on Coarse Classification.
Self-training schemes have been verified to be effective in reducing the Sim2Real domain gap by incorporating the unlabeled real data into training via pseudo label generation and training sample selection. Taking our network with \(\mathcal{L}_{ctc}\) as an example, the results are improved from \(55.3\%\) to \(61.4\%\) on the Occluded LineMOD via self-training. Compared to the RSD [3] designed for the problem of UDA on regression, our MAST scheme significantly beats the competing RSD (see results in Table 3), where the only difference lies in replacing self-training on coarse classification with RSD on the whole regression. Such an observation again confirms the superiority of the proposed MAST scheme, which consistently outperforms the state of the art in UDA on regression.
#### 4.2.4 On More Visual Regression Tasks.
We conduct more experiments on the dSprites dataset [16] for assessing UDAVR performance. For simplicity, the problem aims to regress the "scale" variable of a shape from images. Using the same backbone as RSD, under the UDA setting from the scream (S) domain to the noisy (N) domain, our MAST can achieve 0.024 in terms of mean absolute error, while the RSD only obtains 0.043.
#### 4.2.5 Run-time analysis.
On a server with NVIDIA GeForce RTX 3090 GPU, given a 640 \(\times\) 480 image, the run-time of our network is up to 5.8 ms/object including object detection and pose estimation when using Mask R-CNN [14] as detector. Pose estimation takes around 5 ms/object.
#### 4.2.6 Details of output pose.
We employ a render-and-compare style pose refinement process as in [1] to obtain the final object pose. An initial guess pose \([\mathbf{R}_{init},x_{init},y_{init},z_{init}]\) is calculated from the bounding box and the object CAD model using the same strategy as [1]. Given the network output \([\mathbf{R},x,y,z]\), the estimated object pose \([\mathbf{R}_{obj},x_{obj},y_{obj},z_{obj}]\) can be calculated by:
\[\left\{\begin{aligned} \mathbf{R}_{obj}&=\mathbf{R} \cdot\mathbf{R}_{init}\\ x_{obj}&=x+x_{init}\\ y_{obj}&=y+y_{init}\\ z_{obj}&=z\cdot z_{init}\end{aligned}\right., \tag{14}\]
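Eq. (14) amounts to the following small composition step (NumPy sketch; variable names are ours).

```python
# NumPy sketch of Eq. (14): composing the network output with the initial guess pose.
import numpy as np

def compose_with_init(R, x, y, z, R_init, x_init, y_init, z_init):
    R_obj = R @ R_init                                  # rotation is composed multiplicatively
    return R_obj, x + x_init, y + y_init, z * z_init    # z is scaled, x and y are shifted
```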
\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \hline
Pose Estimator & \(\mathcal{L}_{ctc}\) & Method of UDA & LM & LMO \\
\hline
\multicolumn{5}{c}{Data: syn (w/ GT)} \\
Reg. & \(\times\) & - & 75.3 & 44.0 \\
Cls. + Reg. & \(\times\) & - & 79.3 & 49.7 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Ablation studies on the LineMOD (LM) and Occluded LineMOD (LMO) datasets w.r.t. the Average Recall (%) of ADD(-S).
On selecting samples with pseudo pose labels.We choose the probability \(\mathbf{S}_{z}\) as the confidence score in practice. Fig. 3 shows the average recall of selected samples with pseudo pose labels when using \(\mathbf{S}_{\mathbf{R}}\), \(\mathbf{S}_{v_{x}}\), \(\mathbf{S}_{v_{y}}\) or \(\mathbf{S}_{z}\), which tells that, as the confidence threshold becomes larger, only the red line (\(\mathbf{S}_{z}\)) grows in terms of the average recall while the others remain unchanged or decrease.
## 5 Conclusion
This paper proposes a novel and generic manifold-aware self-training scheme for UDA on regression, which is applied to the challenging 6D pose estimation of object instances. We address the UDAVR problem by decomposing it into coarse classification and fine regression, together with a cumulative target correlation regularization. Experimental results on three popular benchmarks verify the effectiveness of our MAST scheme, which outperforms the state-of-the-art methods by significant margins. It is worth pointing out that our MAST scheme can readily be applied to other UDA regression tasks, as the UDA on coarse classification makes our method robust against feature scaling while maintaining the latent cumulative correlation underlying the regression target space.
## Acknowledgments
This work is supported in part by the National Natural Science Foundation of China (Grant No.: 61902131), the Guangdong Youth Talent Program (Grant No.: 2019QN01X246), the Guangdong Basic and Applied Basic Research Foundation (Grant No.: 2022A1515011549), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No.: 2017ZT07X183), and the Guangdong Provincial Key Laboratory of Human Digital Twin (Grant No.: 2022B1212010004).
|
2306.09809 | Generic Selective Independent Families | We prove that the generic maximal independent family obtained by iteratively
forcing with the Mathias forcing relative to diagonalization filters is densely
maximal. Moreover, by choosing the filters with some care one can ensure the
family is selective and hence forcing indestructible in a strong sense. Using
this we prove that under $\mathfrak{p} = 2^{\aleph_0}$ there are selective
independent families and also we show how to add selective independent families
of any desired size. | Vera Fischer, Corey Bacal Switzer | 2023-06-16T12:47:08Z | http://arxiv.org/abs/2306.09809v1 | # Generic selective independent families
###### Abstract.
We prove that the generic maximal independent family obtained by iteratively forcing with the Mathias forcing relative to diagonalization filters is densely maximal. Moreover, by choosing the filters with some care one can ensure the family is selective and hence forcing indestructible in a strong sense. Using this we prove that under \(\mathfrak{p}=2^{\aleph_{0}}\) there are selective independent families and also we show how to add selective independent families of any desired size.
2010 Mathematics Subject Classification: 03E17, 03E35, 03E50 _Acknowledgments:_ The authors would like to thank the Austrian Science Fund (FWF) for the generous support through grant number Y1012-N35.
## 1. Introduction
Recall that a family \(\mathcal{I}\subseteq[\omega]^{\omega}\) is _independent_ if for all finite, disjoint \(\mathcal{A},\mathcal{B}\subseteq\mathcal{I}\) the set \(\bigcap\mathcal{A}\setminus\bigcup\mathcal{B}\) is infinite. Such a family is a _maximal_ independent family or a m.i.f. if it is maximal with this property. Denote by \(\mathsf{FF}(\mathcal{I})\) the collection of finite functions \(h:\mathcal{I}\to 2\) and for all \(h\in\mathsf{FF}(\mathcal{I})\) let \(\mathcal{I}^{h}:=\bigcap_{A\in\operatorname{dom}(h)}A^{h}\) where \(A^{h}=A\) if \(h(A)=0\) and \(A^{h}=\omega\setminus A\) if \(h(A)=1\). Thus \(\mathcal{I}\) is independent if for all \(h\in\mathsf{FF}(\mathcal{I})\)\(\mathcal{I}^{h}\) is infinite. The sets of the form \(\mathcal{I}^{h}\) are called _Boolean combinations_.
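As a purely finite illustration of the notation (the actual definition concerns infinite subsets of \(\omega\)), the following Python snippet computes a Boolean combination \(\mathcal{I}^{h}\) for a three-element family restricted to an initial segment; the particular sets chosen are an assumption for the example only.

```python
# Finite illustration of Boolean combinations I^h; the particular sets are only an example.
def boolean_combination(family, h, universe):
    """family: dict name -> set; h: dict name -> 0/1; universe: ambient finite set."""
    result = set(universe)
    for name, bit in h.items():
        result &= family[name] if bit == 0 else (universe - family[name])
    return result

universe = set(range(32))
family = {'A0': {n for n in universe if n % 2 == 0},
          'A1': {n for n in universe if n % 3 == 0},
          'A2': {n for n in universe if n % 5 == 0}}
# One of the 2^3 Boolean combinations: A0 minus (A1 union A2), restricted to 0..31.
print(sorted(boolean_combination(family, {'A0': 0, 'A1': 1, 'A2': 1}, universe)))
```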
Maximal independent families are one of several important examples of maximal combinatorial sets of reals studied in set theory. Other examples include MAD families, MED families and ultrafilter bases. In each case there is an associated cardinal characteristic: the least size of a maximal family of that type. In the case of m.i.f.'s this cardinal is denoted \(\mathfrak{i}\). See [3] for more information on \(\mathfrak{i}\) and related cardinals. When trying to prove that such a cardinal can be consistently less than the continuum one often needs to construct witnesses which satisfy a stronger maximality condition which can be preserved by iterations of appropriate forcing notions.
In the case of independent families the associated "strongly maximal" families are called _selective independent families_ (defined below). These were first investigated by Shelah in his proof of the consistency of \(\mathfrak{i}<\mathfrak{u}\) in [12], and further studies can be found in e.g. [4, 5, 7, 13]. See in particular [8] where the authors proved that such families can be preserved by any countable support iteration of Cohen preserving, proper forcing notions for which each iterand preserves the dense maximality of the family1. However several aspects of the combinatorics of such families remain unknown including exactly when such families exist. In particular, it is open whether it is consistent with \(\mathsf{ZFC}\) that there are no selective independent families. In this paper we begin the investigation into such questions. Our first main theorem is the following.
**Theorem 1.1** (see Theorem 4.1 below).: \(\mathfrak{p}=2^{\aleph_{0}}\) _implies there are selective maximal independent families._
This result actually follows as a corollary of the main result of the paper. To each independent family \(\mathcal{I}\), maximal or not, there is an associated _diagonalization filter_, \(\mathcal{F}_{\mathcal{I}}\), the choice of which is not unique, so that forcing with the associated Mathias forcing \(\mathbb{M}(\mathcal{F}_{\mathcal{I}})\) adds a real \(m\) so that \(\mathcal{I}\cup\{m\}\) is independent but \(\mathcal{I}\cup\{m,y\}\) is not independent for any ground model \(y\in[\omega]^{\omega}\). It follows that a finite support iteration of such Mathias forcing notions of any length of uncountable cofinality, adding the generic real to the independent family at each step, generically adds a maximal independent family. Now we can state the main result of this paper.
**Theorem 1.2**.: _In the finite support iteration described above, the diagonalization filters can be chosen so that the m.i.f. produced at the end is selective. More explicitly, any finite support iteration of Mathias forcings relative to diagonalization filters produces a densely maximal family whose density filter is a \(P\)-filter, and the diagonalization filters can be chosen so that the density filter is a \(Q\)-filter as well._
While the the wording above is somewhat technical, the moral is that the obvious way of producing a m.i.f. generically actually produces one which satisfies a stronger maximality condition making its maximality forcing invariant (for appropriate forcing notions). An immediate corollary of Theorem 1.2 is the following.
**Theorem 1.3** (see Theorem 4.2 below).: _Let \(\kappa\leq\lambda\) be cardinals of uncountable cofinality. It is consistent that \(\lambda=2^{\aleph_{0}}\) and there is a selective independent family of size \(\kappa\). If moreover \(\kappa\) is regular then we can arrange \(\mathfrak{i}=\kappa\) as well._
Prior to the current work, the above was only known for the special case \(\kappa=\aleph_{1}\).
The rest of this paper is organized as follows. In the next section we recall some preliminaries we will need. Section 3 provides the proof of Theorem 1.2. Section 4 proves some corollaries and additional results including the proof of Theorems 1.1 and 1.3. The paper closes with some relevant questions and lines for further research. Throughout our notation is mostly standard, conforming to that of [11]. For all undefined notions involving cardinal characteristics and set theory of the real line we refer the reader to [3] or the monograph [1].
**Acknowledgements.** The authors thank Juris Steprans for many helpful conversations on the content relating to this paper as well as for allowing us to include Proposition 2.9. They also thank Oswaldo Guzman for pointing out Lemmas 6.6 and 6.7 of [9].
## 2. Preliminaries
Let \(\mathcal{I}\) be an independent family. We say that \(\mathcal{I}\) is _densely maximal_ if for every \(X\in[\omega]^{\omega}\) and every \(h\in\mathsf{FF}(\mathcal{I})\) there is an \(h^{\prime}\supseteq h\) in \(\mathsf{FF}(\mathcal{I})\) so that either \(\mathcal{I}^{h^{\prime}}\setminus X\) or \(\mathcal{I}^{h^{\prime}}\cap X\) is finite. In other words, \(\mathcal{I}\) is densely maximal if for any \(X\notin\mathcal{I}\) the collection of \(h\in\mathsf{FF}(\mathcal{I})\) witnessing that \(X\) cannot be added to \(\mathcal{I}\) while preserving maximality is dense in the partial order \((\mathsf{FF}(\mathcal{I}),\supseteq)\).
The _density filter_ of an independent family is the collection of all \(X\in[\omega]^{\omega}\) so that for every \(h\in\mathsf{FF}(\mathcal{I})\) there is an \(h^{\prime}\supseteq h\) in \(\mathsf{FF}(\mathcal{I})\) so that \(\mathcal{I}^{h^{\prime}}\setminus X\) is finite. Denote this filter by \(\operatorname{fil}(\mathcal{I})\). Densely maximal independent families are characterized by _the partition property_, explained below.
**Fact 2.1** (The Partition Property).: _Let \(\mathcal{I}\) be an independent family. The following are equivalent:_
1. \(\mathcal{I}\) _is densely maximal._
2. \(P(\omega)=\operatorname{fil}(\mathcal{I})\cup\langle\omega\setminus\mathcal{I}^ {h}\mid h\in\mathsf{FF}(\mathcal{I})\rangle_{dn}\) _where_ \(\langle\mathcal{X}\rangle_{dn}\) _denotes the downward closure of_ \(\mathcal{X}\subseteq[\omega]^{\omega}\) _under_ \(\supseteq^{*}\)_._
Some more facts about the density filter are listed below. The following are easily verified, see [2, Lemma 5.5].
**Lemma 2.2**.:
1. _If_ \(\mathcal{I}^{\prime}\) _is an independent family and_ \(\mathcal{I}\subseteq\mathcal{I}^{\prime}\) _then_ \(\operatorname{fil}(\mathcal{I})\subseteq\operatorname{fil}(\mathcal{I}^{ \prime})\)_;_
2. _If_ \(\kappa\) _is a regular uncountable cardinal and_ \(\langle\mathcal{I}_{\alpha}\mid\alpha<\kappa\rangle\) _is a continuous increasing chain of independent families then_ \(\operatorname{fil}(\bigcup_{\alpha<\kappa}\mathcal{I}_{\alpha})=\bigcup_{ \alpha<\kappa}\operatorname{fil}(\mathcal{I}_{\alpha})\)_;_
3. _If_ \(\mathcal{I}\) _is an independent family then_ \(\operatorname{fil}(\mathcal{I})=\bigcup\{\operatorname{fil}(\mathcal{J})\mid \mathcal{J}\in[\mathcal{I}]^{\leq\omega}\}\)_._
We will need another type of filter associated to an independent family as well, which we define now.
**Definition 2.3**.: Let \(\mathcal{I}\) be an independent family. A _diagonalization filter_ for \(\mathcal{I}\) is any filter \(\mathcal{F}\) on \(\omega\) which is maximal with respect to the property that all \(X\in\mathcal{F}\) have infinite intersection with every \(\mathcal{I}^{h}\).
Note that in general diagonalization filters will not be unique. For instance, if \(\mathcal{I}\) is not maximal then there is some real \(y\notin\mathcal{I}\) so that \(\mathcal{I}\cup\{y\}\) is independent and, by the definition of independence it follows that both \(y\) and \(\omega\setminus y\) have infinite intersection with every Boolean combination of \(\mathcal{I}\). As such there are diagonalization filters containing both \(y\) and \(\omega\setminus y\) (which obviously therefore cannot be the same). However, it is an easy but key observation that _any_\(y\) which has infinite intersection with every Boolean combination of a given independent family \(\mathcal{I}\) must have infinite intersection with \(\mathcal{I}^{h}\cap Z\) for every element \(Z\in\operatorname{fil}(\mathcal{I})\) and every \(h\in\mathsf{FF}(\mathcal{I})\). Hence if \(\mathcal{F}\) is a diagonalization filter then \(\operatorname{fil}(\mathcal{I})\subseteq\mathcal{F}\). In the case of dense maximality we have the converse.
**Lemma 2.4** (Essentially Fischer-Montoya, see [7]).: _Let \(\mathcal{I}\) be independent. The following are equivalent._
1. \(\mathcal{I}\) _is densely maximal._
2. _The diagonalization filter is unique and equals the density filter._
Proof.: Item (1) implies item (2) is Corollary 36 of [7]. Let us show that (2) implies (1). Thus suppose that \(\operatorname{fil}(\mathcal{I})\) is a diagonalization filter. Note that it must be the unique one since any other diagonalization filter extends it but, being a diagonalization filter it is maximal. We will show that \(\mathcal{I}\) is densely maximal. By the partition property, Fact 2.1, it suffices to show that
\[P(\omega)=\operatorname{fil}(\mathcal{I})\cup\langle\omega\setminus\mathcal{I }^{h}\mid h\in\mathsf{FF}(\mathcal{I})\rangle_{dn}\]
Suppose that \(X\notin\operatorname{fil}(\mathcal{I})\). Since \(\operatorname{fil}(\mathcal{I})\) is the unique diagonalization filter for \(\mathcal{I}\), it follows that \(X\) is not in any diagonalization filter. This means in particular that it does not have infinite intersection with every Boolean combination of \(\mathcal{I}\) (otherwise we could apply Zorn's Lemma to the filter generated by \(X\) to get a diagonalization filter which contains \(X\)). Therefore there is an \(h\in\mathsf{FF}(\mathcal{I})\) so that \(X\) is almost disjoint from \(\mathcal{I}^{h}\). But then \(X\) is almost included in \(\omega\setminus\mathcal{I}^{h}\) as needed.
We will need to force with the Mathias forcing of diagonalization filters so we recall this now.
**Definition 2.5**.: Let \(\mathcal{I}\) be an independent family and let \(\mathcal{F}\) be a diagonalization filter for \(\mathcal{I}\). Denote by \(\mathbb{M}(\mathcal{F})\) the Mathias forcing relativized to \(\mathcal{F}\). A condition for this forcing notion is a pair \((s,A)\) so that
1. \(s\in 2^{<\omega}\) is the characteristic function of a finite set of natural numbers.
2. \(A\in\mathcal{F}\)
3. \(\min(A)\geq\operatorname{dom}(s)\).
If \((s,A)\) and \((t,B)\) are conditions in \(\mathbb{M}(\mathcal{F})\) then we let \((s,A)\leq(t,B)\) just in case:
1. \(s\supseteq t\)
2. \(A\subseteq B\)
3. If \(n\in\operatorname{dom}(s)\setminus\operatorname{dom}(t)\) and \(s(n)=1\) then \(n\in B\).
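As a toy illustration of the ordering (not of the forcing itself), the Python snippet below checks clauses (1) and (3) for stems given as 0/1 strings; clause (2) and the requirement \(\min(A)\geq\operatorname{dom}(s)\) concern infinite sets and are only assumed here.

```python
# Toy check of clauses (1) and (3) of the ordering; the side sets A, B are modelled
# by a membership predicate, and clause (2) together with min(A) >= dom(s) is not checked.
def extends(s, t, B):
    """Can a condition with stem s extend one with stem t and side set B?"""
    if not s.startswith(t):                                          # (1): s end-extends t
        return False
    return all(B(n) for n in range(len(t), len(s)) if s[n] == '1')   # (3)

in_evens = lambda n: n % 2 == 0
print(extends('0101', '01', in_evens))   # False: 3 enters the stem but 3 is not in B
print(extends('0100', '01', in_evens))   # True: no new elements were added to the stem
```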
If \(\mathcal{F}\) is clear from context, or unimportant we often write \(\mathbb{M}(\mathcal{I})\) to emphasize the independent family. If \(G\subseteq\mathbb{M}(\mathcal{I})\) is generic then the real \(m_{G}:=\{n\mid\exists(s,A)\in G\,s(n)=1\}\) is denoted _the Mathias real_ for \(\mathcal{I}\) (or \(\mathcal{F}\) depending on the context). It is readily checked that \(m\) is an infinite set of natural numbers which _diagonalizes_\(\mathcal{F}\) i.e. \(m\subseteq^{*}A\) for each \(A\in\mathcal{F}\) where \(X\subseteq^{*}Y\) means that \(X\setminus Y\) is finite. Also \(V[G]=V[m_{G}]\). Moreover we have that \(\mathcal{I}\cup\{m_{G}\}\) is independent but for any ground model \(y\in[\omega]^{\omega}\cap V\) we have \(\mathcal{I}\cup\{m_{G},y\}\) is not independent, see [7]. It follows that an iteration with finite support of Mathias forcing relativized to the increasing independent families obtained by adjoining the generic Mathias reals (of length an uncountably cofinal ordinal) will result in a model with a m.i.f. of length the iteration. Let us call such a m.i.f. a _generic m.i.f._. That such a filter exists, and produces a maximal independent family was first observed by Brendle, see [10]. Indeed by performing such an iteration of length \(\aleph_{1}<2^{\aleph_{0}}\) Brendle obtained the first model of \(\mathfrak{i}<2^{\aleph_{0}}\).
As stated in the introduction, the main goal of this paper is to improve this result by showing that such an iteration produces a _selective_ maximal independent family. Selectivity is a further strengthening of dense maximality. We recall some more definitions.
**Definition 2.6**.: Let \(\mathcal{F}\) be a family of subsets of \(\omega\). We say that:
1. \(\mathcal{F}\) is a \(P\)_-set_ if every countable family \(\{A_{n}\mid n<\omega\}\subseteq\mathcal{F}\) has a pseudointersection \(B\in\mathcal{F}\), i.e. \(B\subseteq^{*}A_{n}\) for all \(n<\omega\),
2. \(\mathcal{F}\) is a \(Q\)_-set_ if for every partition of \(\omega\) into finite sets \(\{I_{n}\mid n<\omega\}\) there is a _semiselector_ \(A\in\mathcal{F}\), i.e. \(|A\cap I_{n}|\leq 1\) for all \(n<\omega\),
3. \(\mathcal{F}\) is _Ramsey_ if it is both a \(P\)-set and a \(Q\)-set.
If \(\mathcal{F}\) is a filter and a \(P\)-set (respectively a \(Q\)-set, Ramsey set) we call \(\mathcal{F}\) a \(P\)-filter (respectively a \(Q\)-filter, Ramsey filter).
We can now give the definition of a selective maximal independent family.
**Definition 2.7**.: An independent family \(\mathcal{I}\) is _selective_ if it is densely maximal and \(\operatorname{fil}(\mathcal{I})\) is Ramsey.
Selective independent families were first introduced by Shelah in [12], where it is shown in the course of his proof of the consistency of \(\mathfrak{i}<\mathfrak{u}\) that under \(\mathsf{CH}\) there is a selective independent family. Since then the following results have been shown concerning the forcing indestructibility of selective independent families.
**Fact 2.8**.: _Let \(\mathcal{I}\) be a selective independent family. Then \(\mathcal{I}\) remains selective (and hence maximal) after forcing with a countable support product of Sacks forcing (Shelah, see [4, Theorem 4.6] or [7, Corollary 37]). Moreover, \(\mathcal{I}\) remains selective independent after forcing with the countable support iteration of any of the following:_
1. _Sacks forcing (Shelah, see_ _[_4_, Theorem 4.6]_ _or_ _[_7_, Corollary 37]__);_
2. _forcing notions of the form_ \(\mathbb{Q}_{\mathcal{I}}\) _from Shelah's_ _[_12_]__;_
3. _Miller partition forcing (see_ _[_6_]__);_
4. \(h\)_-Perfect Tree Forcing Notions for different functions_ \(h:\omega\to\omega\) _with_ \(1<h(n)<\omega\) _for all_ \(n<\omega\) _(see_ _[_13_]__);_
5. _Coding with perfect trees (see_ _[_2_]__);_
6. _Miller line forcing (see_ _[_8_, Theorem 4.1]__);_
7. _any mix of the above (a consequence of_ _[_8_, Theorem 3.8]__)._
Obviously all the forcing constructions described above produce a model where the selective independent family is of size \(\aleph_{1}\). As stated in the introduction, the purpose of this paper in part is to show that consistently there are selective families of other sizes.
We finish these preliminaries by remarking that there are in fact maximal, non-densely maximal independent families. This is due to Juris Steprans and is included with his kind permission.
**Proposition 2.9** (Steprans).: _For any cardinal \(\kappa\) for which there is a maximal independent family there is a maximal, non densely maximal independent family of size \(\kappa\). In particular there is always one of size \(\mathfrak{i}\)._
Proof.: Fix a cardinal \(\kappa\) for which there is a maximal independent family. Let \(Z\subseteq\omega\) be an infinite, co-infinite set. By translation there is a maximal independent family \(\mathcal{I}=\{A_{\alpha}\mid\alpha<\kappa\}\) on \(Z\) (so all the \(A_{\alpha}\)'s are subsets of \(Z\)). Let \(\mathcal{J}=\{B_{\alpha}\mid\alpha<\kappa\}\) be an independent, but not maximal independent family on \(\omega\setminus Z\). Note that there is always an independent family of size continuum ([3, Proposition 8.9]) so in particular there is one of size \(\kappa\). Finally let \(\mathcal{K}=\{A_{\alpha}\cup B_{\alpha}\mid\alpha<\kappa\}\cup\{Z\}\). It is routine to check that this is independent. Moreover if \(X\notin\mathcal{K}\) then either \(X\cap Z\) is finite, in which case \(\mathcal{K}\cup\{X\}\) is not independent, or \(X\cap Z=A_{\alpha}\) for some \(\alpha\), in which case \(\mathcal{K}\cup\{X\}\) is not independent, or else there is a Boolean combination \(h\) on \(\mathcal{I}\) witnessing that \(X\cap Z\) cannot be added to \(\mathcal{I}\) and hence \(\mathcal{K}\cup\{X\}\) is not independent. Therefore \(\mathcal{K}\) is maximal.
Finally let \(Y\subseteq\omega\setminus Z\) be a set so that \(Y\notin\mathcal{J}\) but \(\mathcal{J}\cup\{Y\}\) is independent. Observe that as a result no Boolean combination \(h\in\mathsf{FF}(\mathcal{K})\) extending \(\langle Z,1\rangle\) will be such that \(\mathcal{K}^{h}\setminus Y\) or \(\mathcal{K}^{h}\cap Y\) is finite. Thus \(\mathcal{K}\) is not densely maximal.
This counterexample should be contrasted with the combined content of Lemmas 6.6 and 6.7 of [9] where it is shown that every maximal independent family is densely maximal below some Boolean combination i.e. for each m.i.f. \(\mathcal{I}\) there is an \(h\in\mathsf{FF}(\mathcal{I})\) so that the set \(\{A\cap\mathcal{I}^{h}\mid A\in\mathcal{I}\setminus\operatorname{dom}(h)\}\) is densely maximal as a family on \(\mathcal{I}^{h}\). Note that as a corollary of this it follows that in \(\mathsf{ZFC}\) there are densely maximal independent families and indeed they exist in every cardinality for which there is a maximal independent family.
## 3. The Selectivity of the Generic Maximal Independent Family
Next we prove Theorem 1.2. Let us fix some notation for the rest of this section. Let \(\kappa\) be an ordinal of uncountable cofinality. Let \(\mathcal{I}_{0}\) be some fixed independent family and
\(\mathcal{F}_{0}\) be a diagonalization filter. Now inductively let \(\langle\mathbb{P}_{\alpha},\dot{\mathbb{Q}}_{\alpha}\mid\alpha<\kappa\rangle\) be a finite support iteration and \(\dot{\mathcal{I}}_{\alpha}\) and \(\dot{\mathcal{F}}_{\alpha}\) be \(\mathbb{P}_{\alpha}\)-names defined as follows.
1. \(\mathbb{P}_{0}\) is the trivial forcing, \(\dot{\mathbb{Q}}_{0}\) is the trivial name for \(\mathbb{M}(\mathcal{F}_{0})\).
2. \(\mathbb{P}_{\alpha}\) forces that \(\dot{\mathcal{I}}_{\alpha}\) is an independent family with a diagonalization filter \(\dot{\mathcal{F}}_{\alpha}\).
3. \(\mathbb{P}_{\alpha+1}=\mathbb{P}_{\alpha}*\mathbb{M}(\dot{\mathcal{F}}_{\alpha})\).
4. \(\mathbb{P}_{\alpha+1}\) forces that \(\dot{\mathcal{I}}_{\alpha+1}=\dot{\mathcal{I}}_{\alpha}\cup\{\dot{m}_{\alpha}\}\) where \(\dot{m}_{\alpha}\) is the name for the \(\mathbb{M}(\dot{\mathcal{F}}_{\alpha})\)-generic real.
5. If \(\beta\) is a limit ordinal then \(\mathbb{P}_{\beta}\) forces that \(\dot{\mathcal{I}}_{\beta}=\bigcup_{\gamma<\beta}\dot{\mathcal{I}}_{\gamma}\).
Let \(\dot{\mathcal{I}}_{\kappa}\) be the \(\mathbb{P}_{\kappa}\)-name for the union of the \(\dot{\mathcal{I}}_{\alpha}\)'s. Let \(G_{\kappa}\subseteq\mathbb{P}_{\kappa}\) be generic over \(V\) and, in \(V[G_{\kappa}]\) let \(\mathcal{I}_{\alpha}\), \(\mathcal{F}_{\alpha}\), \(m_{\alpha}\) etc refer to the evaluation of all of the corresponding names with the dots. Finally let for each \(\alpha<\kappa\) the generic \(G_{\alpha}=G_{\kappa}\cap\mathbb{P}_{\alpha}\) as usual. For the rest of this section we fix all such objects. We refer to \(\mathcal{I}_{\kappa}\) (in \(V[G_{\kappa}]\)) as _the generic m.i.f._. Let us now restate Theorem 1.2 more precisely.
**Theorem 3.1**.: \(\mathbb{P}_{\kappa}\) _forces that the generic m.i.f. is densely maximal and its density filter is a \(P\)-filter. Moreover, if \(\kappa\) is a cardinal and \(\kappa^{<\kappa}=\kappa\) in the ground model then the \(\dot{\mathcal{F}}_{\alpha}\)'s can be chosen so that the generic m.i.f. is selective._
It is unclear whether the above can be improved so as to eliminate the need to choose the filters carefully in order to ensure that \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(Q\)-filter. We will discuss this more later. To prove Theorem 3.1 we need to show three things: that \(\mathcal{I}_{\kappa}\) is densely maximal, that its density filter is a \(P\)-filter, and that, given good enough bookkeeping, its density filter is a \(Q\)-filter. We will prove each of these separately, beginning with dense maximality.
### Dense Maximality
Again, we fix the notation described in the first paragraph of this section.
**Lemma 3.2**.: \(\mathbb{P}_{\kappa}\) _forces that the generic m.i.f. is densely maximal._
Proof.: By Lemma 2.4 it suffices to show that if \(X\in V[G_{\kappa}]\) has infinite intersection with every Boolean combination then it is in the density filter of \(\mathcal{I}_{\kappa}\) as this implies that the density filter is the unique diagonalization filter and hence \(\mathcal{I}_{\kappa}\) is densely maximal. So suppose that \(X\in V[G_{\kappa}]\) has infinite intersection with every Boolean combination and let \(h\in\mathsf{FF}(\mathcal{I})\). We need to find an \(h^{\prime}\supseteq h\) so that \(\mathcal{I}_{\kappa}^{h^{\prime}}\setminus X\) is finite. Let \(\alpha<\kappa\) be such that \(X,h\in V[G_{\alpha}]\).
Case 1: There is a \(\beta\geq\alpha\) so that \(X\) is forced to be in \(\dot{\mathcal{F}}_{\beta}\), the diagonalization filter used at stage \(\beta\). Now the generic real \(m_{\beta}\in\mathcal{I}_{\kappa}\) is a pseudointersection of this filter and in particular \(m_{\beta}\subseteq^{*}X\). But then we get that if \(h^{\prime}=h\cup\langle m_{\beta},0\rangle\) then \(\mathcal{I}_{\beta}^{h^{\prime}}\subseteq^{*}m_{\beta}\subseteq^{*}X\) so \(h^{\prime}\) is as needed. Note that this was valid since \(h\in V[G_{\alpha}]\) and so in particular \(\beta\notin\operatorname{dom}(h)\).
Case 2: \(X\) is not forced to be in any diagonalization filter at any stage \(\beta\geq\alpha\). Work in \(V[G_{\alpha}]\) and let \(\mathcal{F}_{\alpha}\) be the choice of diagonalization filter for \(\mathcal{I}_{\alpha}\). Note that the assumption implies in particular that \(X\) is not in \(\mathcal{F}_{\alpha}\). Since \(\mathcal{F}_{\alpha}\) is maximal with the property that every element has infinite intersection with every Boolean combination of \(\mathcal{I}_{\alpha}\), there must be a \(Y\in\mathcal{F}_{\alpha}\) and a Boolean combination \(g\in\mathsf{FF}(\mathcal{I}_{\alpha})\) so that \(X\cap Y\cap\mathcal{I}_{\alpha}^{g}\) is finite. But now note that \(\dot{m}_{\alpha}\) is forced to be an almost subset of \(Y\); thus we get that \(V[G_{\alpha+1}]\models\)"\(X\cap\dot{m}_{\alpha}^{G_{\alpha+1}}\cap\mathcal{I}_{\alpha}^{g}\) is finite". But \(\dot{m}_{\alpha}^{G_{\alpha+1}}\cap\mathcal{I}_{\alpha}^{g}\) is a Boolean combination of \(\mathcal{I}_{\kappa}\), contradicting the defining property of \(X\).
Having established dense maximality we go on to consider the property of the density filter being a \(P\)-filter.
### \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(P\)-Filter
We continue with our notation outlined above.
**Lemma 3.3**.: _The density filter of the generic m.i.f. added by \(\mathbb{P}_{\kappa}\) is a \(P\)-filter._
Proof.: Let \(\{\dot{A}_{n}\ |\ n<\omega\}\) name an \(\omega\)-sequence of elements of \(\operatorname{fil}(\mathcal{I}_{\kappa})\). By the fact that \(\kappa\) has uncountable cofinality there is a \(\gamma<\kappa\) so that \(\dot{A}_{n}\in V[G_{\gamma}]\) and, moreover, by Lemma 2.2 we can find such a \(\gamma\) so that \(\dot{A}_{n}\in\operatorname{fil}(\mathcal{I}_{\gamma})\) for all \(n<\omega\). Work in such a \(V[G_{\gamma}]\) and let \(A_{n}\) be the evaluation of \(\dot{A}_{n}\) in this model.
Since \(A_{n}\in\operatorname{fil}(\mathcal{I}_{\gamma})\) for each \(n<\omega\) we must have that for each \(n<\omega\) the set \(A_{n}\) is in every diagonalization filter we choose from stage \(\gamma\) on, again by Lemma 2.2. Consequently for all \(\xi>\gamma\) we have that \(m_{\xi}\subseteq^{*}A_{n}\) for all \(n<\omega\). In particular (working now in \(V[G_{\gamma+\omega}]\)) we have \(m_{\gamma+n}\subseteq^{*}A_{m}\) for all \(n,m<\omega\). For each \(k<\omega\) let \(l_{k}(n)\) be such that \(m_{\gamma+n}\setminus l_{k}(n)\subseteq A_{k}\). Let \(f:\omega\to\omega\) dominate all of the \(l_{k}\)'s. Finally set
\[B=\bigcup_{n<\omega}(m_{\gamma+n}\setminus f(n))\]
We claim that \(B\subseteq^{*}A_{n}\) for each \(n<\omega\) and \(B\) is forced to be in the density filter for \(\mathcal{I}_{\kappa}\), which completes the proof. For the first part fix \(k<\omega\) and let \(m\) be such that for all \(n>m\) we have \(f(n)>l_{k}(n)\). Now we have
\[B=\bigcup_{n\leq m}(m_{\gamma+n}\setminus f(n))\cup\bigcup_{n>m}(m_{\gamma+n }\setminus f(n))\]
Observe that \(\bigcup_{n\leq m}m_{\gamma+n}\subseteq^{*}A_{k}\) since it is a finite union of almost subsets of \(A_{k}\) and \(\bigcup_{n>m}m_{\gamma+n}\setminus f(n)\subseteq A_{k}\) (true inclusion - not mod finite) since \(f(n)>l_{k}(n)\) and by definition of \(l_{k}\) we have that \(m_{\gamma+n}\setminus l_{k}(n)\subseteq A_{k}\). Putting these two observations together proves the first part of our claim, namely that \(B\subseteq^{*}A_{k}\).
For the second part let \(h\in\mathsf{FF}(\mathcal{I}_{\kappa})\). Since \(h\) is finite there is an \(n<\omega\) so that \(m_{\gamma+n}\notin\operatorname{dom}(h)\). Fix such an \(n<\omega\) and let \(h^{\prime}=h\cup\langle m_{\gamma+n},0\rangle\). Now \(\mathcal{I}_{\kappa}^{h^{\prime}}=\mathcal{I}_{\kappa}^{h}\cap m_{\gamma+n} \subseteq m_{\gamma+n}\subseteq^{*}B\) so \(B\) is in the density filter as needed.
_Remark 1_.: By one of the results of [7] we know that forcing with \(\mathbb{M}(\mathcal{I})\) for \(\mathcal{I}\) selective adds a dominating real. It follows that, once we have proved Lemma 3.9 below and hence Theorem 3.1, in Mathias iterations as we have been describing we get that \(\mathfrak{b}=\mathfrak{d}=\operatorname{cof}(\kappa)\). The existence of such dominating reals allows us to strengthen the argument for Lemma 3.3 (since the only place we used countability was to get a dominating function), so we will have shown that in fact, assuming we choose the diagonalization filters correctly, \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(P_{\operatorname{cof}(\kappa)}\)-filter, i.e. any \(<\!\operatorname{cof}(\kappa)\)-many elements have a pseudointersection in the filter.
The proof of Lemma 3.3 actually shows that a basis for the density filter of \(\mathcal{I}_{\kappa}\) in \(V[G_{\kappa}]\) is given by simply \(\bigcup_{n<\omega}(m_{\xi_{n}}\setminus f(n))\) for functions \(f:\omega\to\omega\) and elements \(m_{\xi_{n}}\in\mathcal{I}_{\kappa}\) (with infinitely many distinct). Extracting from this we get the following.
**Lemma 3.4**.: _Let \(A\in[\omega]^{\omega}\cap V[\mathcal{I}_{\kappa}]\). The following are equivalent._
1. _There is a_ \(\gamma<\kappa\) _so that for all_ \(\alpha\in(\gamma,\kappa)\) _we have_ \(m_{\alpha}\subseteq^{*}A\)_._
2. _There are strictly increasing ordinals_ \(\xi_{n}<\kappa\) _for_ \(n<\omega\) _so that for all_ \(n<\omega\) _we have_ \(m_{\xi_{n}}\subseteq^{*}A\)_._
3. \(A\in\operatorname{fil}(\mathcal{I}_{\kappa})\)_._
Proof.: Fix \(A\) as above and work in \(V[\mathcal{I}_{\kappa}]\). Since (1) is an obvious strengthening of (2) we have (1) implies (2) and (2) implies (3) is exactly as in the proof of Lemma 3.3. Thus it suffices to show that (3) implies (1). Observe that by the ccc there is a \(\gamma<\kappa\) so that \(A\in[\omega]^{\omega}\cap V[\mathcal{I}_{\gamma}]\). Moreover since \(A\in\operatorname{fil}(\mathcal{I}_{\kappa})\) we can assume without loss of generality that \(A\in\operatorname{fil}(\mathcal{I}_{\gamma})\) (the \(\gamma\) where \(A\) first appears might be before the one in which \(A\) ends up in the density filter but we just take the latter in this case). Now we get that \(A\in\mathcal{F}_{\alpha}\) for each \(\alpha>\gamma\) hence \(m_{\alpha}\subseteq^{*}A\) for every such \(\alpha\) since \(m_{\alpha}\) is a pseudointersection of \(\mathcal{F}_{\alpha}\).
### \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(Q\)-Filter
Finally we will show that \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(Q\)-filter when the diagonalization filters \(\mathcal{F}_{\alpha}\) are chosen carefully enough. As noted in the hypotheses of Theorem 3.1, we will eventually need that \(\kappa\) is a cardinal and \(\kappa^{<\kappa}=\kappa\) but we will state this explicitly when we need it. For now we proceed with the notation given above. We will use the following characterization of \(Q\)-filters.
**Fact 3.5** (See Lemma 3.7 of [6]).: _Let \(\mathcal{F}\) be a filter on \(\omega\). The following are equivalent._
1. \(\mathcal{F}\) _is a_ \(Q\)_-filter._
2. _For every strictly increasing_ \(f:\omega\to\omega\) _there is an_ \(A\in\mathcal{F}\) _so that, letting_ \(A=\{k(n)\}_{n<\omega}\) _be the increasing enumeration of_ \(A\)_,_ \(f(k(n))<k(n+1)\)_._
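For instance, if \(f(n)=n^{2}\) then the set \(A=\{2,5,26,677,\dots\}\) given by \(k(0)=2\) and \(k(n+1)=k(n)^{2}+1\) satisfies the displayed inequality for this \(f\), since \(f(k(n))=k(n)^{2}<k(n)^{2}+1=k(n+1)\); property (2) asks for such an \(A\) inside the filter \(\mathcal{F}\) for every strictly increasing \(f\).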
Moving forward, in the pursuit of brevity, if \(f\) and \(A\) have the property described in (2) above we will say that \(A\)_Q-dominates_\(f\). We need one more fact.
**Lemma 3.6**.: _In \(V[G_{\alpha}]\) there is a diagonalization filter \(\mathcal{F}^{\prime}\supseteq\mathcal{F}_{\alpha}\) for \(\mathcal{I}_{\alpha+1}\). Consequently we can always choose the diagonalization filters for the iteration to be a \(\subseteq\)-strictly increasing sequence._
Proof.: Work in \(V[G_{\alpha}]\) and fix \(X\in\mathcal{F}_{\alpha}\). It suffices to show that, under the hypotheses, \(\Vdash_{\mathbb{M}(\mathcal{F}_{\alpha})}``\check{X}\cap\dot{\mathcal{I}}_{\alpha+1}^{h}\) is infinite for every \(h\in\mathsf{FF}(\dot{\mathcal{I}}_{\alpha+1})"\), as in this case every \(X\in\mathcal{F}_{\alpha}\) has infinite intersection with every Boolean combination of \(\mathcal{I}_{\alpha+1}\) (note that we are assuming the maximal condition forces this), so we can extend \(\mathcal{F}_{\alpha}\) to some diagonalization filter for \(\mathcal{I}_{\alpha+1}\). Fix an arbitrary condition \((s,A)\) and an arbitrary \(h\in\mathsf{FF}(\mathcal{I}_{\alpha+1})\). If \(m_{\alpha}\notin\operatorname{dom}(h)\) then by hypothesis \(X\cap\mathcal{I}_{\alpha+1}^{h}\) is infinite, so assume \(m_{\alpha}\in\operatorname{dom}(h)\) and let \(h^{\prime}=h\setminus\{\langle m_{\alpha},h(m_{\alpha})\rangle\}\). We need to show that for each \(n<\omega\) and each \(i<2\) there is a \(k>n\) with \(k\in\mathcal{I}_{\alpha}^{h^{\prime}}\cap X\cap m_{\alpha}^{i}\). Fix \(n<\omega\) and \(i<2\). Without loss of generality we can assume that \(\operatorname{dom}(s)\supsetneq n\) and \(A\subseteq X\), as the set of conditions whose stem has domain containing \(n\) is dense and, since \(X\in\mathcal{F}_{\alpha}\), we can always replace \(A\) by \(A\cap X\) if we so choose. Note that again \(A\cap\mathcal{I}_{\alpha}^{h^{\prime}}\) is infinite, and therefore there is in particular a \(k\in A\cap\mathcal{I}_{\alpha}^{h^{\prime}}\) with \(k>n\), since \((s,A)\) is a condition and therefore \(\min(A)\geq\operatorname{dom}(s)>n\). Let \(s^{\prime}\supseteq s\) be such that \(s^{\prime}(k)=i\). Then \((s^{\prime},A\setminus\operatorname{dom}(s^{\prime}))\) is a condition extending \((s,A)\) which forces that \(k\in\mathcal{I}_{\alpha}^{h^{\prime}}\cap A\cap m_{\alpha}^{i}\subseteq\mathcal{I}_{\alpha}^{h^{\prime}}\cap X\cap m_{\alpha}^{i}\), so we are done.
**Lemma 3.7**.: _If \(\gamma<\kappa\) and \(f\in\omega^{\omega}\cap V[G_{\gamma}]\) is strictly increasing, then there is a choice of diagonalization filters \(\dot{\mathcal{F}}_{\gamma+i}\) for \(i\in\omega+\omega\) so that (using this choice of filters for the Mathias forcing notions) we have that in \(V[G_{\gamma+\omega+\omega}]\) there is an \(A\in\operatorname{fil}(\mathcal{I}_{\gamma+\omega+\omega})\) which \(Q\)-dominates \(f\)._
Proof.: Fix \(\gamma\) and \(f\) as in the hypothesis of the lemma. Observe by the finiteness of the support, a density argument ensures that for every \(k<\omega\) there is an \(n<\omega\) so that \(\min(m_{\gamma+n})>k\). Using this, inductively define \(A=\{k(n)\}_{n<\omega}\) so that \(k(n+1)=\min(m_{\gamma+n_{n+1}})\) where \(n_{n+1}\) is the least number \(l\) so that \(\min(m_{\gamma+l})>f(k(n))\). Clearly
\(A\), which is in \(V[G_{\gamma+\omega}]\), \(Q\)-dominates \(f\) so it remains to show that the next \(\omega\)-many diagonalization filters can be chosen so that \(A\) is forced to be in \(\operatorname{fil}(\mathcal{I}_{\gamma+\omega+\omega})\).
Work in \(V[G_{\gamma+\omega}]\).
**Claim 3.8**.: \(A\) _has infinite intersection with every Boolean combination in \(\mathcal{I}_{\gamma+\omega}\)._
Proof of Claim 3.8.: This is a density argument. Suppose that \(k<\omega\) and \(h\in\mathsf{FF}(\mathcal{I}_{\gamma+\omega})\). Let \(n\) be large enough that \(\operatorname{dom}(h)\subseteq\mathcal{I}_{\gamma+n}\) and let \(m\) be such that the first \(m\) entries of \(A\) are the minima of the sets \(m_{\gamma+l}\) for \(l<n\), and denote these elements \(\{k(j)\}_{j<m}\). Without loss of generality we can assume that \(m>k\). Work in \(V[G_{\gamma+n}]\). We now pick \(a\in\mathcal{I}_{\gamma+n}^{h}\) with \(a>f(k(m-1))\). Since this set is infinite such an \(a<\omega\) exists. Finally let \(s\in 2^{<\omega}\) be the sequence of length \(a+1\) so that for all \(b<a\) we have \(s(b)=0\) and \(s(a)=1\). Clearly, regardless of the choice of \(\mathcal{F}_{\gamma+n}\) we have that \((s,\omega)\in\mathbb{M}(\mathcal{F}_{\gamma+n})\) and forces that the \(m^{\text{th}}\) element of \(A\) is in \(\mathcal{I}_{\gamma+n}^{h}\setminus k\). Since \(k\), \(h\) and \(n\) were arbitrary, the proof is complete.
Given Claim 3.8, observe that we can put \(A\) into \(\mathcal{F}_{\xi}\) for \(\xi\in[\gamma+\omega,\gamma+\omega+\omega)\). The first step i.e. putting \(A\) in \(\mathcal{F}_{\gamma+\omega}\) follows from the claim since \(A\) has infinite intersection with every Boolean combination. The following steps follow from Lemma 3.6. Therefore \(A\in\operatorname{fil}(\mathcal{I}_{\gamma+\omega+\omega})\) and hence \(A\in\operatorname{fil}(\mathcal{I}_{\kappa})\) by Lemma 3.4, thus completing the proof.
Now we can prove the following lemma which implies Theorem 3.1.
**Lemma 3.9**.: _If \(\kappa^{<\kappa}=\kappa\) is a cardinal then there is a choice of diagonalization filters so that \(\mathbb{P}_{\kappa}\) forces that \(\operatorname{fil}(\hat{\mathcal{I}}_{\kappa})\) is a \(Q\)-filter._
Proof.: We want to show that we can choose the diagonalization filters so that every strictly increasing \(f\in\omega^{\omega}\cap V[G_{\kappa}]\) is \(Q\)-dominated by some \(A\in\operatorname{fil}(\mathcal{I}_{\kappa})\). By Lemma 3.7 we can ensure that this is done for any _fixed_ \(f\in\omega^{\omega}\cap V[G_{\kappa}]\), but then the cardinal arithmetic hypothesis, alongside the ccc of the forcing, ensures that there is enough space to handle every \(f\) with some bookkeeping, as there are only \(\kappa\)-many nice names for reals.
As stated before the combination of Lemmas 3.2, 3.3 and 3.9 prove Theorem 3.1 (and hence Theorem 1.2).
## 4. Arbitrarily large selective independent families
Since Mathias forcing notions relativized to a filter are all \(\sigma\)-centered, by [3, Theorem 7.12], we get the following, which strengthens a theorem of Shelah from [12], who proved the same under the stronger hypothesis of \(\mathsf{CH}\) in place of \(\mathfrak{p}=2^{\aleph_{0}}\).
**Theorem 4.1**.: _Assume \(\mathfrak{p}=2^{\aleph_{0}}\). Every independent family \(\mathcal{I}_{0}\) of size \(<\!\!2^{\aleph_{0}}\) can be extended to a selective independent family._
Proof.: Fix \(\mathcal{I}_{0}\), an independent family of some size \(\lambda<2^{\aleph_{0}}\). Enumerate the elements of \(\mathcal{I}_{0}\) as \(\{A_{\xi}\mid\xi<\lambda\}\). For \(\mathcal{I}_{0}\), and further independent families we will build in this proof, we associate a finite partial function \(h:2^{\aleph_{0}}\to 2\) to a Boolean combination by mapping e.g. \(A_{\alpha}\) to \(A_{\alpha}^{h(\alpha)}\). We will not comment on this again and assume implicitly that some enumeration of our independent families has been chosen to make sense of this. If there is a \(\zeta\in\operatorname{dom}(h)\) which is greater than \(\lambda\) then we consider the Boolean combination undefined. Enumerate all pairs consisting of an element of Ramsey space and a finite partial function \(h:2^{\aleph_{0}}\to 2\) as \(\{(X_{\alpha},h_{\alpha})\mid\alpha<2^{\aleph_{0}}\}\), enumerate countable subsets of Ramsey space
\(\{(A_{n}^{\alpha})\mid n<\omega,\,\alpha<2^{\aleph_{0}}\}\) so that every sequence appears unboundedly often and fix a scale \(\{f_{\alpha}\mid\alpha<2^{\aleph_{0}}\}\subseteq\omega^{\omega}\) (so \(\alpha<\beta\) implies \(f_{\alpha}\leq^{*}f_{\beta}\) and this family is dominating). Note that the assumption on \(\mathfrak{p}\) guarantees such a scale exists. We will inductively define a continuous, \(\subseteq\)-increasing sequence of independent families \(\{\mathcal{I}_{\alpha}\}_{\alpha<2^{\aleph_{0}}}\) so that the union \(\bigcup_{\alpha<2^{\aleph_{0}}}\mathcal{I}_{\alpha}\) is selective. Indeed it suffices to show that given \(\mathcal{I}_{\alpha}\) independent we can find \(\mathcal{I}_{\alpha+1}\supseteq\mathcal{I}_{\alpha}\) so that the following hold:
1. If \(X_{\alpha}\) has infinite intersection with every Boolean combination of \(\mathcal{I}_{\alpha}\) and \(h_{\alpha}\) is defined on \(\mathcal{I}_{\alpha}\) then there is an \(h^{\prime}\supseteq h_{\alpha}\) so that \(\mathcal{I}_{\alpha}^{h^{\prime}}\setminus X_{\alpha}\) is finite.
2. If \((A_{n}^{\alpha})\subseteq\operatorname{fil}(\mathcal{I}_{\alpha})\) then there is a \(B\in\operatorname{fil}(\mathcal{I}_{\alpha+1})\) so that for all \(n<\omega\) we have \(B\subseteq^{*}A_{n}^{\alpha}\).
3. There is a \(C\in\operatorname{fil}(\mathcal{I}_{\alpha+1})\) which \(Q\)-dominates \(f_{\alpha}\).
The rest of the proof is a standard bookkeeping argument. So fix \(\alpha<2^{\aleph_{0}}\). At each stage we will add at most countably many reals so we can assume that \(\mathcal{I}_{\alpha}\) has size \(<\)\(2^{\aleph_{0}}\). We will deal with the three requirements in three steps. In the first step, if \(X_{\alpha}\) does not have infinite intersection with every Boolean combination of \(\mathcal{I}_{\alpha}\) or \(h_{\alpha}\) is not defined on \(\mathcal{I}_{\alpha}\) then we do nothing and let \(\mathcal{I}_{\alpha}^{0}=\mathcal{I}_{\alpha}\). Otherwise we use the forcing axiom characterization of \(\mathfrak{p}=2^{\aleph_{0}}\) applied to \(\mathbb{M}(\mathcal{F}_{\alpha})\) where \(\mathcal{F}_{\alpha}\) is a diagonalization filter containing \(X_{\alpha}\). By meeting \(<\)\(2^{\aleph_{0}}\)-many dense sets we can find a \(Y\) which is independent over \(\mathcal{I}_{\alpha}\) and satisfies \(Y\subseteq^{*}X_{\alpha}\), since there are \(<\)\(2^{\aleph_{0}}\)-many Boolean combinations. Let \(\mathcal{I}_{\alpha}^{0}=\mathcal{I}_{\alpha}\cup\{Y\}\) and note that by the same argument as in the proof of Lemma 3.2 this satisfies criterion (1) above.
Next, if \((A_{n}^{\alpha})\nsubseteq\operatorname{fil}(\mathcal{I}_{\alpha})\), let \(\mathcal{I}_{\alpha}^{1}=\mathcal{I}_{\alpha}^{0}\). Otherwise, by successively choosing diagonalization filters (which will all contain all the \(A_{n}^{\alpha}\)'s) we can find countably many sets \((Y_{n})_{n<\omega}\) so that \(\mathcal{I}_{\alpha}^{1}:=\mathcal{I}_{\alpha}^{0}\cup\{Y_{n}\mid n<\omega\}\) is independent and each \(Y_{n}\) is an almost subset of every \(A_{m}^{\alpha}\). As in the proof of Lemma 3.3, for each \(k\) let \(l_{k}(n)\) be such that \(Y_{n}\setminus l_{k}(n)\subseteq A_{k}^{\alpha}\) and let \(f\in\omega^{\omega}\) dominate all the \(l_{k}\)'s. The same proof as in Lemma 3.3 then ensures that \(B=\bigcup_{n<\omega}(Y_{n}\setminus f(n))\) is in \(\operatorname{fil}(\mathcal{I}_{\alpha}^{1})\) and \(B\subseteq^{*}A_{n}^{\alpha}\) for all \(n<\omega\), as needed for criterion (2).
Finally for criterion (3), we can again use the forcing axiom characterization of \(\mathfrak{p}=2^{\aleph_{0}}\), applied in this case to mimic the proof of Lemma 3.7, to find a \(Z\) which \(Q\)-dominates \(f_{\alpha}\) and has infinite intersection with every Boolean combination in \(\mathcal{I}_{\alpha}^{1}\). Finally, similarly to the proof of criterion (2) in the previous paragraph, we can find sets \(\{Z_{n}\mid n<\omega\}\) so that \(\mathcal{I}_{\alpha+1}:=\mathcal{I}_{\alpha}^{2}=\mathcal{I}_{\alpha}^{1}\cup\{Z_{n}\mid n<\omega\}\) is independent and \(Z_{n}\subseteq^{*}Z\) for all \(n<\omega\). By the same proof again as in Lemma 3.7, this ensures that \(Z\in\operatorname{fil}(\mathcal{I}_{\alpha+1})\), thus completing the construction and hence the proof.
We also get the following.
**Theorem 4.2**.: _Let \(\kappa<\lambda\) be cardinals both of uncountable cofinality. It is consistent that \(2^{\aleph_{0}}=\lambda\) and there is a selective independent family of size \(\kappa\). Moreover if \(\kappa\) is regular we can have that \(\mathfrak{i}=\kappa\) i.e. the selective independent family is of minimal size._
Proof.: By forcing if necessary assume that \(2^{\aleph_{0}}=\lambda\). Let \(\mu=\operatorname{cf}(\kappa)\) and let \(\{i_{\alpha}\mid\alpha<\mu\}\) be a \(\mu\)-length cofinal sequence. We will force that \(\mathfrak{b}=\mathfrak{d}=\mu\) and there is a selective independent family of size \(\kappa\). Since \(\mathfrak{d}\leq\mathfrak{i}\) in \(\mathsf{ZFC}\), see [3, Theorem 8.13], in the case \(\kappa\) is regular this will complete the proof of the "moreover" part as well. Now, define a finite support iteration \(\langle\mathbb{P}_{\alpha},\mathbb{Q}_{\alpha}\mid\alpha<\kappa\rangle\) so that if \(\alpha\notin\{i_{\xi}\mid\xi<\mu\}\) then \(\mathbb{P}_{\alpha}\) forces that \(\mathbb{Q}_{\alpha}\) is the Mathias forcing for some inductively defined independent family as in the construction described in Theorem 3.1. If \(\alpha\) is some \(i_{\xi}\) then let \(\mathbb{P}_{\alpha}\) force that \(\mathbb{Q}_{\alpha}\) is Hechler forcing,
followed by the \(\omega+\omega\)-stage iteration described in Lemma 3.7, to make the Hechler generic \(Q\)-dominated by some element of the filter of the family we are adding.
Let \(\mathcal{I}_{\kappa}\) be the generic independent family added by this iteration. By the arguments in the previous section, it is clear that this family will be densely maximal and have a density filter which is a \(P\)-filter. Moreover, the Hechler reals will form a scale of length \(\mu\) (and every set of reals of size \(<\!\!\mu\) will be dominated by some Hechler real) hence \(\mathfrak{b}=\mathfrak{d}=\mu\). Also, each Hechler real will be \(Q\)-dominated by some element of \(\operatorname{fil}(\mathcal{I}_{\kappa})\). Since the family of Hechler reals is dominating this is enough to ensure that \(\operatorname{fil}(\mathcal{I}_{\kappa})\) is a \(Q\)-filter thus completing the proof.
## 5. Conclusion and Open Questions
We conclude this paper with a list of questions for further research. The most important of these, as mentioned in the introduction is the following.
_Question 1_.: Is there always a selective independent family? If there is one, is there always one of size \(\mathfrak{i}\)?
Towards answering this question we note that very little is even known about the existence of selective independent families in models where the ground model selective independent families are not preserved. Indeed, until the current paper we did not know if \(\mathfrak{i}=2^{\aleph_{0}}>\aleph_{1}\) was consistent with the existence of a selective independent family. In particular we would like to know:
_Question 2_.: Are there selective independent families in the Cohen model?
Turning our attention to the results of this paper, we point out the following loose end from the proof of Theorem 3.1: did we need to choose the diagonalization filters to ensure the \(Q\)-filter property? More precisely, of interest is the following:
_Question 3_.: Can an iteration of Mathias forcing as described above produce an independent family whose diagonalization filter is not a \(Q\)-filter?
|
2310.02062 | Gotta Catch 'em All: Aggregating CVSS Scores | Security metrics are not standardized, but international proposals such as
the Common Vulnerability Scoring System (CVSS) for quantifying the severity of
known vulnerabilities are widely used. Many CVSS aggregation mechanisms have
been proposed in the literature. Nevertheless, factors related to the context
of the System Under Test (SUT) are not taken into account in the aggregation
process; vulnerabilities that in theory affect the SUT, but are not exploitable
in reality. We propose a CVSS aggregation algorithm that integrates information
about the functionality disruption of the SUT, exploitation difficulty,
existence of exploits, and the context where the SUT operates. The aggregation
algorithm was applied to OpenPLC V3, showing that it is capable of filtering
out vulnerabilities that cannot be exploited in the real conditions of
deployment of the particular system. Finally, because of the nature of the
proposed algorithm, the result can be interpreted in the same way as a normal
CVSS. | Angel Longueira-Romero, Jose Luis Flores, Rosa Iglesias, Iñaki Garitano | 2023-10-03T14:04:40Z | http://arxiv.org/abs/2310.02062v1 | # Gotta Catch 'em All: Aggregating CVSS Scores
###### Abstract
Security metrics are not standardized, but international proposals such as the Common Vulnerability Scoring System (CVSS) for quantifying the severity of known vulnerabilities are widely used. Many CVSS aggregation mechanisms have been proposed in the literature. Nevertheless, factors related to the context of the System Under Test (SUT) are not taken into account in the aggregation process; vulnerabilities that in theory affect the SUT, but are not exploitable in reality. We propose a CVSS aggregation algorithm that integrates information about the functionality disruption of the SUT, exploitation difficulty, existence of exploits, and the context where the SUT operates. The aggregation algorithm was applied to OpenPLC V3, showing that it is capable of filtering out vulnerabilities that cannot be exploited in the real conditions of deployment of the particular system. Finally, because of the nature of the proposed algorithm, the result can be interpreted in the same way as a normal CVSS.
CVSS, security metrics, aggregation, attack graphs, vulnerabilities.
## I Introduction
System security quantification is not an easy task [1]. There is a lack of both consensus and standardization around security metrics [2, 3, 4, 5, 6, 7, 8]. For this reason, research efforts keep aiming to unify this field [9].
Among these efforts, the Common Vulnerability Scoring System (CVSS) is a widely adopted standard for vulnerability quantification [10]. CVSS is a public framework that provides a standardized method for assigning quantitative values to security vulnerabilities according to their severity. A CVSS score is a decimal number in the range [0, 10]1 [11].
Footnote 1: The latest version at the time this paper was written is version 3.1.
The CVSS aims to quantify the severity of vulnerabilities in individual and specific software items; however, the majority of systems are actually a composition of simpler isolated items with different interdependencies. This situation highlights one of the biggest problems related to security quantification [12]: the difficulty of really measuring the global security state of a composite system. To do so, it would be necessary to aggregate each individual CVSS value into a global one in a consistent and coherent way.
The official CVSS documentation does not propose any kind of aggregation mechanism, and nowadays, there is no standardized method [13]. In addition to this, previous research works do not usually integrate contextual or interdependency information about the vulnerabilities to update the CVSS. This means that aspects such as the affected functionalities, the environment of deployment, or the existence of exploits are usually neglected.
Context is a critical aspect to integrate in the aggregation process. This can be illustrated using a device implementing multiple functionalities as an example. To perform those functionalities, it will usually contain assets that implement them. But depending on the context where the device is deployed, some of its functionalities might not be needed. So the assets implementing unused functionalities would be disabled, and therefore, their vulnerabilities could not be exploited. It can also be the case that the asset implementing a functionality is simply inaccessible, so it also could not be exploited.
This research proposes a novel aggregation algorithm for a set of CVSS values2. This approach is based on the Extended Dependency Graphs (EDGs) proposed by Longueira-Romero _et al._[14]. Because EDGs are capable of modeling dependencies, this algorithm can also be applied to computer networks. Our proposal is capable of selecting the most relevant CVSS to be aggregated, taking into account four different context-related properties of the System Under Test (SUT):
Footnote 2: The Python code implementing the aggregation algorithm is available at GitHub [https://github.com/aaalongueira/CVSS_Aggregation](https://github.com/aaalongueira/CVSS_Aggregation).
1. Functionality disruption.
2. Exploitation difficulty.
3. Existence of exploits, and their development state.
4. Context of deployment.
This approach increases the granularity of the CVSS base, environment and temporal metrics, where not every possible value in the scale \([0,10]\) is achievable, or the result of changing the value of a submetric has almost no effect on the final CVSS [13, 15]. Moreover, our proposal is capable of detecting which branch in the EDG is contributing the most (more critical) to the final score.
This paper is organized as follows: We review existing aggregation methods in Section II. Our proposal is explained in Section III, and tested in a use case in Section IV. Finally, Section V contains the conclusions and future work of this research.
## II Related Work
Nowadays, there is no widely-accepted method to aggregate CVSS values for software composition. Existing proposals can be
classified into one of the following categories [16, 17]: (1) Arithmetic Aggregation, (2) Attack Graph-based Aggregation, and (3) Bayesian Network-based Aggregation.
### _Arithmetic Aggregation_
This method uses arithmetic operations to aggregate the values [18, 19, 20, 21]. Common examples of this approach are taking the maximum of the CVSS values, their arithmetic mean, or a combination of them. For example, Heyman _et al._ [18] proposed an algorithm to aggregate CVSS values in a dependency graph that is based on taking the maximum value in each case, according to certain conditions.
Although their simplicity makes them suitable for initial approximations, their results can be biased in two ways:
1. **Exploitable by quantity:** When a system has several vulnerabilities that on their own are not critical and cannot be exploited, they can sum up to the aggregated value of a high-impact vulnerability (overfitting). This can happen when multiple simple mechanisms are combined as the aggregation algorithm.
2. **Exploitable by criticality:** When there exists a critical vulnerability, the whole system will usually be classified as critical. Nevertheless, that vulnerability might not be exploitable, nor affect the functionality of the system. This is especially common when using the maximum as the aggregation algorithm.
### _Attack Graph-based Aggregation_
This approach models the relationships between vulnerabilities using attack graphs, converting CVSS scores into probabilities [22, 23, 24, 25, 26, 27]. In this way, both the CVSS value and the place of the vulnerability in the whole graph are taken into account.
Cheng _et al._ in [16] proposed a graph-based aggregation method that uses the underlying metrics of CVSS, where the dependency relationships between vulnerabilities are usually visible. As the core of the aggregation algorithm, they use the product of the CVSS scores interpreted as probabilities, also known as the joint probability of both vulnerabilities.
The main drawback with these approaches is that the relationship between individual vulnerabilities cannot be obtained straightforwardly from existing databases. This means that establishing a relation between two vulnerabilities implies that they can be chained during an attack, which is not always obvious. Moreover, factors such as exploitability of the vulnerabilities, or existing exploits are not taken into account.
### _Bayesian Network-based Aggregation_
Going a step further, these methods integrate the conditional relationship between vulnerabilities, modeling them using Bayesian networks [28, 29, 30]. Poolsappasit _et al._[29] proposed a CVSS aggregation framework using Bayesian networks. They used the Bayesian probability factorization formula as the aggregation mechanism:
\[p(x)=\prod_{v}p(x_{v}\mid x_{pa(v)})\]
Bayesian network-based approaches have to deal not only with establishing the relationships between the vulnerabilities, but also with the calculation of conditional probabilities, which usually have to be estimated. Like the previous ones, these techniques do not integrate information about how functionality is affected by existing vulnerabilities, or the possibility to actually exploit them.
## III Proposed Approach for Metric Aggregation
In this paper, we propose a CVSS aggregation algorithm inspired by the risk propagation formula [31] described in MAGERIT [32, 33]. First, we describe the correction factors involved in our proposal. Then, the aggregation formula is introduced. Finally, the algorithm and the interpretation of the results are explained in detail.
### _Correction Factors_
The proposed aggregation algorithm integrates correction factors to adapt the formula described in MAGERIT. These correction factors apply individually for each CVSS, except for the average and summarized factors. Correction factors are summarized in Table I.
1. _Functionality factor (\(\rho\)):_ This correction factor represents whether any functionality of the systems is affected by its vulnerabilities. It is represented by a binary value, being \(0\) when no functionality is affected, and \(1\) when any of them is affected. For example, a cryptographic library with a vulnerability in SHA1. If the SUT does not make use of SHA1 in any way, the vulnerability would not be exploitable, and could be removed from the analysis (\(\rho=0\)).
2. _Deepness Factor (\(\beta\)):_ This factor represents the difficulty of chained exploitation of each vulnerability. It is represented by a value in \([0,1]\) inversely proportional to the number of assets that must be compromised in order to exploit the vulnerability. Vulnerabilities close to the entry point will account more for the final aggregation, whereas those that are far away will account less. In this approach, linear interpolation is proposed to calculate the weight of each layer, because of its simplicity. Nevertheless, different interpolations could be used according to the criticality of the system. Fig. 1 shows the corresponding \(\beta\) for a four-layer system.
Fig. 1: Calculation of the deepness factor for a four-layer dependency example.
3. _Context factor (\(\gamma\)):_ This factor considers whether the exploitation of a vulnerability is actually possible in the real scenario where the system is deployed. It is represented by a binary value, where \(0\) indicates that it is not possible, and \(1\), that it is possible. It is calculated by comparing the attack vector of the CVSS with the real conditions where the device is deployed. For example, this can happen when a vulnerability with a high CVSS score needs physical access to be exploited, but in reality the device is physically isolated. To reflect this, the CVSS should be updated, lowering the resulting value [16]. This factor aims to complement the existing submetrics in the temporal and the environment metrics of the CVSS. Both the temporal and the environmental scores lack an "isolated" value for the attack vector.
4. _Exploit factor (\(\mu\)):_ This factor accounts for the existence of a public exploit for a given vulnerability, being proportional to its state of development. The temporal score of the CVSS already implements this feature, but the CVSS values are not updated in practice [15]. Moreover, taking into account the temporal score has almost no effect as opposed to using the raw initial base score. This means that a CVSS just considering the base score is higher than a CVSS considering an exploit code maturity of "functional exploit exists". To solve this issue, we introduce the following values for the exploit factor: Not defined (\(\mu=0.5\)), Theoretical (\(\mu=1.25\)), Proof-Of-Concept (\(\mu=1.5\)), Functional (\(\mu=1.75\)), and Automated (\(\mu=2\)). These values are equivalent to the scale defined in the CVSS Specification Document [10].
5. _Summarized factor (\(\lambda\)):_ The \(\lambda\) factor accounts for the effect of all the factors above: \[\lambda=\rho\beta\gamma\mu\] (1)
6. _Average factor (\(\sigma\)):_ This factor defines the behavior of the aggregation function. It can be chosen as needed (_e.g._, the arithmetic or harmonic mean), but taking into account all the values to be added.
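As a minimal illustration of how the per-vulnerability factors and the summarized factor of Equation 1 can be computed, consider the following sketch (the helper names and the linear-interpolation formula for \(\beta\) are shorthands for Fig. 1 and Equation 1, not the released implementation):

```python
# Illustrative sketch only: helper names and data layout are not from the released code.

EXPLOIT_FACTOR = {  # scale of the exploit factor defined above
    "not_defined": 0.5,
    "theoretical": 1.25,
    "proof_of_concept": 1.5,
    "functional": 1.75,
    "automated": 2.0,
}

def deepness_factor(layer: int, max_layers: int) -> float:
    """Linear interpolation as in Fig. 1: layer 1 -> 1.0, deepest layer -> 1/max_layers."""
    return (max_layers - layer + 1) / max_layers

def summarized_factor(rho: int, beta: float, gamma: int, mu: float) -> float:
    """Equation 1: lambda = rho * beta * gamma * mu."""
    return rho * beta * gamma * mu

# A vulnerability at layer 3 of a 4-layer EDG, affecting functionality,
# reachable in the deployment context, with a theoretical exploit:
lam = summarized_factor(1, deepness_factor(3, 4), 1, EXPLOIT_FACTOR["theoretical"])
print(lam)  # 0.625
```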
### _Aggregation Formula_
The aggregation function is defined as:
\[\Gamma(\overrightarrow{V})=10-\frac{1}{\sigma}f(\overrightarrow{V}) \tag{2}\]
Where \(\overrightarrow{V}\) is a vector \((cvss_{0},cvss_{1},\ldots,cvss_{n})\) with all the corrected CVSS values to be added, \(cvss\), where \(n\) is the index of the last value to be added. \(f(\overrightarrow{V})=a_{n}\) is defined as the following recursive function:
\[a_{n}=10\left[1-\left(1-\frac{\lambda_{a_{n-1}}}{10}a_{n-1}\right)\cdot \left(1-\frac{\lambda_{cvss_{n}}}{10}cvss_{n}\right)\right] \tag{3}\]
Where the base case is defined as:
\[a_{0}=\lambda_{cvss_{0}}cvss_{0} \tag{4}\]
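For clarity, a direct transcription of Equations (2)-(4) is sketched below. Two reading choices are made explicit here: the factor attached to an already-aggregated partial value \(a_{n-1}\) is taken to be \(1\), so each CVSS is corrected exactly once, and corrected values above 10 are clipped, as prescribed later in the algorithm.

```python
def aggregate(cvss_values, lambdas, sigma):
    """Sketch of Equations (2)-(4).

    cvss_values: raw CVSS scores; lambdas: their summarized factors (Eq. 1);
    sigma: average factor, e.g. the arithmetic mean of the raw scores.
    """
    corrected = [min(10.0, lam * c) for lam, c in zip(lambdas, cvss_values)]
    a = corrected[0]                                   # base case, Eq. (4)
    for c in corrected[1:]:
        a = 10 * (1 - (1 - a / 10) * (1 - c / 10))     # recursion, Eq. (3)
    return 10 - a / sigma                              # Eq. (2)
```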
### _Algorithm_
The proposed aggregation algorithm is divided into the following steps (see Fig. 2):
1. Calculation of the correction factors for each CVSS,
2. Calculation of the summarized factor for each CVSS,
3. Calculation of the corrected CVSS values,
4. Calculation of the average correction function, and
5. Aggregation.
Notice that the dependency graph of the SUT, the vulnerabilities associated to each element of the dependency graph, and their CVSS value are needed.
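Reusing the helper sketches given above (`deepness_factor`, `summarized_factor`, `EXPLOIT_FACTOR` and `aggregate`; again an illustration rather than the released implementation, and the dictionary layout of the input is an arbitrary choice), the five steps can be glued together as follows:

```python
def aggregate_edg(vulns, max_layers):
    """Run the five steps of Fig. 2 over the vulnerabilities of an EDG.

    Each entry of `vulns` holds the raw 'cvss' score together with the data
    needed for the correction factors: 'rho', 'layer', 'gamma' and 'exploit'.
    """
    # Steps 1-2: correction factors and summarized factor for each CVSS.
    lams = [summarized_factor(v["rho"],
                              deepness_factor(v["layer"], max_layers),
                              v["gamma"],
                              EXPLOIT_FACTOR[v["exploit"]])
            for v in vulns]
    scores = [v["cvss"] for v in vulns]
    # Step 4: average correction function (arithmetic mean of the raw scores).
    sigma = sum(scores) / len(scores)
    # Steps 3 and 5: corrected values (clipped inside `aggregate`) and aggregation.
    return aggregate(scores, lams, sigma)
```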
#### Iii-C1 Correction factors for each CVSS
The first step obtains the values of each correction factor for each CVSS:
1. **Functionality factor (\(\rho\)):** This factor is obtained using the description provided in the corresponding CVE of each CVSS. The description provides enough information to decide whether the functionality of the system is affected.
2. **Context factor (\(\gamma\)):** This factor is obtained by comparing the value of the Attack Vector (AV) submetric of the CVSS, with the real environment of deployment of the SUT.
3. **Exploit factor (\(\mu\)):** To obtain this factor, public databases have to be queried to find any potential exploit for each vulnerability.
4. **Deepness factor (\(\beta\)):** For any given CVSS, its value is obtained according to the deepness in the exploit chain for the SUT [14].
#### Iii-C2 Summarized factor for each CVSS
The summarized factor, \(\lambda\), is obtained by multiplying all the correction factors obtained in the previous step, following Equation 1.
#### Iii-C3 Corrected CVSS values
The corrected CVSS values are obtained by multiplying each CVSS by its corresponding summarized factor, \(\lambda\). At this point, it is necessary to check for overflows, because the exploit factor can generate corrected CVSS values higher than 10. Values higher than 10 are set to 10 at this stage.
\begin{table}
\begin{tabular}{l l} \hline CORRECTION FACTOR & DESCRIPTION \\ \hline Functionality factor (\(\rho\)) & Binary value indicating whether a vulnerability affects or not the functionality of the SUT. \\ Deepness factor (\(\beta\)) & Value between \([0,1]\) proportional to the position of the affected asset in the EDG of the SUT. \\ Context factor (\(\gamma\)) & Binary value indicating vulnerability exploitability in the real and particular conditions of the SUT. \\ Exploit factor (\(\mu\)) & Existence of a public exploit, proportional to its state of development: Not defined (\(\mu=0.5\)), Theoretical (\(\mu=1.25\)), Proof-Of-Concept (\(\mu=1.5\)), Functional (\(\mu=1.75\)), and Automated (\(\mu=2\)). \\ Summarized factor (\(\lambda\)) & This factor summarizes the effect of all the above ones, \(\lambda=\rho\beta\gamma\mu\). \\ Average factor (\(\sigma\)) & Function that adjusts the value of the sum to avoid its rapid evolution to 10. \\ \hline \end{tabular}
\end{table} TABLE I: Correction factors proposed for adapting the Bayesian sum proposed in MAGERIT.
#### Iii-C4 Correction function
At this point, it is necessary to choose an averaging function. Choosing one function over the other will cause the aggregation result to grow slower or faster toward 10 in each addition. In this case, and for the sake of clarity, we chose the arithmetic mean, but any other kind of mean (_e.g., harmonic mean_) could be used according to each scenario.
#### Iii-C5 Aggregation
Finally, the aggregated value is computed using Equation 2.
### _Interpretation of the result_
The advantage of this method is that the result can be interpreted in the same way that a normal CVSS would be interpreted. This is because of the correction factors in Equation 2, that only let the algorithm return high values when vulnerabilities with high CVSS values are exploitable in reality (\(\lambda\) is close to \(1\)). This mechanism ensures that multiple aggregated low CVSS values do not result in a critical score just because there are a large number of them.
## IV Use Case
To test the potential of our proposal, we analyzed Version 3 of OpenPLC project, obtaining a CVSS aggregated value for its vulnerabilities using the proposed algorithm.
OpenPLC is the first functional open source Programmable Logic Controller (PLC), both in software and hardware [34]. It was mainly created for research purposes, because it provides its entire source code [35, 36]. The current version of the project is OpenPLC V3 [37].
### _Use Case Scenario_
For this use case, we are going to make the next assumptions:
* The system executing OpenPLC V3 is deployed in an isolated network.
* The system running OpenPLC V3 is physically isolated.
* The attacker is an insider without access to the systems.
* The reference point for the deepness factor will be the webserver.py in Fig. 3.
### _Structure of OpenPLC_
The first step was to obtain the inner structure of OpenPLC V3 using the Extended Dependency Graph (EDG) proposed in [14]. To simplify the obtained graph, we only represented the shortest path to each node, so the worst case scenario (more accessible from the outside) is considered. The result is shown in Fig. 3.
### _Calculation of the Correcting Factors_
OpenPLC V3 has five vulnerabilities: two vulnerabilities affecting libgcc_s, and three vulnerabilities affecting libc. Table II shows each vulnerability in more detail.
From these data, it is possible to obtain all correction factors for each vulnerability, as follows (Table II summarizes the results):
#### Iv-C1 Functionality Factor \((\rho)\)
This factor is obtained from the analysis of the description of each CVE. From these data, we have to decide whether the functionality of OpenPLC V3 is affected ("1") or not ("0").
#### Iv-C2 Deepness Factor \((\beta)\)
By taking a look at Fig. 3, it can be seen that the maximum deepness level is four. So the possible values for the deepness factor are the ones shown in Fig. 1. More precisely, vulnerabilities CVE-2019-15847 and CVE-2018-12886 have a deepness factor of 0.25, because they are at level four. By contrast, vulnerabilities CVE-2017-18269, CVE-2018-11236, and CVE-2018-11237 have a deepness factor of 0.5, because they are at level three.
#### Iv-C3 Context Factor \((\gamma)\)
From the initial assumptions, insiders can only exploit the existing vulnerabilities from the local network. This means that every vulnerability that has an attack vector of "network" (N) can be exploited, thus CVE-2017-18269, CVE-2018-11236, CVE-2018-12886, and CVE-2019-15847 are exploitable by the attacker. Vulnerabilities whose attack vector is "local" (L) cannot be exploited, because physical access is needed. Therefore, CVE-2018-11237 cannot be exploited.
#### Iv-C4 Exploit Factor \((\mu)\)
Public databases have to be queried to find existing exploits for each vulnerability. According to their state of development, a different value is assigned.
#### Iv-C5 Summarized Factor \((\lambda)\)
The summarized factor for each vulnerability is obtained as the product of the previous factors, as shown in Equation 1. At this step, by taking a look at the resulting values of \(\lambda\), it is possible to know which CVSS will contribute to the final aggregation and in which percentage (\(\lambda>0\)), and which ones will not contribute at all (\(\lambda=0\)).
Fig. 2: Flowchart showing the main steps of the aggregation algorithm for each CVSS.
#### Iv-C6 Average Factor \((\sigma)\)
Finally, we obtained the average factor by calculating the arithmetic mean of all the initial CVSS values: \(\sigma=8.6\).
### _Aggregation_
The previous step before the aggregation is obtaining the corrected CVSS value for each initial CVSS. This is done by multiplying each CVSS by their corresponding summarized value (\(\lambda\)). The corrected values are shown in Table II.
Finally, the aggregation is performed using the corrected CVSS values. The aggregation is an iterative process that takes the first two values to be added, and adds them using Equation 2. Then, this result is added to the third value to be added, and so on, until there are no more values.
For OpenPLC V3, this process returns a final aggregated value of \(9.1\). Without the correction factors, the result would be \(10\). Nevertheless, taking into account features such as the exploitability of the vulnerabilities, the context of the SUT, or its functionalities, we can select the most important CVSS values to be aggregated. With such process, the total amount of CVSS values to be added is simplified. This also helps to simplify potential attack paths.
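As a sanity check, this figure can be reproduced directly from Table II (a small sketch under the same reading of Equations (2)-(4) as above, i.e. already-corrected values are not corrected again):

```python
corrected = [6.125, 2.530, 2.344]   # corrected CVSS of the three aggregated CVEs (Table II)
raw = [9.8, 9.8, 7.8, 8.1, 7.5]     # all five initial CVSS values
sigma = sum(raw) / len(raw)         # average factor: 8.6

a = corrected[0]
for c in corrected[1:]:
    a = 10 * (1 - (1 - a / 10) * (1 - c / 10))

print(round(10 - a / sigma, 1))     # 9.1
```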
This result was obtained by aggregating three of the five CVSS values present in OpenPLC V3. The associated CVSS values for CVE-2018-11236 and CVE-2018-11237 were not taken into account for the aggregation: CVE-2018-11236 does not affect any functionality of the system, and CVE-2018-11237 cannot be exploited in the conditions described in the use case.
CVE-2017-18269 (with an associated CVSS of 9.8) is the vulnerability with the highest value for \(\lambda\). Therefore, it is going to contribute the most to the final aggregated value. CVE-2018-12886 and CVE-2019-15847 follow with a CVSS of 8.1 and 7.5 respectively. As it is shown, the selected vulnerabilities have a high CVSS, so it is expected that the aggregated value would be also high. This is reflected in the obtained result of \(9.1\).
Finally, it is worth highlighting that the final result is lower than the highest CVSS value present in OpenPLC V3. This difference is due to the effect of the correction factors: as the CVE-2017-18269 is further away from the entry point of the system (in layer 3), its real CVSS value in lower.
## V Conclusions and Future Work
In this research work, we proposed a new aggregation algorithm for CVSS values. The proposed approach integrates correction factors to select the most relevant CVSS values to be added based on contextual information. For each vulnerability, we check for:
1. Functionality disruption.
2. Exploitation difficulty.
3. Existence of exploits, and their development state.
4. Context of deployment.
We assigned a different correction factor to each one of the previous properties to further weight the initial CVSS value and adjust it to the real context where the system is operating.
The proposed aggregation algorithm was applied to OpenPLC V3 in a use case. Two of the existing vulnerabilities
\begin{table}
\begin{tabular}{l c c c c c c c} \hline CVE & CVSS & Attack Vector & Functionality (\(\rho\)) & Deepness (\(\beta\)) & Context (\(\gamma\)) & Exploit (\(\mu\)) & Summarized (\(\lambda\)) & Corrected CVSS \\ \hline CVE-2017-18269 & 9.8 & Network & 1 & 0.5 & 1 & 1.25 & 0.625 & 6.125 \\ CVE-2018-11236 & 9.8 & Network & 0 & 0.5 & 1 & 0 & 0 & 0 \\ CVE-2018-11237 & 7.8 & Local & 1 & 0.5 & 0 & 1.25 & 0 & 0 \\ CVE-2018-12886 & 8.1 & Network & 1 & 0.25 & 1 & 1.25 & 0.313 & 2.530 \\ CVE-2019-15847 & 7.5 & Network & 1 & 0.25 & 1 & 1.25 & 0.313 & 2.344 \\ \hline \end{tabular}
\end{table} TABLE II: Vulnerabilities present in OpenPLC V3. For each one, the CVSS is shown, together with their associated Attack Vector (AV), and their correction factors.
Fig. 3: Extended Dependency Graph of OpenPLC V3. Circles represent individual assets, black triangles are the vulnerabilities associated with each asset, and the square represents the entry point to the system, or root node of dependency.
Two of the existing vulnerabilities were filtered out by the algorithm, as they cannot be exploited in the described context of OpenPLC V3. The rest of the vulnerabilities were aggregated, and the result (\(9.1\)) was indeed lower than the highest CVSS present in the system (\(9.8\)). This shows that the CVSS for each vulnerability was correctly adjusted to the real context of deployment of OpenPLC V3.
As future work, we plan to perform the aggregation at the submetric level of the CVSS, instead of using the base metric value, giving more granular values for each factor.
## Acknowledgements
Iñaki Garitano is a member of the Intelligent Systems for Industrial Systems research group at Mondragon Unibertsitatea (IT1676-22), supported by the Department of Education, Universities and Research of the Basque Government. This work was partially supported by the _Ayudas Cervera para Centros Tecnológicos_ grant of the Spanish Center for the Development of Industrial Technology (CDTI) under the project EGIDA (CER-20191012), and by the Basque Country Government under the ELKARTEK program, project REMEDY - Real Time Control And Embedded Security (KK-2021/00091).
|
2304.00932 | HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion | LiDAR relocalization plays a crucial role in many fields, including robotics,
autonomous driving, and computer vision. LiDAR-based retrieval from a database
typically incurs high computation storage costs and can lead to globally
inaccurate pose estimations if the database is too sparse. On the other hand,
pose regression methods take images or point clouds as inputs and directly
regress global poses in an end-to-end manner. They do not perform database
matching and are more computationally efficient than retrieval techniques. We
propose HypLiLoc, a new model for LiDAR pose regression. We use two branched
backbones to extract 3D features and 2D projection features, respectively. We
consider multi-modal feature fusion in both Euclidean and hyperbolic spaces to
obtain more effective feature representations. Experimental results indicate
that HypLiLoc achieves state-of-the-art performance in both outdoor and indoor
datasets. We also conduct extensive ablation studies on the framework design,
which demonstrate the effectiveness of multi-modal feature extraction and
multi-space embedding. Our code is released at:
https://github.com/sijieaaa/HypLiLoc | Sijie Wang, Qiyu Kang, Rui She, Wei Wang, Kai Zhao, Yang Song, Wee Peng Tay | 2023-04-03T12:43:34Z | http://arxiv.org/abs/2304.00932v2 | # HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion
###### Abstract
LiDAR relocalization plays a crucial role in many fields, including robotics, autonomous driving, and computer vision. LiDAR-based retrieval from a database typically incurs high computation storage costs and can lead to globally inaccurate pose estimations if the database is too sparse. On the other hand, pose regression methods take images or point clouds as inputs and directly regress global poses in an end-to-end manner. They do not perform database matching and are more computationally efficient than retrieval techniques. We propose HypLiLoc, a new model for LiDAR pose regression. We use two branched backbones to extract 3D features and 2D projection features, respectively. We consider multi-modal feature fusion in both Euclidean and hyperbolic spaces to obtain more effective feature representations. Experimental results indicate that HypLiLoc achieves state-of-the-art performance in both outdoor and indoor datasets. We also conduct extensive ablation studies on the framework design, which demonstrate the effectiveness of multi-modal feature extraction and multi-space embedding. Our code is released at: [https://github.com/sijieaaa/HypLiLoc](https://github.com/sijieaaa/HypLiLoc)
## 1 Introduction
Visual relocalization aims at estimating the 6-degree of freedom (DoF) pose of an agent using perception sensors, such as LiDARs and cameras. It plays a crucial role in many fields that include robot navigation [12], autonomous driving [23], and scene recognition [22]. Image-based relocalization methods have achieved good performance in various applications [36, 15, 33]. However, images taken from cameras can only capture RGB color information and are easily influenced by environmental conditions, including low illumination and light reflections. By contrast, LiDARs, which cast active beams to estimate the depth of surrounding objects, are more robust against those changes.
In recent years, the LiDAR has become an important sensor in smart robots, autonomous vehicles, and mobile devices. LiDAR-based relocalization, which is a basic and important module impacting other perception tasks, has attracted more attention [20, 27, 37, 8, 47]. One of the classical approaches, LiDAR odometry, estimates the relative poses among successive LiDAR frames to obtain locally accurate pose estimation. However, errors accumulate over the trajectory, resulting in unsatisfactory global pose estimation. To compensate for the error, LiDAR odometry is usually treated as a component in a complete simultaneous localization and mapping system (SLAM), where the global pose estimated by a global positioning method or detected loop closure is used to correct the accumulated error in the LiDAR odometry [32, 44].
LiDAR-based retrieval is also used for relocalization [34]. It first constructs a database of LiDAR features learned from all candidate LiDAR frames. During inference, given a query LiDAR scan, the similarities between the query feature and all features stored in the database are computed so that the top-matched poses can be obtained. Although this approach provides accurate global pose estimation, it inherently suffers from high computation cost and storage burden [39]. Therefore, it is more appropriate for offline scenarios rather than for real-time mobile applications.
Pose regression is favored as a relocalization method due to its lower computation and storage cost during inference. The pose regression network is still trained on a database containing LiDAR frames in an end-to-end manner to obtain a regression model. During inference, taking the LiDAR scan as input, the pose regression network directly regresses the global pose without any pre-constructed candidate database or map. It can mitigate the high computation and storage burden that occurs in the LiDAR-based retrieval methods. As a result, pose regression can operate in real time to satisfy various relocalization requirements in robotics, unmanned aerial vehicles (UAVs), mobile relocalization apps, autonomous vehicles, and SLAM systems.
In this paper, we propose a relocalization method called HypLiLoc, which is a pose regression network with LiDAR
data as input. HypLiLoc uses a parallel feature extraction design, in which 3D features and 2D spherical projection features are obtained in two backbone branches simultaneously. The paper [24] leverages hyperbolic embeddings for 3D point clouds that can be viewed as hierarchical compositions of small parts. We thus follow this motivation to design our pipeline with hyperbolic learning. Specifically, we conduct feature fusion in both Euclidean and hyperbolic spaces to enhance the information representation and to achieve more effective multi-modal feature interaction. We test HypLiLoc in both outdoor and indoor datasets. Experiments indicate that HypLiLoc surpasses current approaches and achieves state-of-the-art (SOTA) performance.
Our main contributions are summarized as follows:
1. We propose a novel LiDAR-based pose regression network HypLiLoc. It has one backbone that learns 3D features directly from the 3D point cloud and another backbone that learns features from a 2D projection of the point cloud onto a spherical surface. To achieve effective multi-modal feature interaction, the features are embedded in both Euclidean and hyperbolic spaces using multi-space learning. An attention mechanism is then used to fuse the features from different spaces together.
2. We test our network in both outdoor and indoor datasets, where it outperforms current LiDAR pose regression counterparts and achieves SOTA performance. We also conduct extensive ablation studies on the effectiveness of each design component.
## 2 Related Work
In this section, we shall introduce more relocalization works, including LiDAR odometry, point cloud retrieval, and pose regression. Besides, we shall also provide more details of hyperbolic learning that are related to our method design.
### LiDAR Odometry
LiDAR odometry methods address the relocalization problem under local views. Given several close or nearby LiDAR scans, they estimate the relative poses among them. The iterative closest point (ICP) method [6] solves the relative pose by iteratively searching correspondence between source points and target points and optimizing the least square error. Besides, another popular method is LOAM [47], which classifies the keypoints into edges and planes and uses KD-trees to search the neighborhood of the keypoints. DCP [40] uses PointNet [28] and DGCNN [42] as backbones to extract point cloud features, and then it predicts the pose using the Transformer module.
### Point Cloud Retrieval
Point cloud retrieval approaches treat the relocalization task as the place recognition problem [34]. The core of these approaches is query-database matching. A database needs to be constructed to store the features of all candidate LiDAR scans with corresponding poses. In the inference process, for a given query LiDAR scan, its feature is extracted by the neural network, and then the query-database feature matching is performed for every possible pair. The final pose estimation is obtained from the top-matched pairs.
### Pose Regression
Given the query sensor data, the pose regression models directly output the pose using the trained neural network. They do not depend on the query-database matching procedure, which speeds up the inference stage significantly when compared to point cloud retrieval methods. These models are still trained on a database of training samples that include sensor scans and the ground truth sensor poses. However, during the inference stage, the database is no longer required, in contrast to retrieval methods.
PoseNet [17] proposes simultaneous learning for location and orientation by integrating balance parameters. MapNet [15] uses visual odometry as the post-processing technique to optimize the regressed poses. AD-PoseNet [16] leverages semantic masks to drop out the dynamic area in the image. AtLoc [36] introduces global attention to guide the network to learn better representations. MS-Transformer [33] focuses on simultaneous pose regression for multiple scenes using a single network. RobustLoc [38] leverages multi-view images for robust camera pose regression.
A LiDAR actively casts beams to estimate the sparse depth of its surrounding environment. Since LiDARs are less likely to be influenced by illumination changes than cameras, they have become core sensors in many applications. PointLoc [39] uses LiDAR point clouds to achieve pose regression. Its model consists of PointNet++ [29] followed by self-attention modules to generate point cloud features. The paper [46] studies the memory-friendly pose regression learning scheme and proposes four LiDAR pose regression models.
### Hyperbolic Learning
Hyperbolic embedding for features has been proposed for datasets that have some underlying tree structure [31]. The paper [13] derives hyperbolic versions of several deep learning tools, including multinomial logistic regression, feed-forward networks, and recurrent networks. In the field of natural language processing (NLP), [25] and [26] introduce feature embeddings with hyperbolic models. In the field of deep graph learning, HGCN [7] considers hyperbolic node embeddings in Graph Convolutional Neural Networks (GCNs). GIL [48] proposes to use weighted embedding features in
both Euclidean and hyperbolic spaces. In the computer vision community, [11] uses pair-wise cross-entropy loss with hyperbolic distances to train the vision transformer [10], and [4] considers hyperbolic embeddings in the semantic segmentation task. More recently, hyperbolic embeddings have also been studied for the 3D point cloud [24], where the 3D point cloud is treated as nature compositions of small parts that follow the hierarchical architecture. This motivates us to introduce hyperbolic embeddings in our pipeline for better feature representations.
## 3 Proposed Model
In this section, we provide a detailed description of our proposed approach. We first summarize the HypLiLoc pipeline as follows.
1. Given a LiDAR point cloud scan, in addition to the traditional backbone of extracting 3D features from the point cloud, we additionally project the 3D points into a sphere to generate a 2D projection image. These two types of features are extracted by separate backbones.
2. We merge the two modal features together as the fusion features. The fusion features are then embedded in both Euclidean and hyperbolic spaces to achieve more effective representations.
3. After features interact in different spaces and modalities, the global feature vector is obtained by applying the global average pooling operation on the fusion features. The final pose prediction is generated using the global feature vector with the pose regression head.
### Modal-Specific Backbones
**Projection Feature Extraction.** Multi-modal feature extraction has shown promising performance in various tasks [30, 45, 1, 43]. The point cloud generated by LiDARs is convertible into multiple modalities by projecting 3D points into specific 2D spaces. Each projection provides us with a different way to define the neighbors of a point so that the point can aggregate feature representations from different definitions of its "neighborhood". To this end, we consider two typical projection methods, including the spherical projection and the bird's-eye view (BEV) projection. For the currently most commonly used multi-line spinning LiDAR, we visualize the point cloud projection in Fig. 2.
Suppose we have as the input a set of \(N\) LiDAR points \(\{p_{i}\}_{i=1}^{N}\), each represented by 3D Cartesian coordinates \(p_{i}=(x_{i},y_{i},z_{i})\in\mathbb{R}^{3}\). For the spherical projection, we first convert the Cartesian coordinates into polar coordinates as:
\[\phi_{i}=\arctan\Biggl{(}\frac{z_{i}}{\sqrt{x_{i}^{2}+y_{i}^{2}}}\Biggr{)}, \tag{1}\]
Figure 1: The overall architecture of our proposed HypLiLoc. We use two backbone branches to perform feature extraction. In the 3D backbone, we consider both local set abstraction and global attention aggregation. In the feature fusion block, the extracted multi-modal features are embedded into both Euclidean and hyperbolic spaces to achieve space-specific interaction. The fusion features are then decoupled to their own modality to perform modal-specific interaction. The final training loss is applied on both the 3D/projection level and the final fusion level.
\[\theta_{i} =\arctan\biggl{(}\frac{y_{i}}{x_{i}}\biggr{)}, \tag{2}\] \[r_{i} =\sqrt{x_{i}^{2}+y_{i}^{2}+z_{i}^{2}}. \tag{3}\]
Each point projected on the sphere is then denoted as \(p_{i}^{\rm sph}=(\phi_{i},\theta_{i})\in\mathbb{R}^{2}\). Projecting all the \(N\) points, an image \(I^{\rm sph}\in\mathbb{R}^{H\times W}\) is obtained, in which each pixel contains the radius value:
\[I^{\rm sph}\biggl{(}\left\lfloor\frac{\phi_{i}H}{2\pi}\right\rfloor,\left\lfloor \frac{\theta_{i}W}{2\pi}\right\rfloor\biggr{)}=r_{i}. \tag{4}\]
On the other hand, the BEV projects each point \(p_{i}\) as \((x_{i},y_{i})\) and generates the image \(I^{\rm BEV}\in\mathbb{R}^{H^{\prime}\times W^{\prime}}\), in which each pixel contains the height value:
\[I^{\rm BEV}\Bigg{(}\left\lfloor\frac{y_{i}H^{{}^{\prime}}}{2y_{ \rm max}}\right\rfloor,\left\lfloor\frac{x_{i}W^{{}^{\prime}}}{2x_{\rm max}} \right\rfloor\biggr{)}=z_{i}, \tag{5}\]
where \(x_{\rm max}\) and \(y_{\rm max}\) are the maximum LiDAR range in the \(x\) and \(y\) directions, respectively.
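As an illustration, a minimal NumPy sketch of the spherical projection in Eqs. (1)-(4) is given below. The image resolution \(H\times W\), the use of `arctan2` for numerical robustness, and the handling of angles and of pixels hit by several points are our own assumptions rather than details taken from the paper.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024):
    # points: (N, 3) array of Cartesian LiDAR coordinates (x, y, z)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)                      # Eq. (3)
    phi = np.arctan2(z, np.sqrt(x**2 + y**2))            # Eq. (1), elevation angle
    theta = np.arctan2(y, x)                             # Eq. (2), azimuth angle
    # Map angles to pixel indices; both angles are shifted to [0, 2*pi) before
    # binning, an assumption made here to keep the indices non-negative.
    u = np.floor((phi % (2 * np.pi)) * H / (2 * np.pi)).astype(int) % H
    v = np.floor((theta % (2 * np.pi)) * W / (2 * np.pi)).astype(int) % W
    img = np.zeros((H, W), dtype=np.float32)
    img[u, v] = r                                        # Eq. (4): each pixel stores the range
    return img

# Example: project a random point cloud to a range image
cloud = np.random.randn(1000, 3) * 10.0
range_image = spherical_projection(cloud)
```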
We test the two projection counterparts in Section 4.4, where the spherical projection performs better than the BEV. The reason could be that LiDARs operate with the spinning mechanism, which is better modeled by the spherical projection. By contrast, the BEV projection loses information as some points are stacked on the same pixel and is thus not a bijective mapping.
In the following discussion, we use the spherical projection strategy in our pipeline. We treat the spherical projection points as the image modality input, while the 3D features are extracted directly from the point cloud in a separate backbone that we will introduce later. Following common practice in 2D image processing, we use ResNet [14] as the backbone for the image modality. Denoting the ResNet backbone as \(f^{\rm sph}(\cdot)\), the final spherical projection features \(F^{\rm sph}\in\mathbb{R}^{H^{\rm sph}\times W^{\rm sph}\times C}\) are obtained as:
\[F^{\rm sph}=f^{\rm sph}\bigl{(}I^{\rm sph}\bigr{)}. \tag{6}\]
**3D Feature Extraction.** Effective 3D point feature extraction is critical in the model design. PointNet++ [29] has shown promising performance in various tasks [39, 40]. In our pipeline, we use PointNet++ as the backbone branch for 3D feature extraction.
PointNet++ only considers the neighboring information within a determined range, i.e., in the set abstraction (SA) layer, each centroid uses the maximum neighboring feature value as its updated feature as shown in Fig. 3. In pose regression, the estimation accuracy of the pose benefits from an effective global representation. Thus to enable PointNet++ to additionally aggregate more global information, we introduce an additional graph attention (GA) layer after each SA layer to build the set abstraction graph attention (SAGA) layer as shown in Fig. 3. In this GA layer, we construct a complete graph whose node set contains all centroids from the SA layer. We denote the output of the SA layer as \(P\in\mathbb{R}^{N^{\rm SA}\times C^{\rm SA}}\) with \(N^{\rm SA}\) centroids, each with a \(C^{\rm SA}\)-dimensional feature vector. We first use a Fully-Connected (FC) layer to generate the multi-head features:
\[P_{k}^{\rm FC}=PW_{k}+b_{k}, \tag{7}\]
where \(W_{k}\) and \(b_{k}\) are a linear operation and an additive bias, respectively. These are learnable parameters of the \(k\)-th head. Then the attention weight matrix \(A_{k}\in\mathbb{R}^{N^{\rm SA}\times N^{\rm SA}}\) can be obtained by computing the dot product among all the neighboring nodes:
\[A_{k}={\rm Softmax}\Bigl{(}P_{k}^{\rm FC}\bigl{(}P_{k}^{\rm FC}\bigr{)}^{\top}\Bigr{)}, \tag{8}\]
where \({\rm Softmax}(\cdot)\) denotes the row-wise softmax function. The output features of the GA layer \(P^{\rm GA}\in\mathbb{R}^{N^{\rm GA}\times C^{\rm GA}}\) are generated by concatenating the weighted features from all heads as:
\[P^{\rm GA}=\big{\Vert}_{k}\,A_{k}P_{k}^{\rm FC}, \tag{9}\]
where \(\|\) denotes the concatenation operation. We then stack \(L^{\rm 3D}\) such SAGA layers to build the 3D feature extraction backbone.
Figure 3: The SAGA layer consists of a SA layer and a GA layer.
Figure 2: Visualization of the spherical and BEV projection methods.
We denote the final output features as \(F^{\rm 3D}\in\mathbb{R}^{N^{\rm 3D}\times C}\), as shown in Fig. 1.
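For illustration, a minimal PyTorch sketch of the GA layer described by Eqs. (7)-(9) is shown below. Tensor shapes, module names, and the example dimensions are our own choices; the released code of the paper may differ.

```python
import torch
import torch.nn as nn

class GraphAttention(nn.Module):
    """Multi-head graph attention over a complete graph of SA centroids, Eqs. (7)-(9)."""
    def __init__(self, in_dim, head_dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        # One FC layer per head, Eq. (7): P_k^FC = P W_k + b_k
        self.fc = nn.ModuleList([nn.Linear(in_dim, head_dim) for _ in range(num_heads)])

    def forward(self, P):
        # P: (N_SA, C_SA) centroid features produced by the SA layer
        outputs = []
        for k in range(self.num_heads):
            P_fc = self.fc[k](P)                              # Eq. (7)
            A = torch.softmax(P_fc @ P_fc.t(), dim=-1)        # Eq. (8), row-wise softmax of dot products
            outputs.append(A @ P_fc)                          # weighted features of head k
        return torch.cat(outputs, dim=-1)                     # Eq. (9), concatenation over heads

# Example: 256 centroids with 128-dimensional features
ga = GraphAttention(in_dim=128, head_dim=32, num_heads=8)
out = ga(torch.randn(256, 128))   # -> (256, 256)
```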
### Hyperbolic Feature Learning
In this subsection, we first state the motivation for such hyperbolic feature learning. We then introduce the hyperbolic embedding operators that will be used in Section 3.3 to fuse features extracted from the 3D LiDAR point cloud and spherical LiDAR projection in Section 3.1.
**Motivation.** After feature extraction using the two backbone branches, we need an effective fusion strategy to consider both point features and projection features. Embedding in a hyperbolic space has recently gained increased interest and shown promising performance in various fields [4, 7, 11, 13, 48]. The paper [24] argues that 3D point cloud objects possess inherent hierarchies due to their nature as compositions of small parts, which can be embedded in the hyperbolic space. Following this motivation, we consider leveraging the hyperbolic embedding method in our pipeline, such that features can be equipped with more various representations that come from different embedding spaces. Our ablation study (cf. Table 3 of Section 4.4) also indicates that hyperbolic embedding can lead to improvements in the pose estimation accuracy.
**Hyperbolic Embedding.** Unlike the common Euclidean space, the hyperbolic space has constant negative curvature and a metric different from the Euclidean \(\ell_{2}\) norm \(\|\cdot\|\). Under this metric, the volume of a ball in hyperbolic embedding space grows exponentially with its radius, rather than polynomially as in Euclidean spaces.
Similar to [11], we use the \(n\)-dimensional _Poincare ball_\((\mathbb{D}_{c}^{n},g^{\mathbb{D}})\) for our hyperbolic embedding with the parameter \(c\) indicating constant negative curvature \(-c^{2}\). More specifically, \(g^{\mathbb{D}}\) is the Riemannian metric, and \(\mathbb{D}_{c}^{n}\) is defined as:
\[\mathbb{D}_{c}^{n}=\big{\{}x\in\mathbb{R}^{n}\,:\,c\|x\|^{2}<1,c\geqslant 0 \big{\}}, \tag{10}\]
where the distance \(d\) between two points \(x\) and \(y\) on \(\mathbb{D}_{c}^{n}\) is defined as:
\[d(x,y)=\frac{2}{\sqrt{c}}\operatorname{arctanh}(\sqrt{c}\|{-x}\oplus_{c}y\|), \tag{11}\]
where \(\oplus_{c}\) is the _Mobius addition_ defined as follows:
\[x\oplus_{c}y=\frac{(1+2c\langle x,y\rangle+c\|y\|^{2})x+(1-c\|x \|^{2})y}{1+2c\langle x,y\rangle+c^{2}\|x\|^{2}\|y\|^{2}}. \tag{12}\]
To support features transferring from Euclidean spaces to hyperbolic spaces, the differentiable bijective operator named _exponential map_ is induced. For a fixed base point \(x\in\mathbb{D}_{c}^{n}\), where the tangent space at \(x\) is a Euclidean space, the exponential map \(\exp_{x}^{c}:\mathbb{R}^{n}\to\mathbb{D}_{c}^{n}\) establishes the connection between the tangent Euclidean space and the hyperbolic space at \(x\) as:
\[\exp_{x}^{c}(v)=x\oplus_{c}\bigg{(}\tanh\!\bigg{(}\sqrt{c}\frac{ \lambda_{x}^{c}\|v\|}{2}\bigg{)}\frac{v}{\sqrt{c}\|v\|}\bigg{)}, \tag{13}\]
where \(\lambda_{x}^{c}\) is the _conformal factor_. In our pipeline, we assume the input features are in this tangent space and would like to embed them in the hyperbolic space \(\mathbb{D}_{c}^{n}\).
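A minimal PyTorch sketch of the Möbius addition in Eq. (12) and the exponential map in Eq. (13) is given below. Here \(\lambda_{x}^{c}=2/(1-c\|x\|^{2})\) is the standard conformal factor of the Poincaré ball; the small epsilon added for numerical stability and the example values are our own assumptions.

```python
import torch

def mobius_add(x, y, c):
    # Mobius addition on the Poincare ball, Eq. (12); x, y: (..., n) tensors
    xy = (x * y).sum(dim=-1, keepdim=True)
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den.clamp_min(1e-15)  # epsilon guard for numerical stability (our assumption)

def expmap(x, v, c):
    # Exponential map at base point x, Eq. (13), with the standard conformal
    # factor lambda_x^c = 2 / (1 - c * ||x||^2) of the Poincare ball.
    sqrt_c = c ** 0.5
    v_norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-15)
    lam = 2.0 / (1 - c * (x * x).sum(dim=-1, keepdim=True)).clamp_min(1e-15)
    second = torch.tanh(sqrt_c * lam * v_norm / 2) * v / (sqrt_c * v_norm)
    return mobius_add(x, second, c)

# Example: embed Euclidean features into the ball at base point 0 (as used in Section 4.1)
feats = torch.randn(16, 64) * 0.1
hyp_feats = expmap(torch.zeros(16, 64), feats, c=1.0)
```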
### Feature Fusion Block
Based on the above-mentioned motivation, we propose the feature fusion block (FFB) to achieve effective feature interaction. Each FFB conducts space-specific interaction and modal-specific interaction alternately, which is similar to the commonly used cross-self-attention operation. We stack \(L\) such FFBs in our pipeline.
**Feature Merging.** Given the extracted 3D features \(F^{\rm 3D}\) and spherical projection features \(F^{\rm sph}\), we first pass them through an \(\ell_{2}\) normalization layer such that all features are constrained on a sphere. This is a common way to process multi-modal data [30]. We then formulate a fusion graph with complete edge connections, in which each node contains features from either \(F^{\rm 3D}\) or \(F^{\rm sph}\). In addition, to enable features to interact directly with the global representation, we add two extra node features that are processed by the global average pooling module. The fusion graph node features are collected in the following set:
\[F=\big{\{}F^{\rm 3D},F^{\rm sph},\mathrm{Pooling}(F^{\rm 3D}),\mathrm{Pooling }(F^{\rm sph})\big{\}}. \tag{14}\]
**Space-specific Interaction.** We embed the fusion features \(F\) into the hyperbolic and Euclidean spaces as \(F^{\mathbf{H}}=\exp_{x}^{c}(F)\) (where the \(\exp\) operator is applied node-wise) and \(F^{\mathbf{E}}=F\), respectively, to perform feature interaction using GA layers. Specifically, in the same way as (7), we obtain the \(k\)-th head FC features for \(F^{\mathbf{H}}\) or \(F^{\mathbf{E}}\), denoted as \(F_{k}^{\rm FC}\). We additionally leverage a learnable matrix \(M\) regarded as a feature relationship metric such that the attention weights are computed as:
\[A_{k}^{M}=\mathrm{Softmax}\Big{(}F_{k}^{\rm FC}M\bigl{(}F_{k}^{\rm FC}\bigr{)}^{\top}\Big{)}. \tag{15}\]
In Riemannian geometry, a Riemannian metric on a smooth manifold is a smooth symmetric covariant 2-tensor field that is positive definite at each point. The learnable matrix \(M\) can be viewed as a more general extension of the Riemannian metric, where we do not impose any constraint on it, leaving it to update freely. The effectiveness of this design can be seen in Table 5, where the free metric surpasses other counterparts.
The learned feature embeddings from the Euclidean and hyperbolic spaces are then passed into two different GA
layers and finally fused together using element-wise adding:
\[F^{\text{space}}=w^{\mathbf{E}}F^{\mathbf{E}}+w^{\mathbf{H}}F^{\mathbf{H}}, \tag{16}\]
where \(w^{\mathbf{E}}\) and \(w^{\mathbf{H}}\) denote learnable weights for the Euclidean and hyperbolic embeddings \(F^{\mathbf{E}}\) and \(F^{\mathbf{H}}\), respectively. Each node feature has thus aggregated information from both Euclidean and hyperbolic spaces, which can be viewed as an adaptive combination of linearity and non-linearity that can contribute to a more effective feature representation.
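The space-specific interaction of Eqs. (15)-(16) can be sketched as follows. This is an illustrative, single-head outline rather than the authors' implementation: the `expmap` helper is assumed to be defined as in the earlier sketch, the GA layers are simplified, and all module and parameter names are our own.

```python
import torch
import torch.nn as nn

class SpaceSpecificInteraction(nn.Module):
    # Sketch of Eqs. (15)-(16): features are embedded in the Euclidean and
    # hyperbolic spaces, processed by two GA layers with a learnable metric M,
    # and fused by the learnable weights w_E and w_H.
    def __init__(self, dim, c=1.0):
        super().__init__()
        self.c = c
        self.M = nn.Parameter(torch.eye(dim))        # free learnable metric, Eq. (15)
        self.fc_e = nn.Linear(dim, dim)
        self.fc_h = nn.Linear(dim, dim)
        self.w_e = nn.Parameter(torch.tensor(1.0))   # learnable fusion weights, Eq. (16)
        self.w_h = nn.Parameter(torch.tensor(1.0))

    def attend(self, F, fc):
        F_fc = fc(F)
        A = torch.softmax(F_fc @ self.M @ F_fc.t(), dim=-1)   # attention with metric M, Eq. (15)
        return A @ F_fc

    def forward(self, F):
        F_e = F                                        # Euclidean embedding
        F_h = expmap(torch.zeros_like(F), F, self.c)   # hyperbolic embedding (node-wise exp map, earlier sketch)
        return self.w_e * self.attend(F_e, self.fc_e) + self.w_h * self.attend(F_h, self.fc_h)  # Eq. (16)
```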
**Modal-specific Interaction.** We next decouple the merged features back to 3D features and projection features again, enabling them to turn around and learn information within their own modality. This is similar to the self-attention operation in the cross-self-attention pipeline. Specifically, for the 3D features, we pass them through a GA layer (with the learnable matrix) with preceding and succeeding MLP layers, while for the 2D projection features, we pass them through a basic ResNet block. After the modal-specific interaction, 3D features and projection features are merged together again using (14) to reconstruct the fusion features.
### Pose Regression Head and Loss Function
The task of LiDAR pose regression requires predicting a 6-DoF pose. However, since the translation and rotation elements do not scale compatibly, the regression converges in different basins. To deal with this problem, previous methods [39, 46] consider the regression head with two parallel MLPs for translation and rotation regression, respectively. We thus use the same decoding head design as [39], which consists of two MLP layers for translation and rotation regression as shown in Fig. 1.
During training, to provide sufficient supervision to the whole pipeline, we use not only the fusion features but also the 3D and projection features at lower levels. As shown in Fig. 1, for the three features \(F^{\text{3D}}\), \(F^{\text{sph}}\), and \(F\), we use three different regression heads \(g^{\text{3D}},g^{\text{sph}},g\) respectively to predict their corresponding 6-DoF poses \((t^{\text{3D}},r^{\text{3D}})\), \((t^{\text{sph}},r^{\text{sph}})\), \((t,r)\). Specifically, we first perform global average pooling and then regression to obtain the predicted poses, which can be described as follows:
\[(t^{\text{3D}},r^{\text{3D}}) =(g^{\text{3D}}\circ\text{Pooling})(F^{\text{3D}}), \tag{17}\] \[(t^{\text{sph}},r^{\text{sph}}) =(g^{\text{sph}}\circ\text{Pooling})(F^{\text{sph}}),\] (18) \[(t,r) =(g\circ\text{Pooling})(F), \tag{19}\]
where \(\circ\) denotes the composition operation, and \(\text{Pooling}(\cdot)\) denotes the global average pooling operation. As for the rotation, we use the logarithmic format of the quaternion [39, 46]. Denoting the translation and rotation targets as \(t^{*}\) and \(r^{*}\), the final loss function is computed as:
\[\mathcal{L} =(\|t^{\text{3D}}-t^{*}\|+\|t^{\text{sph}}-t^{*}\|+\|t-t^{*}\|)e^ {-\lambda}+\lambda\] \[\quad+(\|r^{\text{3D}}-r^{*}\|+\|r^{\text{sph}}-r^{*}\|+\|r-r^{* }\|)e^{-\gamma}+\gamma \tag{20}\]
where \(\lambda\) and \(\gamma\) are learnable parameters. During inference, the pose \((t,r)\) predicted by the fusion features \(F\) is treated as the final prediction.
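A minimal PyTorch sketch of the training loss in Eq. (20) is given below. The learnable balance parameters \(\lambda\) and \(\gamma\) follow the equation; the initial values, the batch averaging, and the \(\ell_{2}\) norm over pose residuals are our own assumptions.

```python
import torch
import torch.nn as nn

class PoseLoss(nn.Module):
    # Sketch of Eq. (20): translation and rotation errors of the 3D, spherical,
    # and fusion heads, balanced by learnable parameters lambda and gamma.
    def __init__(self, init_lam=0.0, init_gamma=-3.0):   # initial values are assumptions
        super().__init__()
        self.lam = nn.Parameter(torch.tensor(init_lam))
        self.gamma = nn.Parameter(torch.tensor(init_gamma))

    def forward(self, preds, t_star, r_star):
        # preds: list of (t, r) tuples from the 3D, spherical, and fusion heads
        t_err = sum(torch.norm(t - t_star, dim=-1).mean() for t, _ in preds)
        r_err = sum(torch.norm(r - r_star, dim=-1).mean() for _, r in preds)
        return t_err * torch.exp(-self.lam) + self.lam \
             + r_err * torch.exp(-self.gamma) + self.gamma

# Example with dummy predictions from the three heads (log-quaternion rotations are 3-dimensional)
heads = [(torch.randn(4, 3), torch.randn(4, 3)) for _ in range(3)]
loss = PoseLoss()(heads, torch.randn(4, 3), torch.randn(4, 3))
```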
## 4 Experiments
In this section, we first evaluate our proposed model on datasets collected from outdoors and indoors. We next present ablation studies to demonstrate the effectiveness of our model design.
### Implementation Details
We use ResNet34 [14] pre-trained on ImageNet [9] as the backbone for projection features extraction. We use a batch size of \(32\). The number of attention heads is set as \(8\). Following [11], we set the base point \(x\) as \(0\) for hyperbolic embedding. We set \(L^{\text{3D}}=2\) and \(L=2\). The Adam [18] optimizer with the initial learning rate \(1\times 10^{-3}\) and weight decay \(5\times 10^{-4}\) is used for training. We train our network for \(150\) epochs. All the experiments are conducted on either an NVIDIA RTX 3090 GPU or an NVIDIA RTX A5000 GPU.
### Datasets
**Oxford Radar** is a large-scale outdoor autonomous driving dataset [5]. It provides data from multi-modal sensors, including LiDARs, cameras, Radars, and GPS, but in our experiments, we use only LiDAR information. It contains sensor data in the time span of \(1\) year and a length span of \(1000\) km. In addition, it covers various seasons and weather conditions, which thus allows a comprehensive evaluation of the models. Following [39, 46], we use the same benchmark data split setting, and we also report the mean translation rotation error.
**vReLoc** is an indoor robot dataset [39]. It consists of data from LiDARs, cameras, depth cameras, and motion trackers. In our experiments, we use only LiDAR information. It contains both static and dynamic scenarios with people walking around. Following [39, 46], we use the same benchmark data split setting, and we also report the median translation and rotation error.
### Main Results
We first compare HypLiLoc with other baselines on the Oxford Radar dataset. From Table 1, we observe that HypLiLoc achieves SOTA performance in all metrics. Especially on the route Full-9, HypLiLoc obtains \(3.45\) m mean translation error in the city-wise relocalization task compared to the second best performer PoseSOE with \(7.27\) m,
which demonstrates the effectiveness of HypLiLoc. In addition, compared with camera pose regression approaches that take images as inputs, LiDAR-based ones are generally more accurate. This verifies that point clouds generated by LiDARs are a more effective data modality for the re-localization task. Table 1 also indicates that for large-scale pose estimation, LiDAR pose regression approaches surpass both retrieval-based and odometry-based ones, and thus this approach is promising for many applications.
Note that pose regression approaches can be integrated into SLAM systems [2, 3, 21] to achieve even better accuracy and to perform fast global pose estimating, especially in cases where a global navigation satellite system is not available (e.g., indoors and urban areas with dense skyscrapers).
We next test HypLiLoc on the indoor vReLoc dataset in Table 2, where it achieves SOTA performance in \(7\) out of \(8\) metrics and shows strong competitiveness. We note that, in the indoor environment, the LiDAR-based approaches also generally outperform image-based ones.
### Analysis
**Ablation Study.** We provide insights into our design choices for HypLiLoc by ablating each module. From Table 3, every module in our design contributes to the final improved estimation accuracy. Making use of information from the projected point cloud image and hyperbolic-Euclidean feature fusion strategy both contribute to more accurate pose regression outputs.
\begin{table}
\begin{tabular}{l|l|c c c c} \hline \hline & Model & Full-6 & Full-7 & Full-8 & Full-9 \\ \hline \hline _LiDAR Retrieval_ & PointNetVLAD [34] & 28.48 / 5.19 & 17.62 / 3.95 & 23.59 / 5.87 & 13.71 / 2.57 \\ \hline _LiDAR Odometry_ & DCP [41] & 18.45 / 2.08 & 14.84 / 2.17 & 16.39 / 2.26 & 13.60 / 1.86 \\ \hline \multirow{4}{*}{_Image-based PR_} & PoseLSTM [35] & 26.36 / 6.54 & 74.00 / 9.85 & 128.25 / 18.59 & 19.12 / 3.05 \\ & MapNet [15] & 48.21 / 6.06 & 61.01 / 5.85 & 75.35 / 9.67 & 44.34 / 4.54 \\ & AD-MapNet [16] & 18.43 / 3.28 & 19.18 / 3.95 & 66.21 / 9.42 & 15.10 / 1.82 \\ & AtLoc+ [36] & 17.92 / 4.73 & 34.03 / 4.01 & 71.51 / 9.91 & 10.53 / 1.97 \\ & MS-Transformer [33] & 11.69 / 5.66 & 65.38 / 9.01 & 88.63 / 19.80 & 7.62 / 2.53 \\ \hline \multirow{4}{*}{_LiDAR-based PR_} & PointLoc [39] & 13.81 / 1.53 & 9.81 / 1.27 & 11.51 / 1.34 & 9.51 / 1.07 \\ & PosePN [46] & 16.32 / 2.43 & 14.32 / 3.06 & 13.48 / 2.60 & 9.14 / 1.78 \\ \cline{1-1} & PosePN++ [46] & 10.64 / 1.78 & 9.59 / 1.92 & 9.01 / 1.51 & 8.44 / 1.71 \\ \cline{1-1} & PoseSOE [46] & 8.81 / 2.04 & 7.59 / 1.94 & 9.21 / 2.12 & 7.27 / 1.87 \\ \cline{1-1} & PoseMinkLoc [46] & 11.20 / 2.62 & 14.69 / 2.90 & 12.35 / 2.46 & 10.06 / 2.15 \\ \cline{1-1} & HypLiLoc (ours) & **6.00** / **1.31** & **6.88** / **1.09** & **5.82** / **0.97** & **3.45** / **0.84** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean translation and rotation error (m/\({}^{\circ}\)) on the Oxford Radar dataset. The best and the second-best results in each metric are highlighted in **bold** and underlined, respectively. PR stands for pose regression. HypLiLoc achieves the best performance in all metrics.
\begin{table}
\begin{tabular}{l|l|c c c} \hline \hline & Model & Seq-05 & Seq-06 & Seq-07 & Seq-14 \\ \hline \multirow{4}{*}{_Image-based PR_} & PoseLSTM [35] & 0.16 / 4.23 & 0.18 / 5.28 & 0.24 / 7.05 & 0.13 / 4.81 \\ & MapNet [15] & 0.26 / 6.67 & 0.28 / 6.91 & 0.39 / 9.17 & 0.25 / 6.85 \\ & AD-MapNet [16] & 0.17 / 3.33 & 0.21 / 3.37 & 0.24 / 4.38 & 0.14 / 4.12 \\ & AtLoc+ [36] & 0.18 / 4.32 & 0.24 / 5.14 & 0.26 / 6.04 & 0.16 / 4.61 \\ & MS-Transformer [33] & 0.16 / 3.98 & 0.15 / 3.56 & 0.18 / 5.32 & 0.13 / 4.83 \\ \hline \multirow{4}{*}{_LiDAR-based PR_} & PointLoc [39] & 0.12 / 3.00 & 0.10 / 2.97 & **0.13** / 3.47 & 0.11 / 2.84 \\ & PosePN [46] & 0.12 / 4.38 & 0.09 / 3.16 & 0.17 / 3.94 & **0.08** / 3.27 \\ \cline{1-1} & PosePN++ [46] & 0.15 / 3.12 & 0.10 / 3.31 & 0.15 / 2.92 & 0.10 / 2.80 \\ \cline{1-1} & PoseSOE [46] & 0.14 / 3.15 & 0.11 / 2.90 & 0.15 / 3.06 & 0.11 / 3.20 \\ \cline{1-1} & PoseMinkLoc [46] & 0.16 / 5.17 & 0.11 / 3.74 & 0.21 / 5.74 & 0.12 / 3.64 \\ \cline{1-1} & HypLiLoc (ours) & **0.09** / **2.52** & **0.08** / **2.58** & **0.13** / **2.55** & 0.09 / **2.34** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Median translation and rotation error (m/\({}^{\circ}\)) on the vReLoc dataset. The best and the second-best results in each metric are highlighted in **bold** and underlined, respectively. PR stands for pose regression. HypLiLoc achieves the best performance in 7 out of 8 metrics.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Mean Error (m/\({}^{\circ}\)) on Full-8 \\ \hline base model & 9.78 / 1.99 \\ + global graph attention & 8.91 / 1.74 \\ + spherical-projection backbone & 7.26 / 1.36 \\ + feature fusion block & 6.19 / 1.13 \\ + learnable metric (full model) & **5.82** / **0.97** \\ \hline full model w/o hyperbolic branch & 6.57 / 1.19 \\ full model w/o Euclidean branch & 6.24 / 1.16 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study for different modules on Full-8 route of the Oxford Radar dataset.
**Different Projection Strategies.** We next compare different modality strategies. As shown in Table 4, we first test the performance using the single modality input, including the 3D point cloud, the spherical projection, and the BEV projection. Among them, the 3D and the spherical projection show similar performances, while the BEV performance is worse. This verifies our insight that the BEV projection is not a bijective mapping, and thus less information is retained.
When feeding two modalities, the combination _3D + spherical_ surpasses the other two counterparts, which is our final model choice for HypLiLoc. When we further add the BEV input, the performance drops instead.
**Learnable Matrix \(M\) Design.** We test the performance by applying different constraints on the learnable matrix \(M\) in (15). The Riemannian metric, which formulates a positive definite and symmetric matrix, has the strictest constraints. However, as shown in Table 5, the Riemannian metric does not provide performance improvements compared with the setting without any metric. If we only impose either the positive definite constraint or the symmetric constraint, the performance improves. Furthermore, if we exclude all constraints and enable the metric to evolve freely, we can achieve optimal performance.
**Computational Time and Storage.** We compare LiDAR-based models that belong to different relocalization pipelines. As observed in Table 6, regression-based models can operate at least \(2\) times as fast as both the retrieval-based and odometry-based approaches. For the runtime memory, regression-based models need only \(1/3\) that of retrieval and odometry methods (less than \(7\) GB). HypLiLoc can perform inference at a speed of \(48\) FPS, which is \(4\) times as fast as the retrieval model and over \(2\) times as fast as PointLoc.
**Visualization.** We visualize a typical output pose trajectory of HypLiLoc and PointLoc in Fig. 4. HypLiLoc outputs a smoother and more accurate pose trajectory compared with PointLoc.
## 5 Limitations
Although we have tested the proposed model in a city-wise dataset, verification of HypLiLoc's performance in challenging scenarios (e.g. with noise perturbations and adversarial attacks) is necessary for practical implementations.
## 6 Conclusion
In this work, we propose HypLiLoc, a novel network for LiDAR-based pose regression. It achieves effective feature extraction with global graph attention, hyperbolic-Euclidean interaction, and modal-specific learning. It achieves SOTA performance in both outdoor and indoor datasets.
## 7 Acknowledgment
This research is supported by A*STAR under its RIE2020 Advanced Manufacturing and Engineering (AME) Industry Alignment Fund - Pre Positioning (IAF-PP) (Grant No. A19D6a0053) and the National Research Foundation, Singapore and Infocomm Media Development Authority under its Future Communications Research and Development Programme. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)).
\begin{table}
\begin{tabular}{c l c c} \hline \hline & Model & Runtime Speed & Runtime Total Memory \\ \hline _Retrieval_ & PointNetVLAD [34] & 11 FPS & 26 GB \\ \hline _Odometry_ & DCP [41] & 10 FPS & 22 GB \\ \hline \multirow{2}{*}{_Regression_} & PointLoc [39] & 22 FPS & 7 GB \\ & HypLiLoc (ours) & 48 FPS & 6 GB \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparison of the runtime speed and the runtime total memory of different models.
Figure 4: Trajectory visualization on the Oxford Radar dataset. The ground truth trajectories are shown in bold blue lines, and the estimated trajectories are shown in thin red lines.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline \#Modalities & 3D & Sph. & BEV & Mean Error (m/\({}^{\circ}\)) on Full-8 \\ \hline \multirow{3}{*}{1} & ✓ & & & 8.91 / 1.74 \\ & & ✓ & & 8.94 / 2.18 \\ & & & ✓ & 9.46 / 2.33 \\ \hline \multirow{3}{*}{2} & ✓ & ✓ & & **5.82 / 0.97** \\ & ✓ & & ✓ & 9.44 / 1.80 \\ & & ✓ & ✓ & 6.77 / 1.02 \\ \hline 3 & ✓ & ✓ & ✓ & 6.32 / 1.01 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of different projection methods on Full-8 route of the Oxford Radar dataset. For the single modality, we do not use the feature fusion block.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Mean Error (m/\({}^{\circ}\)) on Full-8 \\ \hline w/o \(M\) & 6.19 / 1.13 \\ Riemannian & 6.34 / 1.28 \\ positive definite & 6.18 / 1.13 \\ symmetric & 6.02 / 1.24 \\ no constraint & **5.82 / 0.97** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of different constraints on Full-8 route of the Oxford Radar dataset. |
2309.00716 | Migration processes in the Solar System and their role in the evolution
of the Earth and planets | We discuss problems of planetesimal migration in the emerging Solar System
and exoplanetary systems. Protoplanetary disk evolution models and the
formation of planets are considered. The formation of the Moon and of the
asteroid and trans-Neptunian belts is studied. We show that Earth and Venus
could acquire more than half of their mass in 5 million years, and their outer
layers could accumulate the same material from different parts of the feeding
zone of these planets. The migration of small bodies toward the terrestrial
planets from various regions of the Solar System is simulated numerically.
Based on these computations, we conclude that the mass of water delivered to
the Earth by planetesimals, comets, and carbonaceous chondrite asteroids from
beyond the ice line could be comparable to the mass of Earth's oceans. The
processes of dust migration in the Solar System and sources of the zodiacal
cloud are considered. | M. Ya. Marov, S. I. Ipatov | 2023-09-01T19:46:47Z | http://arxiv.org/abs/2309.00716v1 | # Migration processes in the Solar System and their role in the evolution of the Earth and planets
###### Abstract
We discuss problems of planetesimal migration in the emerging Solar System and exoplanetary systems. Protoplanetary disk evolution models and the formation of planets are considered. The formation of the Moon and of the asteroid and trans-Neptunian belts is studied. We show that Earth and Venus could acquire more than half of their mass in 5 million years, and their outer layers could accumulate the same material from different parts of the feeding zone of these planets. The migration of small bodies toward the terrestrial planets from various regions of the Solar System is simulated numerically. Based on these computations, we conclude that the mass of water delivered to the Earth by planetesimals, comets, and carbonaceous chondrite asteroids from beyond the ice line could be comparable to the mass of Earth's oceans. The processes of dust migration in the Solar System and sources of the zodiacal cloud are considered.
Keywords: Solar System, migration, planetesimals, terrestrial planets, giant planets, growth of planetary embryos, formation. DOI: 10.3367/UFNe.2021.08.039044
###### Contents
* 1 Introduction
* 2 Formation of the protoplanetary disk and planetesimals
* 2.1 Formation and evolution of the protoplanetary disk; 2.2 Formation and evolution of dust clusters; 2.3 Formation of rarefied clumps and planetesimals; 2.4 Timeline of planetary formation and estimates of the age of the Solar System
* 3 Migration of planetesimals during the formation of the terrestrial planets
* 3.1 Modeling an isolated aggregation of the terrestrial planets; 3.2 Modeling the aggregation of the terrestrial planets with the influence of the giant planets taken into account; 3.3 Formation times of the terrestrial planets; 3.4 Formation of the Earth-Moon system
* 4 Migration of planetesimals and planetary embryos in the feeding zone of the giant planets
* 5 Formation of the asteroid and trans-Neptunian belts
* 6 Volumes of water and volatiles delivered to the terrestrial planets
* 7 Migration of bodies from the asteroid and trans-Neptunian belts to the Earth's orbit and the problem of the asteroid-comet hazard
* 8 Migration of dust in the Solar System and the formation of the zodiacal cloud
* 9 Migration of planetesimals in exoplanetary systems
* 10 Conclusion
## 1 Introduction
The migration of bodies is one of the key processes in the formation and evolution of a planetary system. Migration processes make a significant contribution to the transfer of planetesimals and to the dynamics of the resulting configurations (planetary system architecture) at the formation stage of planet embryos and their satellite systems, and retain their role throughout the evolution with regard to the most dynamical small bodies (asteroids, comets, and meteoroids). Due to migration, matter was transferred during the formation of Solar System planets from the regions of the giant planets and the Kuiper belt toward the terrestrial planets, with a fundamental impact on their nature. In the modern era, the asteroid-comet hazard (ACH) problem for the Earth is directly related to the migration of small bodies. Planet formation models and the fallout of planetesimals on growing planets, initially formed at different distances from the Sun, have been considered in numerous studies by researchers in Russia and elsewhere, including the authors of this paper, by numerically solving the relevant physical and dynamical problems of planetary cosmogony. The structure and formation of the Solar System, including the data on planets, their satellites, and small bodies, are considered in detail in [1, 2, 3].
In this paper, we discuss the problems of planetesimal migration and planetary dynamics in the emerging Solar
System, including models of the migration of bodies from different regions of the Solar System toward the terrestrial planets. The formation of planetary bodies and asteroid and trans-Neptunian belts is considered. The migration of dust in the Solar System and the sources of the zodiacal cloud are studied. The problems of migration of planetesimals and planets in exoplanetary systems are discussed based on similarities with the dynamical properties of the Solar System. The original models considered by the authors serve to develop modern concepts in key areas of stellar and planetary cosmogony.
## 2 Formation of the protoplanetary disk and planetesimals
### Formation and evolution of the protoplanetary disk
Stars are born in gradually compressing clusters during the fragmentation of interstellar clouds with the formation of a protostellar nebula, the cradle of a star and its planetary system. According to currently accepted notions, the sequence of planetary system formation processes includes the formation of an accretionary gas-dust disk around the parent star (mainly of the late spectral type) and its decay into primary clumps, from which solid bodies (planetesimals), planetary embryos, and eventually the planets themselves form. The key role in this sequence is played by the different (hydrodynamic and gravitational) instability types in the disk, which initiate its fragmentation, the accretion of solids and their subsequent growth, and various dynamical processes, among which the leading role is played by resonances, tidal effects, and migration of planetesimals and planet embryos (see, e.g., [4, 5, 6, 7]).
A significant contribution to the formation of a protosolar nebula could be made by the explosion of a relatively nearby supernova and the appearance of a shock front. This would lead to additional cloud compression and implantation of short-lived isotopes such as \({}^{26}\)Al and \({}^{60}\)Fe, which have a significant impact on the cloud heating and evolution at an early stage [8, 9]. Evidence of such a process is the enrichment of a number of meteorites with the stable \({}^{26}\)Mg isotope, which is produced from the parent isotope \({}^{26}\)Al with a half-life of \(\sim 0.72\) Myr [10]; the daughter isotopes of \({}^{60}\)Fe are much shorter-lived.
The cloud contains only 1 to 2% of dust particles by mass. Due to the rapid rotation, it condenses and flattens, which causes the formation of a hot dense thickening in the central region (a proto-Sun overcoming the threshold of thermo-nuclear fusion reactions), surrounded by a gas-dust accretion disk made of the remaining initial material of the nebula, whose mass is no more than 2 to 3% of the proto-Sun mass according to estimates. This disk serves as a cradle for the formation of planets and small Solar System bodies and is obviously analogous to protoplanetary circumstellar disks. The subsequent evolution includes continued accretion of the nebula matter onto the disk and simultaneous partial accretion of the disk matter onto the proto-Sun until this process is superseded by an intensive sweeping out of gas and volatile components and the removal of high-temperature condensates by the radiation pressure and proto-Sun plasma from its vicinity to the periphery of the Solar System [2]. This is evidenced by the presence of refractory chondrules embedded in the chondrite matrix at radial distances \(R>2\)-3 AU.
This scenario underlies modern cosmogonic models based on astronomical observations of circumstellar protoplanetary disks, their structural features, and numerous exoplanet systems of both single and binary stars (see, e.g., [11, 12, 13, 14, 15, 16, 17]). An enormous contribution to the study of circumstellar disks and planetary system formation was made by observations of their structure, composition, and dynamics obtained on millimeter waves by the network of ground-based radio telescopes ALMA (Atacama Large Millimeter Array); this includes studies within the Resolving Star Formation with ALMA program and Protostellar Interferometric Line Survey (PILS) (see [18, 19, 20, 21]). Together with IR observations from the Hubble, Spitzer, and Herschel space telescopes, they produced a breathtaking picture of all the protostellar nebula components combining to create planetary systems out of this 'cosmic stew.'
Reconstruction of the formation and growth of primary solid particles in a gas-dust protoplanetary disk at an early stage of evolution is an extremely difficult task, which can only be properly defined and solved using mathematical modeling methods, with a number of constraints imposed by the available results of astronomical observations and laboratory experiments on modeling particle interactions. A protoplanetary disk is a gas-dust turbulent medium with a magnetic field, which generally requires the use of heterogeneous mechanics and magnetohydrodynamic methods. Setting aside plasma effects, the motion of the gas-dust medium in the disk is modeled most properly in the framework of the mechanics of heterogeneous turbulent media with numerous factors taken into account, including the physico-chemical properties of phases, heat and mass transfer, variations in the medium opacity to stellar radiation, viscosity, chemical reactions, phase transitions (position of the evaporation-condensation border), and coagulation. A rigorous mathematical treatment of these problems was given in monographs [22, 23] based on numerous publications by the authors on planetary cosmogony problems. They analyze the nature of the dynamical interaction between turbulent gas and dust, including the effect of the turbulence energy of the carrier phase on the behavior of solid particles and the inverse effect exerted by the dust component on the dynamical and thermal regimes of the gas phase, with the coagulation processes occurring in the disk taken into account. According to modern concepts, the collapsing gaseous shell of a nebula loses significant mass in the active young sun state at the T Tauri stage, before it joins the main sequence of the H-R diagram, with a characteristic time \(\sim 10\) Myr. The loss of gas terminates much later, at the stage of the growth of dust particles to the initial solids, and therefore the interaction of particles with gas must be taken into account in modeling.
### Formation and evolution of dust clusters
Numerical experiments have shown that it is not individual particles but their agglomerations -- dust clusters -- that merge much more easily in mutual collisions. According to [23, 24], rarefied dust clusters with a fractal structure and their interaction during collisions at moderate velocities are the key mechanisms for the agglomeration of dust particles and the growth of primary solids, which thus become a basis for the subsequent formation of planetesimals and planetary embryos. Physically, such a process seems to be substantiated quite well. The actual structure of dust clusters has an extremely complex and irregular geometry, and, although the mass fractal dimension used does not fully reflect the geometric properties of the fractal, it nevertheless allows
taking the main properties of loose fractal structures into account when modeling cluster-cluster association processes [25; 26; 27; 28; 29; 30] and spatiotemporal evolution. Not all particles of the gas-dust disk necessarily belong to clusters: some of them can form dust clouds that fill the Hill sphere and later fall onto planetesimals and planetary embryos.
Collisional interactions of dust clusters lead to the formation of denser structures, and the clusters themselves can contain both dense and loose (porous) particles [24; 31]. The clusters presumably also have a porous or fluffy structure, and, mimicking the tendency to form snow particles, are capable of forming very loose conglomerates of a fractal nature. This greatly facilitates the mathematical modeling of the growth of bodies in the disk due to the collision of clusters and particles inside them. When a large number of small dust clusters combine, homogeneous 'fuzzy' ('shaggy') aggregates form that have self-similar properties at short distances and whose inner voids gradually increase in volume, with a simultaneous increase in the average density of the merging bodies [24].
Indeed, such fluffy aggregates, due to their extremely high porosity, are resistant to destructive collisions at high impact velocities, and their radial drift in the disk is very slow. For typical shaggy aggregates of a relatively large geometric cross section compared to compact dust particles, the entire regime of motion in the gas carrier flow changes; in particular, the conditions for the occurrence of flow instability change due to significant changes in the aerodynamic drag of dust and gas. In addition, the efficiency of bouncing of colliding porous structures can change significantly [1; 24].
Thus, we regard the set of loose dust clusters of a protoplanetary subdisk as a special type of continuous (fractal) medium that has points and regions not filled with components, which greatly facilitates the process of merger/growth of particles and the formation of primary solids. We note that spectral observations of disks around young T Tauri stars indicate that fine (\(\lesssim 1\) \(\mu\)m) dust particles persist in them for 1-10 Myr. At the same time, in accordance with model estimates, much larger clumps of the same mass could grow during this time in the inner part of the disk (at distances \(R<10\) AU from the star).
Obviously, each of the proposed mechanisms of formation of large solid bodies in the disk with the subsequent formation of planetesimals and planetary embryos is based on the concept of the merger of the initial nanometer-size dust particles during their collisional interactions. But, as we have already mentioned, the direct merger of even small dust particles is ineffective, as is evidenced by estimates and results of laboratory experiments. Especially problematic as regards growth is the range from centimeter- to meter-size bodies. But even in the nanometer and micrometer-size ranges (the typical size of dust in interstellar clouds), the growth mechanism of solid dust particles, apparently driven mainly by van der Waals forces and electrostatic interactions, is problematic. Some exceptions may be provided by particles of amorphous water ice, concentrated beyond the ice line. Meanwhile, there is quite definitive evidence of the accretion of fairly large bodies such as pebble-cobblestones and their further association in the form of 'pebble piles' bound by gravitational forces.
### Formation of rarefied clumps and planetesimals
Turbulence strongly affects the evolutionary processes in the disk. As turbulence phases out, the particles descend to the central plane of the disk, where they create a dusty subdisk, which in the course of its further compaction and the development of gravitational instability decomposes into primary clumps (dispersive dust clusters) of a wide range of sizes. Collisional interactions of the clumps and a progressive increase in particle size then gradually give rise to intermediate-size solid bodies (planetesimals) and planetary embryos.
Historically, several scenarios for such a process have been proposed. According to [32], the initial planetesimals are \(\sim 100\) km in size in the zone of Neptune and \(\sim 1\) km in the zone of the terrestrial planets. Greenberg et al. [33] supposed that most of the planetesimals in the Neptune zone were much smaller, of the order of a kilometer. According to estimates by Safronov and Vityazev [34], the initial masses of planetesimals were \(\sim 10^{20}\) g in Earth's zone, reaching \(\sim 10^{26}\) g in Jupiter's zone, \(10^{27}\) g in Saturn's, and \(3\times 10^{27}\)-\(10^{28}\) g in Uranus's and Neptune's. The model by Eneev and Kozlov [35; 36] was fundamentally different: according to it, there was a certain mechanism for delaying the contraction of clumps, which combined into giant rarefied protoplanets about the size of their Hill spheres, and planets were formed during their subsequent contraction. In the 1990s, models that assumed the aggregation of planetesimals from small solids gained popularity (see, e.g., [37]), because self-generated turbulence in the dust subdisk was believed to prevent the accretion of dust clumps and the formation of protoplanetesimals due to gravitational instability.
In the early 2000s, new arguments were found to support the formation of rarefied dust clumps -- clusters [38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61]. In addition to dust, such clumps could include decimeter-size objects such as pebbles and larger bodies (cobblestones) formed during collisions of dust clusters. In contrast to the planet formation region, in the trans-Neptunian region at radial distances greater than 30 AU from the Sun, shear turbulence in the dusty subdisk does not extend to the zone near the equatorial plane, where the critical density is reached more easily and the disk breaks up into dust clusters [38]. According to estimates, relatively small (\(\sim 10^{9}\) cm) disk fragments could rapidly contract, forming bodies \(\sim 10\) km in size and larger (\(\sim 100\) km) planetesimals in about \(\sim 10^{6}\) years. Studied in [60] was the formation of solid planetesimals with radii from 10 to several thousand kilometers during the contraction of clusters the size of a Hill sphere, located at about 40 AU from the Sun. The mechanisms of the formation and growth of dust clumps due to dust absorption were considered by Marov et al. [56] with the thermodynamic constraints taken into account, including the effect of dust on the opacity of the gas-dust medium and the dissipation of turbulent energy, with the conclusion that, in Earth's zone, such dense clumps (protoplanetesimals) with a mass of \(\sim 10^{21}\)-\(10^{22}\) g and a radius of 50-100 km could form in \(\sim 10^{3}\)-\(10^{4}\) years, depending on the initial particle sizes.
In [43], evidence was obtained for the effective formation of gravitationally bound clumps in a mass range corresponding to planetesimals 100 to 400 km in radius formed during contraction in the asteroid belt and 150 to 730 km in radius in the trans-Neptunian belt. Trans-Neptunian objects (TNOs) with semimajor axes from 30 to 50 AU form what is often called the Kuiper belt or the Edgeworth-Kuiper belt (although, we note, G Kuiper, who studied this problem in 1951, believed that such a belt existed only during the formation of the Solar System and is currently nonexistent). The presence of this belt was predicted much earlier by other researchers: Frederick C Leonard and Armin O Leuschner in 1930 and Kenneth Edgeworth in
1943. The masses of the formed planetesimals obtained in the computations in [43] are proportional to \(R^{3/4}\), and therefore at a distance of 30 AU they are 5 to 6 times larger than in the asteroid belt. This is consistent with the characteristic maximum masses of asteroids and TNOs. The possibility of the formation of binary planetesimals was shown in [45]. Arguments in favor of the emergence of initially large asteroids (with diameter \(d>100\) km) were discussed in [50]; in [62], the size of TNOs was shown to not exceed 400 km, with larger objects slowly growing due to interaction with surrounding bodies. The results of observations of particle ejection from comets 67P and 103P [59; 63; 64; 65] testify in favor of the formation of comet nuclei during the contraction of rarefied clusters consisting of particles in the millimeter, centimeter, and decimeter size ranges.
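The factor of 5-6 quoted above follows directly from the \(R^{3/4}\) mass scaling; a minimal numerical check (ours, not from [43]), assuming a representative asteroid-belt distance of roughly 2.7-3.2 AU:

```python
# Check of the M ~ R^(3/4) scaling for planetesimal masses quoted in the text.
# The representative asteroid-belt distances (2.7 and 3.2 AU) are our assumption.
for r_belt in (2.7, 3.2):
    ratio = (30.0 / r_belt) ** 0.75
    print(f"mass ratio at 30 AU vs {r_belt} AU: {ratio:.1f}")
# Prints ratios of about 5.4-6.1, i.e. the factor of 5-6 quoted in the text.
```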
An alternative (or indeed complementary) approach to the development of gravitational (Jeans) instability in the disk, which is responsible for the formation of initial clumps, planetesimals, and large solids, is of a hydrodynamic nature. It is also known as streaming instability. The main concept is the imbalance between the surface gas-dust density and mass transfer [66]. Two main scenarios for such instability have been proposed. The first is based on the idea that disk/subdisk turbulence can create localized regions with a high dust-to-gas ratio, which then grow and eventually reach the size of large bodies [67]. It is assumed that there is either a passive accumulation of particles by turbulence on large scales comparable to the dissipative interval of turbulence or accumulation of particles inside turbulent eddies, which, as it were, play the role of a trap, as was also noted in [23; 68]. The appearance of such formations in zonal flows [69], including aerodynamically emerging regions between vortices [70; 46], is possible. The second scenario assumes a feedback between gas and clumped particles in a two-phase flow, i.e., back reaction of particles on the gas flow. Such interaction between gas and dust is usually referred to as linear flow instability [67; 71]; it is responsible for the generation of initial protoplanetesimal embryos. Numerical simulation yields the dust-to-gas density ratio and some other parameters necessary for the implementation of such a mechanism. Large dust 'lumps,' especially those containing centimeter-size grains formed in a preceding collision and coagulation/coalescence processes, can have a significant impact on the stability of the flow of the gas-dust medium. It has also been shown that the nonlinear evolution of flow instability can be attended by gravitational instability at lower dust-to-gas ratios [72; 73; 74; 75; 76].
equatorial plane, while the clumps contracted in about 1 Myr with 1000 km objects formed by contraction of clusters containing 10 km bodies. It was shown by Myasnikov and Titarenko [80; 81] that the lifetime of such gas-dust clusters can exceed several million years, depending on the optical properties of the particles and the concentration of short-lived radionuclides.
There are alternative estimates, however. In [51], the time of planetesimal formation from rarefied clumps was shown to decrease with distance from the Sun. According to [43], it took 25 revolutions around the Sun for the maximum values of the particle mass per unit volume to become about 3000 times the gas density in the central plane. It was shown in [47] that some dense clumps evolve into dense objects in just 100 to 1000 revolutions around the Sun, and in [51; 82] the planetesimal formation rate was shown to be higher than the one obtained previously in [47]. In addition, it was found in [83] that the concentration of particles relative to the mean value increased by 3 to 4 orders of magnitude in just 15 to 30 revolutions around the Sun. According to the model in [57], the clusters that form planetesimals with radii greater than 1 km contracted in no more than 300 years. In the computations in [84], satellite systems that formed from clusters in 100 years (i.e., in 0.6 orbital periods at a distance of 30 AU) were usually made of two or more large objects and hundreds of smaller bodies. The authors of [85] believe that, at a distance of 40 AU from the Sun, planetesimals greater than 100 km in radius formed during collapse in 25 years, the process being much slower for smaller clumps. We note that all these times are likely underestimated, because the angular momentum of the collapsing cluster was not taken into account. The dependence of the contraction time of a rarefied cluster on its angular momentum was studied by Safronov [4] and Vityazev et al. [79].
It is difficult to judge what planetesimal contraction time is closest to reality. This time depends on the conditions in a given part of the disk and on the specific parameters of the cluster under consideration. The shortest contraction times correspond to the model of solids that merge in any collision, without taking the influence of gas and angular momentum into account (this is essentially the time of free fall into the center of the cluster). These factors, as well as various interaction options during collisions of objects that make up the cluster and the dependence of their size and composition on the distance from the Sun, significantly affect the estimated cluster contraction time. We note that this time can be longer than the time of their existence, and while some clumps are just being formed in a particular zone of the disk, others are already contracted into planetesimals.
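As an illustration of the shortest (free-fall) limit mentioned above, the following sketch is our own rough estimate, not taken from the works cited: it assumes a clump that initially fills its Hill sphere, for which the mean density depends only on the heliocentric distance, and uses the standard free-fall formula.

```python
import math

# Free-fall (contraction) time of a rarefied clump filling its Hill sphere,
# t_ff = sqrt(3*pi / (32*G*rho)).  For such a clump rho = 9*M_sun/(4*pi*a^3),
# i.e. it depends only on the heliocentric distance a, not on the clump mass.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m
YEAR = 3.156e7         # s

def free_fall_time_years(a_au: float) -> float:
    rho = 9.0 * M_SUN / (4.0 * math.pi * (a_au * AU) ** 3)
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / YEAR

for a in (1.0, 30.0, 40.0):        # illustrative heliocentric distances, AU
    print(f"a = {a:5.1f} AU  ->  t_ff ~ {free_fall_time_years(a):5.1f} yr")
# About 0.1 yr at 1 AU and a few tens of years at 30-40 AU; gas drag, collisions,
# and the angular momentum discussed above can only lengthen these lower bounds.
```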
In our models of planetary accretion and body migration discussed in Sections 3-9, we study the evolution stages when the gas in the disk is exhausted and planetesimals have already formed. Therefore, the exact clump formation time and the nature of planetesimal formation (via contraction of clumps or via pebble accretion; see below) were not needed in these calculations. An exception is Section 3.4, where the contraction times of clumps were estimated in studying the formation of embryos in the Earth-Moon system and satellite systems of TNOs. It follows from the estimates presented in Section 3.4 that the time before the collision of two clumps giving rise to a parent clump for Earth-Moon embryos was most likely short, \(\sim 100\) years. As regards TNOs, explaining the fraction of satellite systems formed in them requires a lifetime of clumps of the order of \(10^{3}\) revolutions around the Sun. In other words, the lifetime of clumps in this zone of the Solar System does not exceed several hundred thousand years, and we therefore believe that the age of clumps equal to several million years in the scenario by Myasnikov and Titarenko [80; 81], which we mentioned above, is unlikely.
Based on an analysis of the properties of satellite systems, we can assume that only retrograde satellite orbits are quasistable up to the edge of the Hill sphere, while prograde orbits begin to lose stability under the influence of perturbations at about half of that sphere radius. This can be taken into account in the case of particle motion at the boundary of the Hill sphere at a low gas density. In our studies, we consider collisions of two clumps of the order of the Hill sphere in size with a low gas concentration, where the dominant role is played by the behavior of the major part of the mass. In studying the contraction of clumps, their angular momenta are taken into account in calculations by a number of authors; in particular, in Section 3.4, we discuss the effect of the angular momentum of an already formed cluster on the formation of satellite systems.
In recent years, several authors [86; 87; 88; 89; 90; 91] have studied the growth of planet and planetesimal embryos via pebble accretion. Such accretion occurred in the gas and terminated when the planet embryo reached a certain mass, the so-called pebble isolation mass. Such accretion could play a much greater role in the feeding zone of the giant planets than in that of the terrestrial planets. According to [88], this can explain the dichotomy of planets at different distances from the Sun. As noted in [90], the accretion of pebbles was faster than the subsequent accretion of planetesimals; the ratio of the total mass of pebbles falling on an embryo at a distance of 5 AU from the Sun to the total mass of pebbles passing through the vicinity of the embryo was about 20%, and a large amount of matter remained in the swarm of planetesimals after the formation of giant planets [87]. The planetary embryos could migrate toward the Sun due to interaction with the gaseous disk [88; 91], but the embryo heating during their accretion prevented such migration [92].
The discussion of the results of simulations of migration processes indicates that various possible mechanisms and formation times of planetesimals could be fundamental for the formation of a circumstellar planetary system at a later stage. A special case is the Solar System. Obviously, its initial configuration was very different from the present-day one, which formed as a result of a long-term dynamic interaction of the multiple various-size initial bodies via the oligarchic growth of large planet embryos, which absorbed smaller bodies at the final stage of the formation of a diverse planetary architecture, as is observed in exoplanet systems.
This also relates to the problem of the formation of eccentricities and inclinations of planetary rotation axes. If the angular momentum of a planet relative to its center of mass is perpendicular to its orbit plane before the collision with an impactor and the angular momentum increment acquired in the collision is perpendicular to the angular momentum itself, then the ratio of the mass of the impactor to the mass of the planet is [93]
\[\frac{m_{\rm I}}{m_{\rm pl}}\approx\frac{2.5\,\chi\,r_{\rm pl}\tan I}{\alpha\,v_{\rm par}\,T_{\rm pl}(1+\tan^{2}I)^{1/2}}\,, \tag{1}\]
where \(r_{\rm pl}\) is the radius of the planet, \(T_{\rm pl}\) is the planet axial rotation period, \(\alpha v_{\rm par}\) is the tangential component of the
collision velocity, \(v_{\rm par}\) is the surface parabolic velocity of the planet, and \(\tan I\) is the tangent of the angle \(I\) between the planet axis of rotation after the collision and the perpendicular to the planet orbit plane. The moment of inertia of the planet is \(0.4\chi m_{\rm pl}r_{\rm pl}^{2}\). For \(\chi=\alpha=1\), \(T_{\rm pl}=24\) h, and \(I=23.44^{\circ}\), relation (1) gives \(m_{\rm I}/m_{\rm pl}\approx 0.0065\). Hence, the present-day inclination of Earth's rotation axis could be caused by its collision with an impactor with a mass of about \(0.01m_{\rm E}\), where \(m_{\rm E}\) is Earth's mass. Presumably, the collision with an impactor of a much larger mass led to the inclination of the rotation axis of Uranus, which lies almost in the orbital plane (deviating from it by \(2.23^{\circ}\)), and also to the 'topping over' accompanied by deceleration and reversal of the sense of proper rotation of Venus [94; 95]. The orbit parameters and the tilts of rotation axes are likely to be even more diverse in exoplanet systems.
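The numerical estimate quoted after relation (1) can be checked directly; a minimal sketch (ours), using standard values of Earth's radius and surface escape (parabolic) velocity:

```python
import math

# Numerical check of relation (1) for chi = alpha = 1, T_pl = 24 h, I = 23.44 deg.
r_pl = 6.371e6            # m, Earth's radius
v_par = 1.119e4           # m/s, parabolic (escape) velocity at Earth's surface
T_pl = 24.0 * 3600.0      # s, axial rotation period
I = math.radians(23.44)   # present-day obliquity

m_ratio = 2.5 * r_pl * math.tan(I) / (v_par * T_pl * math.sqrt(1.0 + math.tan(I) ** 2))
print(f"m_I / m_pl ~ {m_ratio:.3g}")   # ~0.0065, consistent with the value in the text
```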
### Timeline of planetary formation and estimates of the age of the Solar System
Scenarios of the growth of initial solids and the formation of planetesimals are closely related to the timeline of planetary formation and to estimates of the age of the Solar System. The terrestrial planets were formed by aggregation of planetesimals, as were the embryos (cores) of Jupiter, Saturn, Uranus, and Neptune, which later acquired their gas and ice shells, while the formation of the terrestrial planets was completed even later. The most accurate determination of the Solar System's age is provided by the dating of refractory calcium-aluminum inclusions (CAIs) of micrometer to millimeter size formed during the crystallization of meteorites. They are likely to belong to ancient solid matter that was part of the primary composition of the protosolar nebula, which allows determining the absolute age of the Solar System from the first condensed dust particles to the present day. The dating of CAIs in primitive meteorites by different groups of authors using corrected U-Pb and Pb-Pb analyses gave close values of \(4567.22\pm 0.21\) Myr and \(4568.67\pm 0.17\) Myr (the latter value is believed to be more accurate [96; 97; 98; 99; 100]). At the same time, the age of chondrules is in the range of \(4567.32\pm 0.42\) to \(4564.71\pm 0.30\) Myr, which indicates that chondrules were formed almost simultaneously with CAIs and this process took \(\sim 3\) Myr. This period of time is close to the lifetime of the protoplanetary accretion disk, whose secular evolution is obviously directly related to this process.
At the same time, the absolute age of iron meteorites has been determined as \(4567.5\pm 0.5\) Myr. Therefore, given the error range, the time of the origin of the Solar System is determined up to \(\sim 1\) Myr, or with an accuracy of \(0.002\%\). The oldest anorthositic rocks of the Moon and terrestrial zircons are only slightly younger: they are estimated to be \(\sim 4.4\) billion years old. The absolute age of stony meteorites is \(4564.91\pm 2.58\) Myr. The difference \(A=3.64\pm 1.52\) Myr can be regarded as an estimate of the total time of formation and differentiation in the course of thermal evolution of the parent bodies of these ancient meteorites. Based on the totality of available data, it can be assumed that the Solar System is \(4567.3\pm 0.1\) Myr old [101].
On the whole, the time scale outlined above is consistent with computer simulation results. They give grounds to believe that, while the accretion of disk matter on the proto-Sun was completed in 1-2.5 Myr after the birth of the protoplanetary system, the dust subdisk, consisting of approximately centimeter-size particles, formed much earlier, in 0.1 to 1.0 Myr at the radial distance \(R\sim 1\) AU, where gravitational instability developed after the critical density was reached. Obviously, the subsequent \(\sim 1\)-2 Myr sufficed for the formation and thermal evolution of the first solids.
## 3 Migration of planetesimals during the formation of the terrestrial planets
### Modeling an isolated aggregation of the terrestrial planets
As already noted, many authors (see, e.g., [4; 34; 79; 102; 103]) assumed that the terrestrial planets and the cores of the giant planets were formed from a swarm of solid bodies -- planetesimals moving around the Sun along initially nearly circular orbits. The process of the formation of the terrestrial planets was first studied analytically [104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131]. In particular, the time of the formation of the Earth was estimated at about \(100\) Myr. At present, a large number of papers have been published on numerical simulations of the formation of the terrestrial planets from planetesimals [132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159], mostly dealing with the model of the evolution of disks of gravitating bodies that merge in collisions in the feeding zone of terrestrial planets. The actual planet formation process was very complex and depended on many factors, but studies relying on relatively simple models allow capturing a number of important features of the process. In [35; 160; 161; 162; 163], numerical simulations were used to study the formation of protoplanets by the merger of highly rarefied gas-dust clumps moving in almost circular orbits. The clumps were assumed to merge into giant rarefied protoplanets of the masses of the modern planets even before they were compressed to the density of solid bodies.
In 1978, the first studies of the evolution of solid-body rings appeared, where gravitational interactions of the disk bodies were modeled in a planar model and two bodies were assumed to merge when the distance between their centers of mass became equal to the sum of their radii (rather than the radius of some large conventional sphere) [164; 165]. A spatial model of the evolution of such disks was first considered in [102]. In [137; 102], the number of initial bodies was 100, and the method of spheres of action was used to describe the gravitational influence of the bodies. The evolution of disks containing up to 1000 gravitating bodies each in the feeding zone of the terrestrial planets, which merged during collisions, was considered by Ipatov [103; 141; 142; 143; 144; 145]. Their mutual gravitational influence was taken into account within the method of the spheres of action, when bodies are assumed to move around the Sun along unperturbed Keplerian orbits outside the sphere, whereas their relative motion inside the sphere is determined by a two-body problem. The initial distances from the bodies to the Sun ranged from 0.36-0.40 AU to 1.2 AU, and their total mass was \(1.87m_{\rm E}\), where \(m_{\rm E}\) is Earth's mass. Calculations of the evolution of planar disks showed that, in the case of almost circular initial orbits, the number of formed planets is greater than four, and the actual number of terrestrial planets is obtained only at the initial eccentricities \(e_{0}=0.35\). The actual number of planets can also be obtained in the model of spatial disk evolution; in one of the computations, four planets with masses greater than \(0.046m_{\rm E}\) were formed. The actual number of planetesimals was much greater than 1000, and only part of the matter comprising two colliding bodies formed a new body. The computing power of 20th century computers did not allow considering more complex models, however; accounting for
the fragmentation of colliding bodies could somewhat increase the planetary formation time.
In computing the evolution of spatial disks, the initial eccentricities of the orbits of the bodies were taken to be \(e_{0}=0.02\), and it was shown that such eccentricities are quite rapidly achieved when taking the mutual gravitational influence of the bodies into account at distances greater than the radii of their spheres of action. In turn, the average eccentricity \(e_{\rm av}\) of the orbits of bodies in the course of evolution exceeded 0.2, and in a number of versions was greater than 0.4 at some instants. For example, in a version with \(e_{0}=0.02\) and 960 initial bodies, \(e_{\rm av}\) was respectively equal to 0.09, 0.20, and 0.35 for 500, 250, and 100 bodies in the disk. Large average orbital eccentricities were observed for bodies located along the disk edges, with semimajor axes \(a<0.4\) AU and \(a>1.2\) AU, and by the end of the evolution, the orbits of some planets with masses of the order of the masses of Mercury and Mars acquired eccentricities close to the actual eccentricities of these planets. We note that the increase in the eccentricities of Mercury and Mars (and the inclination of the orbit of Mercury) could be due to not only the influence of large bodies from the feeding zone of the terrestrial planets but also the gravitational influence of bodies that entered that feeding zone from the feeding zones of the giant planets. At the same time, these bodies themselves could avoid collisions with bodies in this feeding zone and only perturb their orbits gravitationally. The high abundance of iron in the core of Mercury is usually explained by the loss of most of the silicate shell mass in high-speed impacts. At the same time, part of the planetesimals in the vicinity of Mercury's orbit, which passed relatively close to the Sun before their collisions with Mercury's embryo, could have lost part of their silicate composition during such passages and thereby affected the high content of iron in Mercury's core.
The masses of the embryos of the still-forming terrestrial planets could exceed \(0.05m_{\rm E}\), which allows explaining both the tilt of the rotation axis and Earth's axial rotation period. According to estimates, the time of formation of 80% of the mass of the largest stony planet (analogous to Earth) did not exceed 10 Myr, while the total time of evolution of the disks of gravitating bodies was about 100 Myr. In computations by the method of spheres of action, times of the order of 1-10 Myr were obtained for the formation of the major part of planet mass by considering a 'deterministic' method for selecting pairs of approaching bodies: a pair of bodies with the minimum approach time was selected in modeling [103; 166]. Meanwhile, the use of a 'probabilistic' method for selecting pairs of approaching bodies (proportionally to the probability of their approach) yielded formation times of most of the planetary mass almost an order of magnitude longer. The time it took for the number of bodies in the disk to decrease from \(N_{0}\) to \(N\) was usually about half the time it took for the number of bodies to decrease from \(N_{0}\) to \(N/2\), and most of the evolution of the disks was taken by the last stages of planetary formation. It was therefore concluded that the total disk evolution time for \(N_{0}=10^{12}\) is approximately the same as for \(N_{0}=10^{3}\), but taking collisional fragmentation of bodies into account can lead to a severalfold increase in the time needed for the major part of a planet's mass to accumulate [103; 111].
As shown by more recent results of numerical integration of the equations of motion with the mutual gravitational influence of bodies taken into account more thoroughly [133; 134; 135; 136; 139; 140; 147; 149; 151; 153; 154; 155], the deterministic approach reflects the actual evolution of the disks of bodies and the planetary formation times quite satisfactorily (and better than the probabilistic approach). In the first such computations [136], 56 planetary embryos were considered and the computations required about three years of processor time, but the number of bodies simulated recently lies in the thousands (as, for instance, in simulations with 6000 planetesimals in [149]), which evidently improves the statistics. The main conclusions about the formation of the terrestrial planets drawn from the calculations by the method of spheres of action and by numerical integration of the equations of motion are approximately the same.
### Modeling the aggregation of the terrestrial planets with the influence of the giant planets taken into account
The formation and migration of the giant planets are closely related to the aggregation of the terrestrial planets. Planetesimals from the feeding zone of the giant planets, which were acquiring orbits with small perihelion distances during the Solar System's evolution, perturbed the orbits of planetesimals and planet embryos in the feeding zone of the terrestrial planets, as well as bodies of the asteroid belt, and often collided with them. Changes in Jupiter's and Saturn's orbits caused changes in the positions of resonances and contributed to the cleaning of the asteroid belt zone, from which some bodies could penetrate into the feeding zone of the terrestrial planets. Therefore, the influence of the forming giant planets and their feeding zone bodies must be taken into account when studying the accretion of the terrestrial planets.
Various scenarios of such influence have been considered. In [153; 154; 155], the impact of Jupiter on the formation of the terrestrial planets was simulated for various values of its orbit and mass. In these computations, the terrestrial planets reached half of their final masses in the first 10 to 20 Myr, although separate bodies continued falling on them after 100 Myr. In [154; 155], the initial disk with 1000 to 2000 planetesimals was considered, a number 5 to 10 times greater than in previous papers where the mutual gravitational influence of bodies was taken into account by numerically integrating the equations of motion. The initial disk with a mass of \(9.9m_{\rm E}\) extended up to 5 AU. Over a billion years, the asteroid belt was cleared by more than 99% due to resonances of planetesimals with Jupiter determined by their mutual gravitational influence and the influence of embryos. It was noted in [140] that, as the gas dissipated, the secular resonances (\(\nu_{5},\nu_{6},\nu_{15}\), and \(\nu_{16}\)) with Jupiter and Saturn moved inward, affecting the planetesimals. After 3 Myr, the gaseous disk decreased in mass by a factor of 20 and did not produce a dynamical effect on the migration of planetesimals.
A number of authors studying the formation of the terrestrial planets considered the Grand Tack model (see, e.g., [167; 168; 169; 170]). In this model of the early dynamical rearrangement of the Solar System, interaction with the gas in the disk first made Jupiter migrate closer to the Sun, up to 1.5 AU; then, after the formation of a massive Saturn and the scattering of gas, Jupiter, together with Saturn, started moving away from the Sun, staying in a 2:3 resonance with Saturn. As a result of this migration, Jupiter 'cleared' the asteroid belt, reduced the amount of material in Mars's feeding zone, and facilitated the delivery of water to the forming terrestrial planets. It was assumed in [170] that Jupiter and Saturn acquired rather large masses of gas over 600 thousand years and respectively migrated from 3.5 and 4.4 AU to 1.5 and 2 AU during the first 100 thousand years;
after the mass of Saturn increased from \(10m_{\rm E}\) to its present-day mass, Jupiter and Saturn migrated to respective distances of up to 5.25 and 7 AU from the Sun in 500 thousand years. In Section 4, we discuss models treating the subsequent stages of the agglomeration of the giant planets and the migration of Uranus's and Neptune's embryos, including the Nice model, named after the place of its creation, the French observatory in Nice. In the Nice model, the cause of abrupt changes in the orbits of these embryos is assumed to be the 1:2 orbital resonance established between Jupiter and Saturn.
The mechanism of giant planet migration explains a number of events in the early history of the Solar System, including the presence of numerous TNOs in resonances with Neptune, the formation of the Kuiper belt and the Oort cloud, and phenomena such as the hypothetical late heavy bombardment of the inner Solar System, although the proposed mechanism is not universally accepted, despite being used to estimate the role played by an exogenous source of volatiles in the evolution of the terrestrial planets. The delivery of water to the terrestrial planets by small bodies is assumed to have occurred mainly after these planets acquired 60 to 80% of their final mass. According to other estimates [171], volatile components in Earth's zone could be accumulated by parent planetesimal bodies only 1 Myr after the formation of the protoplanetary circumsolar disk, when its temperature dropped to below 700 K. As we discuss in what follows, in various models (for example, the one in [169]), the growth time of an Earth analogue to \(0.5m_{\rm E}\) was in the range from several to 20 Myr.
Studies within the Nice model included the evolution of the orbits of asteroids during an abrupt change in the orbit of Jupiter, leading to a sharp change in the positions of resonances [172]. The probability of collisions of asteroids with the Moon was obtained equal to \(4\times 10^{-5}\), and 20 times higher for the Earth. According to [173; 174], the Nice model explains the formation of Mars and the asteroid belt well if the above instability occurred within 1 to 10 Myr after the gaseous disk dissipation.
In contrast to the studies discussed in Section 3.1 of the evolution of disks of bodies that merge in collisions, another model was used in [146] to study the formation of the terrestrial planets. Migration of planetesimals was computed within the feeding zone of the terrestrial planets, divided into seven regions, depending on distance from the Sun. The gravitational influence of all planets, including the giants, was taken into account, while the planetesimals and the planets themselves were regarded as point masses and their collisions were not taken into account directly. In a number of versions of the model, embryos with masses from 0.1 to 0.3 of modern planet masses were considered instead of the terrestrial planets. The arrays of orbital parameters of migrating planetesimals obtained in the computations with a step of 500 years were used to calculate the probabilities of their collisions with the planets, their embryos, and the Moon. This approach allowed more accurately calculating the probabilities of collisions between planetesimals and planetary embryos for a number of evolutionary stages, especially if these probabilities are small. Later, we carried out computations similar to those in [146] for a model where the planetesimals that have collided with a planet are excluded from further computations.
When studying the composition of the planet embryos from planetesimals initially located at various distances from the Sun, narrower zones of planetesimal origin were considered than were in previous studies, and not only the final composition of the planets but also the change in the composition of the embryos over time was studied. The conclusion drawn from the computations was that terrestrial planet embryos with masses of the order of or less than one tenth of the modern planet masses were mainly accumulating planetesimals from the vicinity of their orbits. The inner layers of a terrestrial planet were formed mainly from matter from the vicinity of the planet orbit. When planetesimals fell out of Jupiter's and Saturn's feeding zone onto terrestrial planet embryos, these embryos had not yet acquired the masses of modern planets, and matter (including water and volatiles) from this zone could have fallen into the inner layers of the terrestrial planets and affected their composition. With the masses of Earth's and Venus' embryos of the order of a third of their modern masses, the probabilities of fallouts of planetesimals formed at a distance from 0.7 to 0.9 AU from the Sun onto these embryos differed by not more than a factor of two in the considered time interval \(T>2\) Myr.
Based on the considered model, it was also found that the total masses of planetesimals that migrated from each zone in the region from 0.7 to 1.5 AU from the Sun and collided with almost-formed Earth and Venus differed by not more than a factor of two. The outer layers of Earth and Venus could have accumulated the same material from different parts of the feeding zone of the terrestrial planets. At the final formation stages, the planetesimals initially located at 1.1 to 2.0 AU from the Sun could have become part of Earth and Mars in a ratio not much different from that of the masses of these planets.
In [146], the fraction of planetesimals that fell on the Sun could exceed 10% for the initial planetesimal distances from the Sun in the range from 0.3 to 0.5 AU and from 1.1 to 2.0 AU. In the versions where planetesimals that collided with planets were excluded from computations, the evolution time of planetesimal disks was typically equal to several hundred Myr. But, in some versions of computations with small initial eccentricities, individual planetesimals were moving in 1:1 resonances with Earth or Venus even after a billion years or more. The time interval considered in these versions of computations was longer than the one in [146], and the proportion of bodies that collided with the Sun, given the present-day planetary masses, was mainly in the range of 0.24-0.32 for the initial semimajor axes of planetesimals \(a_{0}<1.1\) AU, and reached \(2/3\) for \(1.5\leq a_{0}\leq 2\) AU. In most cases, for planet masses no greater than half their modern masses and for a time interval not exceeding 10 Myr, no planetesimals collided with the Sun or were ejected into hyperbolic orbits.
As in earlier computations [142; 143; 144; 145], the proportion of planetesimals ejected from the feeding zone of terrestrial planets into hyperbolic orbits did not exceed 10%. At the same time, the probability of a collision with Jupiter for a planetesimal initially located in the feeding zone of the terrestrial planets was no more than a few percent of the probability of its collision with Earth, and the probability of a collision with Saturn was even lower by an order of magnitude.
The above model estimates of the formation of terrestrial planet embryos were based on a model that takes the joint gravitational influence of giant planets and terrestrial planet embryos into account. Accounting for the mutual gravitational influence of planetesimals, including those that come from the feeding zones of the giant planets, leads to an increase in the mixing of matter in the feeding zone of the
terrestrial planets and an increase in the probability of collisions of planetesimals with the Sun and their ejection into hyperbolic orbits. For the mass ratio of Earth and Moon embryos equal to 81, similar to the modern one, the ratio of the probabilities of planetesimals falling onto Earth and Moon embryos in the considered versions did not exceed 54, and it was maximum for embryo masses of approximately one third the modern masses of these bodies.
In recent years, the formation of the terrestrial planets has mainly been studied based on the Nice and Grand Tack models mentioned above. As we have seen, the first model is based on the hypothesis of an abrupt change in giant planet orbits when Jupiter and Saturn establish a resonance, and Jupiter's migration to the orbit of Mars and back is assumed in the second model. It was noted in [146], however, that the peculiarities of the formation of the terrestrial planets and the clearing of the asteroid belt can be explained without using these models, based solely on a relatively smooth decrease in Jupiter's semi-major axis and a shift in the positions of resonances due to ejection of planetesimals into hyperbolic orbits by Jupiter. In such a model, the formation of the embryos of Uranus and Neptune near Saturn's orbit is assumed, and the migration of these embryos to the modern orbits of Uranus and Neptune due to interaction with planetesimals is considered [103; 145; 175; 176].
### Formation times of the terrestrial planets
When discussing model approaches in the preceding sections, we already touched upon estimates of the time scale of the agglomeration of the terrestrial planets. We now consider this problem in more detail, invoking the results of studying the isotope composition of meteorites.
From analyses of the lead isotope ratio \({}^{207}\)Pb/\({}^{206}\)Pb in zircon crystals contained in a substance from the Martian meteorite NWA 7034 [177], it was concluded that the formation of the core and the crystallization of a magma ocean on Mars were completed no later than 20 Myr after the beginning of the formation of the Solar System, measured from the formation of CAIs. This estimate is consistent with the hafnium-tungsten scale \({}^{182}\)Hf\(-\)\({}^{182}\)W, indicating the age within 10 Myr [178]. The obtained estimate of the Hf/W \(\sim\) 4 isotope ratio for the Martian mantle with a \(\sim\) 25% uncertainty corresponds to the Martian core formation time in the range from 0 to 10 Myr [179]. Thermal models also indicate that the solidification of Mars was completed within \(\sim\) 10 Myr [180], during which Mars grew to approximately its present size. It was assumed in [181] that agglomeration of Mars was completed within approximately 5 Myr. According to [146], Mars grew more slowly than Earth and Venus, and individual planetesimals could remain in its feeding zone even after 50 Myr. It can therefore be assumed that, during clump contraction, a rather large Martian embryo with a mass of at least \(0.02m_{\rm E}\) was initially formed, and planetesimals from Jupiter's and Saturn's feeding zones contributed to a more rapid removal of planetesimals from Mars' feeding zone. A similar scenario has been proposed for the formation of the Mercury embryo with the same initial mass.
According to the model in [149], in a disk with a mass of \(\sim 7m_{\rm E}\) in the range of 0.2-3.8 AU from the Sun, the average (over several calculation schemes) mass of a Mercury analogue was about \(0.2m_{\rm E}\), greatly exceeding the mass of Mercury. The main contributors to the mass of a Mercury analogue were bodies from the zone at 0.2 to 1.5 AU over 10 Myr, later supplemented by bodies from the zone at up to 3 AU from the Sun. The orbits of these analogues had semimajor axes close to 0.27-0.34 AU, and their eccentricities and inclinations were small. According to [139], considering the evolution of the disk at a distance from 0.7 to 1.0 AU from the Sun shows that the analogues of Earth and Mars accumulated most of their mass in 10 Myr. Individual planetesimals could fall on the forming terrestrial planets until 100 Myr [118], which is confirmed by numerical calculations [142; 136; 143; 144; 145].
Of particular interest are Earth and Venus. Based on the model in which bodies in their feeding zones merge in any collision, the masses of their embryos could double in 1 Myr, respectively starting from \(0.1m_{\rm E}\) and \(0.08m_{\rm E}\). We note that the bodies from Jupiter's feeding zone start penetrating into the feeding zone of the terrestrial planets in the same period. Earth and Venus could have acquired more than half their masses in approximately 5 Myr. The 3 to 5 Myr estimate for the time scale is supported by data from studies of the isotopic composition of Earth's atmosphere [182]. During that time, most of the planetesimals initially located at a distance of 0.7 to 1.1 AU from the Sun fell on the growing Earth and Venus [146]. Obviously, taking the ejection of matter during collisions into account would lead to an increase in the accretion time.
As already noted, at the initial stages of the Solar System's evolution, the residual gas in the disk played an important role. When studying the formation of bodies in the zone from 0.5 to 4 AU in [140], it was assumed that the surface density of the gas decreased exponentially with time, \(\Sigma_{\rm gas}(r,t)=\Sigma_{\rm gas,0}\,(r/1~{\rm AU})^{-1}\exp{(-t/\tau)}\). It was assumed in these calculations that \(\tau=1\) Myr and that the initial surface density at 1 AU was \(\Sigma_{\rm gas,0}=2000\) g cm\({}^{-2}\). After 4.6 Myr, only 1% of the gas remained. The gas could possibly dissipate in a time \(\sim 10\) Myr [183], and therefore the agglomeration of planetesimals occurred in the presence of the gas component, which reduced the eccentricities of the planetesimals; the formation time of the terrestrial planets did not exceed 10 Myr either.
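A short sketch of this gas-dissipation law (ours, with the parameter values quoted above) shows where the "1% of the gas after 4.6 Myr" figure comes from:

```python
import math

# Gas surface-density law assumed in [140]:
# Sigma_gas(r, t) = Sigma_0 * (r / 1 AU)**(-1) * exp(-t / tau), with tau = 1 Myr.
SIGMA_0 = 2000.0     # g cm^-2 at r = 1 AU, t = 0
TAU = 1.0            # Myr

def sigma_gas(r_au: float, t_myr: float) -> float:
    return SIGMA_0 / r_au * math.exp(-t_myr / TAU)

print(f"remaining gas fraction at t = 4.6 Myr: {math.exp(-4.6 / TAU):.3f}")  # ~0.01
print(f"Sigma_gas(1 AU, 4.6 Myr) = {sigma_gas(1.0, 4.6):.1f} g cm^-2")       # ~20
```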
The evolution of narrow planetesimal rings 0.02 AU and 0.092 AU wide at a distance of about 1 AU from the Sun, with and without taking the gas drag into account for bodies moving in the rings, was studied in the models in [184; 185]. It was shown that the distance between the orbits of the emerging planetary embryos was about (5-10)\(r_{\rm H}\), where \(r_{\rm H}\) is the radius of the embryo's Hill sphere. The formation time of protoplanets with masses of \(\sim 10^{26}\) g (\(\sim 0.016m_{\rm E}\)) was about 0.5 Myr, and after 1 Myr the main mass of the disk was concentrated in bodies with a mass of no less than several \(10^{26}\) g. In semianalytic models [134], the formation of a \(0.1m_{\rm E}\) embryo at a radial distance of 1 AU and of a \(10m_{\rm E}\) embryo at 5 AU, respectively, takes 0.1 and 1 Myr. Upon reaching a mass of \(\sim 10m_{\rm E}\), Jupiter's embryo could relatively quickly continue increasing its mass by gas accretion.
As shown by numerical studies of the migration of planetesimals from Jupiter's and Saturn's feeding zones, a major part of their mass left this zone in several million years. This indicates the time when the planetesimals coming from the zones of Jupiter and Saturn influenced the formation of the terrestrial planets. We emphasize that, according to the calculations performed in the framework of the model that takes the interaction of all Solar System planets into account [186], the maximum evolution times of the orbits of some planetesimals that started in the zone of Jupiter and Saturn can be much longer than in the absence of Uranus and Neptune (50 and 4 Myr, respectively). Even in that case,
however, the main contribution to collisions with the terrestrial planet embryos was made by planetesimals from Jupiter's and Saturn's feeding zones in the first million years after the formation of a significant mass of Jupiter. This time is estimated as 1 to 2 Myr from the origin of the Solar System. Meanwhile, individual planetesimals from Uranus's and Neptune's feeding zones fell on the Earth even after hundreds of millions of years, and may even remain in the Solar System to this day. In the Grand Tack model, when planetesimals fell out of Jupiter's and Saturn's feeding zones and from the zone of the outer asteroid belt onto the terrestrial planet embryos, these embryos had not yet acquired the modern planetary masses, and material (including water and volatiles) from these regions could accumulate in the inner layers of the forming terrestrial planets and the Moon.
### Formation of the Earth-Moon system
Historically, various models of the origin of the Moon have been proposed, but this problem is not yet decisively resolved. We discuss several of the most significant options here.
According to the theory of coaccretion (see, e.g., [187, 188, 189, 190]), the Moon formed from a near-Earth swarm of small bodies. Their main source, according to the Schmidt-Ruskol-Safronov model, was the sticking together of protoplanetary disk particles during collisions ('free to free' and 'free to bound').
The mega-impact model proposed by a number of authors [191, 192, 193, 194, 195, 196, 197, 198, 199] has won considerable popularity. The Moon is assumed to have been formed in a catastrophic collision with a Mars-size body (named Theia), which caused Earth's molten silicate mantle to be ejected into low Earth orbit. A proto-Moon formed from the merged fragments of the ejection gradually receded to its present-day orbit due to tidal interaction with Earth in the course of evolution. The attractiveness of this hypothesis consists primarily in its explaining the average density of the Moon, which is equal to the density of Earth's mantle. Several modifications of the mega-impact model have been proposed. In particular, calculations in [197] showed that, when a body of a mass from \(0.026m_{\rm E}\) to \(0.1m_{\rm E}\) hits a proto-Earth that is rapidly rotating with a period of about 2.5 h, a Moon-forming disk can emerge, consisting mainly of the substance of Earth's mantle. According to [194], a head-on collision of two bodies of almost equal masses (with a mass ratio not greater than 1.5) could give rise to similar compositions of Earth and the Moon. We emphasize, however, that these models require the subsequent removal of a part of the angular momentum of the formed Earth-Moon system by means of an orbital resonance between the Sun, Earth, and the Moon.
The canonical mega-impact model encounters certain difficulties, primarily of a geochemical nature, and is currently being critically scrutinized. It is not capable of explaining the similar isotopic abundances of a number of elements on the Earth and the Moon, primarily oxygen, iron, hydrogen, silicon, magnesium, titanium, potassium, tungsten, and chromium [199, 200, 201, 202, 203, 204]. It can hardly be assumed that the body that formed the Moon, even if coming from a relatively close vicinity of the forming terrestrial planets, had a composition similar to that of Earth, because, according to this model, most of the Moon's substance originates from the impactor rather than the proto-Earth. These results undermine geochemical substantiation of the mega-impact model. In addition, the mega-impact hypothesis does not offer a means of explaining the absence of isotopic shifts in lunar and terrestrial matter, because the material ejected during a giant impact should be 80-90% vapor, and the isotopic compositions of K, Mg, and Si should change noticeably when the melt evaporates [201]. The giant impact hypothesis suggests that, after the collision, a magma ocean formed on Earth's surface, but the connection of the ancient magma ocean on Earth with this event is debatable [205].
An alternative to the mega-impact model is the multi-impact model [206, 207, 208, 209, 210, 211, 212] and the model of the formation of Earth and Moon embryos from a single initial rarefied gas-dust cluster in a protoplanetary nebula, followed by the formation and contraction of two fragments [200, 201, 202, 203, 213, 214, 215].
The multi-impact model is based on the hypothesis of multiple collisions (macroimpacts) of planetesimals with Earth's embryo. It was found in [209] that matter ejected from the Earth into prograde orbits easily joins the prograde protosatellite disk, while matter ejected from the Earth into retrograde orbits falls back to the Earth. In [212], up to \(10^{6}\) particles were included in calculations of the impactor collision with Earth's embryo by the method of smoothed-particle hydrodynamics (SPH). In this method, the medium is regarded as a set of discrete moving elements (particles). The impactor mass varied from 0.01 to 0.09 of the Earth embryo mass, and the collision velocity varied from 1 to 4 parabolic velocities on the embryo surface. The collision angle and the angular velocity of rotation of the embryo were also varied. The fraction of iron in the formed Moon did not exceed 10% in 75% of the cases considered.
The model of the formation of the Earth and Moon embryos from a single initial rarefied gas-dust clump in the protoplanetary nebula with the subsequent formation and contraction of two fragments [213, 214, 200, 201, 202, 215, 203, 216, 217], in addition to satisfying geochemical constraints, also allows explaining the known differences in the chemical composition of Earth and the Moon, including iron deficiency, depletion of volatiles, and enrichment in refractory oxides of Al, Ca, and Ti of the Moon compared to Earth.
The bound of no more than 30 Myr for the formation of the major part of the mass of Earth and the Moon has been obtained by studying the hafnium-tungsten Hf/W isotope abundance [218, 219]. Based on the study of the neon \({}^{20}\)Ne/\({}^{22}\)Ne ratio, it was concluded that the presence of nebular neon requires Earth's embryo to acquire a significant mass in a few million years and to be able to capture the nebular gases that had dissolved in the ancient magma ocean [220]. In contrast to the studies cited in Section 3.3, where relatively short formation times for most of the mass of terrestrial planets were assumed based on the analysis of the \({}^{182}\)Hf\(-^{182}\)W and \({}^{87}\)Rb\(-^{87}\)Sr systems, Galimov [215] came to the conclusion that the formation of the Earth and Moon cores could not have begun earlier than 50 Myr from the CAI-dated origin of the Solar System. It has also been suggested that, before the Moon formed as a condensed body, it must have evolved in an environment with a higher Rb/Sr ratio. Due to its large atomic weight, rubidium cannot escape from the surface of the Moon, but can escape from the heated surface of small bodies or particles, and therefore the original lunar substance had probably remained in a dispersed state for the first 50 Myr, for example, in the form of a gas-dust clump.
The model suggested by Galimov et al. is not free of shortcomings either; we discuss it critically in [221]. In particular, it remains undetermined where the iron-depleted
substance has gone from the inner part of the clump until the moment when the embryo started growing in its outer part. Under the condition of the inflow of matter from outside the Hill sphere, the model with zero relative velocities considered in [202] could hardly be realized: even inside the clump, the particle velocities could not be zero in the presence of rotation. At zero velocities, the particles would very quickly (with a free-fall time equal to 25 years according to [57]) fall onto the center of the clump before the embryos formed in its hot inner part, where the evaporation of particles alone took tens of thousands of years [202]. We therefore believe that the existence of the clump that gave rise to the Earth and the Moon for as long as 50 Myr is unlikely, which is corroborated by a review of the studies of clump lifetimes in Section 2.3. As noted in Section 3.3, the time of formation of the major part of Earth's mass probably did not exceed 5 Myr.
In contrast to the model of Galimov et al., where the mass of the initial clump was equal to the total Earth plus Moon mass, the embryos of Earth and the Moon in Ipatov's model [221] were formed from a common rarefied clump with a mass greater than \(0.01m_{\rm E}\) and an angular momentum sufficient for the formation of the Moon embryo. The angular momentum of the clump necessary for the formation of Earth-Moon system embryos was acquired in the collision of two clumps. The growth of the Earth and Moon embryos was considered within the multi-impact model. Most of the matter that became part of the Moon embryo, mainly moving in the vicinity of the Earth, was ejected from the Earth during its numerous collisions with planetesimals and smaller bodies. Some minor bodies also have satellites [222]; a mechanism analogous to that in [221] was proposed for the formation of trans-Neptunian satellite systems [223; 224; 225].
Let us consider this scenario in more detail. It is based on the collision of two spherical clumps with radii \(r_{1}\) and \(r_{2}\) moving before the collision in the same plane in circular heliocentric orbits with the difference between their semi-major axes \(a\) equal to \(\Theta(r_{1}+r_{2})\), without taking their mutual gravitational influence into account. In this case, the tangential component of the collision velocity is \(v_{\tau}=v_{\rm c}(r_{1}+r_{2})a^{-1}k_{\Theta}\). For \((r_{1}+r_{2})/a\ll 1\), we have \(k_{\Theta}\approx 1-1.5\,\Theta^{2}\). It follows that \(k_{\Theta}\) can take values from \(-0.5\) to \(1\). The average value of \(|k_{\Theta}|\) is \(0.6\). With this \(k_{\Theta}\), the collision velocity of the clumps is
\[v_{\rm col}=v_{\rm c}(r_{1}+r_{2})a^{-1}(1-0.75\Theta^{2})^{1/2}\,, \tag{2}\]
and the tangential component of the collision velocity is
\[v_{\tau}=v_{\rm c}(r_{1}+r_{2})a^{-1}(1-1.5\Theta^{2})=v_{\rm c}(r_{1}+r_{2})a^{-1}k_{\Theta}\,, \tag{3}\]
where \(v_{\rm c}=(GM_{\rm S}/a)^{1/2}\) is the heliocentric velocity of the clump, \(G\) is the gravitational constant, and \(M_{\rm S}\) is the solar mass [223].
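The average value of \(|k_{\Theta}|\approx 0.6\) quoted above can be checked numerically; the sketch below (ours) assumes, as a simple guess, that \(\Theta\) is uniformly distributed on \([0,1]\):

```python
# Mean of |k_Theta| = |1 - 1.5*Theta^2| for Theta uniform on [0, 1]
# (the uniform distribution is our assumption, not stated in the source).
N = 100_000
mean_abs_k = sum(abs(1.0 - 1.5 * ((i + 0.5) / N) ** 2) for i in range(N)) / N
print(f"<|k_Theta|> ~ {mean_abs_k:.2f}")   # ~0.59, i.e. about 0.6 as stated
```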
Given formula (3), it follows [223] that, before the collision, the angular momentum of two colliding clumps (with radii \(r_{1}\) and \(r_{2}\) and masses \(m_{1}\) and \(m_{2}\)) moving in circular heliocentric orbits with semi-major axes close to \(a\) is equal to
\[K_{\rm s}=k_{\Theta}(GM_{\rm S})^{1/2}(r_{1}+r_{2})^{2}m_{1}m_{2}(m_{1}+m_{2}) ^{-1}a^{-3/2}\,. \tag{4}\]
The values of \(K_{\rm s}\) and \(k_{\Theta}\) are positive for \(0<\Theta<0.8165\) and negative for \(0.8165<\Theta<1\). In the case of a collision of two identical clumps whose radii are equal to \(k_{\rm H}r_{\rm H}\) (where \(r_{\rm H}\) is the Hill sphere radius for the clump with a mass \(m_{1}=m_{2}\)), it follows from formula (4) that
\[K_{\rm s2}=K_{\rm s}\approx 0.96\,k_{\Theta}\,k_{\rm H}^{2}\,a^{1/2}m_{1}^{5/3}\,G^{1/2}M_{\rm S}^{-1/6}\,. \tag{5}\]
Using formula (5), we can see that \(K_{\rm s2}\) is equal to the present-day angular momentum \(K_{\rm EM}\) of the Earth-Moon system for \(k_{\Theta}=k_{\rm H}=1\) and \(2m_{1}\approx 0.096m_{\rm E}\). Thus, the angular momentum of the Earth-Moon system could be acquired in a collision of two clumps (moving in circular heliocentric orbits before the collision) with a total mass not less than the mass of Mars. As shown in [221], the initial mass of the rarefied clump that gave rise to the Earth and Moon embryos could be relatively small (\(0.01m_{\rm E}\) or even less) if the increase in the angular momentum of the embryos due to the growth of their masses is taken into account. For nonzero eccentricities of the heliocentric orbits of the clumps, the angular momentum acquired in their collision can be greater than for the circular heliocentric orbits considered above. Part of the mass and the angular momentum is lost in the collision (especially in a tangential collision) and in the contraction of the clump. Therefore, the mass and angular momentum of the colliding clumps could be greater than those of the resulting parent clump and of the satellite system formed during its contraction.
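As a rough numerical check of this estimate, formula (5) can be evaluated directly; the sketch below assumes \(a=1\) AU, \(k_{\Theta}=k_{\rm H}=1\), and a present-day Earth-Moon angular momentum \(K_{\rm EM}\approx 3.45\times 10^{34}\) kg m\({}^{2}\) s\({}^{-1}\) (these numerical values are not quoted in the text above).

```python
import numpy as np

G, M_S, m_E, AU = 6.674e-11, 1.989e30, 5.972e24, 1.496e11   # SI units
K_EM = 3.45e34      # assumed present-day Earth-Moon angular momentum, kg m^2 s^-1

def K_s2(m1, a=AU, k_theta=1.0, k_H=1.0):
    """Angular momentum of two identical colliding clumps, formula (5)."""
    return 0.96 * k_theta * k_H**2 * np.sqrt(a) * m1**(5.0 / 3.0) * np.sqrt(G) * M_S**(-1.0 / 6.0)

m1 = 0.5 * 0.096 * m_E    # each clump carries half of the total mass 2*m1 = 0.096 m_E
print(f"K_s2 = {K_s2(m1):.2e} kg m^2/s,  K_EM = {K_EM:.2e} kg m^2/s")   # ~3.4e34 vs 3.45e34
```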
Taking into account that \(K_{\rm s}=J_{\rm s}\omega_{\rm c}\), we find from formula (4) that the angular velocity of the clump formed in the collision of two clumps is
\[\omega_{\rm c}=2.5\,k_{\Theta}\chi^{-1}(r_{1}+r_{2})^{2}r^{-2}m_{1}m_{2}(m_{1}+m_{2})^{-2}\Omega\,, \tag{6}\]
where \(\Omega=(GM_{\rm S}/a^{3})^{1/2}\) is the heliocentric angular velocity of the clump. The moment of inertia of the resulting clump of radius \(r\) and mass \(m\) is \(J_{\rm s}=0.4\chi\,mr^{2}\), where \(\chi\) characterizes the distribution of matter inside the clump (\(\chi=1\) for a homogeneous spherical clump, considered in [84]). For \(r_{1}=r_{2}\), \(r^{3}=2r_{1}^{3}\), \(m_{1}=m_{2}=m/2\), and \(\chi=1\), we have \(\omega_{\rm c}=1.25\times 2^{1/3}k_{\Theta}\Omega\approx 1.575k_{\Theta}\Omega\).
In calculations [84] of the contraction of clumps (of mass \(m\) and radius \(r\)) residing in the trans-Neptunian region, their initial angular velocities were taken equal to \(\omega_{0}=k_{\omega}\Omega_{0}\), where \(\Omega_{0}=(Gm/r^{3})^{1/2}\) is the circular orbital angular velocity at the surface of the clump. We note that \(\Omega_{0}/\Omega=3^{1/2}(r_{\rm H}/r)^{3/2}\approx 1.73(r_{\rm H}/r)^{3/2}\); if \(r\ll r_{\rm H}\), then \(\Omega\ll\Omega_{0}\). In the case of clumps filling their Hill spheres, assuming the angular velocity \(\omega_{\rm c}\approx 1.575k_{\Theta}\Omega\) of the clump formed by the collision of two identical clumps to be equal to \(\omega_{0}\), we have \(k_{\omega}\approx 0.909k_{\Theta}/\chi\). This relation shows that collisions of clumps with \(k_{\Theta}=\chi=1\) can yield values of \(\omega_{\rm c}=\omega_{0}\) corresponding to \(k_{\omega}\) values up to \(0.909\). In [84], binary or triple systems were obtained only for \(k_{\omega}\) equal to \(0.5\) or \(0.75\). We can therefore conclude that the initial angular velocities of clumps at which binary systems were formed could be gained in their collisions. According to [4], the initial angular velocity of a rarefied clump with respect to its center of mass is \(0.2\Omega\) for a spherical clump and \(0.25\Omega\) for a flat disk. This initial angular velocity is always positive and can be almost an order of magnitude less than the angular velocity acquired in the collision of clumps. Because \(\Omega_{0}/\Omega\approx 1.73(r_{\rm H}/r)^{3/2}\), it follows that for \(r=r_{\rm H}\) we have \(\Omega\approx 0.58\Omega_{0}\), and the initial rotation velocity of a rarefied spherical clump with respect to its center of mass is then \(0.2\Omega\approx 0.12\Omega_{0}\). It follows from the above estimates that the angular velocity and angular momentum of the clump
formed directly from a protoplanetary disk were insufficient for the formation of a satellite system.
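The estimate \(k_{\omega}\approx 0.9\) can be reproduced in a few lines; the sketch below assumes two identical clumps that fill their Hill spheres and \(\chi=1\).

```python
import numpy as np

# Two identical clumps (m1 = m2) filling their Hill spheres merge into a clump
# of radius r = 2**(1/3)*r1 and mass m = 2*m1; chi = 1 is assumed.
k_theta = 1.0
ratio_r2 = 2.0**2 / 2.0**(2.0 / 3.0)                   # (r1 + r2)**2 / r**2
omega_c_over_Omega = 2.5 * k_theta * ratio_r2 * 0.25   # formula (6); m1*m2/(m1+m2)**2 = 1/4
print(f"omega_c / Omega = {omega_c_over_Omega:.3f}")   # ~1.575

# The merged clump again fills its Hill sphere, so Omega/Omega_0 = 1/sqrt(3).
k_omega = omega_c_over_Omega / np.sqrt(3.0)
print(f"k_omega = omega_c / Omega_0 = {k_omega:.3f}")  # ~0.909
```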
We note that the two clumps whose collision formed the clump whose contraction gave rise to the Earth and Moon embryos could have moved around the Sun in different planes before the collision; therefore, the orbital plane of the Moon embryo could differ from the ecliptic plane, and the present inclination of \(5.1^{\circ}\) could have been established in this way. Ipatov [224] showed that the characteristic path traveled by the constituents of one colliding clump before they collide with objects of the other clump is shorter than the size of the clumps, which indicates the possibility of clump merger during their collision. In the same study, estimates were obtained for the angular momentum \(K_{\rm s}\) of a clump of mass \(m_{\rm f}\) that grew by accumulating smaller objects. For the radius \(r\) of the growing clump equal to \(k_{\rm H}r_{\rm H}\) (where \(k_{\rm H}\) is a constant and \(r_{\rm H}\) is the Hill radius of the growing clump) and a mean tangential velocity of the accreted objects \(|v_{\tau}|=0.6\,v_{\rm c}ra^{-1}\), we have
\[K_{\rm s}\approx 0.173\,k_{\rm H}^{2}G^{1/2}a^{1/2}m_{\rm f}^{5/3}M_{\rm S}^{-1/ 6}\Delta K\,, \tag{7}\]
where \(M_{\rm S}\) is the solar mass and \(\Delta K=K^{+}-K^{-}\) is the difference between the positive \(K^{+}\) and negative \(K^{-}\) angular momentum increments for the clump when small celestial objects fall on it (\(K^{+}+K^{-}=1\)). Formula (7) was obtained by integrating the angular momentum increment with respect to the mass \(m\) from \(0\) to \(m_{\rm f}\). It was taken into account that the angular momentum increment is equal to \(\Delta K_{\rm s}=rv_{\tau}\,\Delta m\), with \(\Delta m=4\pi\rho r^{2}\,\Delta r\) and \(m=4\pi\rho r^{3}/3\) (the density \(\rho\) of the growing clump was assumed constant). When considering the growth of the clump mass from \(m_{0}\) to \(m_{\rm f}\) in formula (7), \(m_{\rm f}^{5/3}\) is replaced with \(m_{\rm f}^{5/3}-m_{0}^{5/3}\).
It is interesting to compare the angular velocity acquired by a clump in the course of accumulation of smaller objects with the angular velocity \(\omega_{0}\) required for the formation of a satellite system under clump contraction. Comparing \(K_{\rm s}=J_{\rm s}\omega_{0}\) (where \(\omega_{0}=k_{\omega}\Omega_{0}\) and \(J_{\rm s}=0.4\,\chi mr^{2}\)) with the \(K_{\rm s}\) calculated by formula (7), we obtain \(\Delta K\approx 0.8\chi\,k_{\omega}\) (for any \(r\) and \(m\)). It follows from this relation that, for \(\chi=1\), \(\Delta K\) is approximately equal to \(0.4\), \(0.5\), and \(0.6\) for the respective values of \(k_{\omega}\) given by \(0.5\), \(0.6\), and \(0.75\). The values of \(\Delta K\) are usually smaller for colliding objects with higher density and larger eccentricities of heliocentric orbits [103]. The above estimates are consistent with the fact that, in some cases, a clump that grew by accumulation of smaller objects could acquire the angular velocity necessary for the formation of a binary system.
Generally speaking, the clump angular momentum necessary for the formation of the Earth-Moon system could be gained through the accumulation of small objects alone by a clump with a final mass \(m_{\rm f}>0.15m_{\rm E}\). We believe, however, that the main contribution to the angular momentum of the parent clump was made by collisions of large clumps. Otherwise, if their parent clumps had acquired sufficient angular momentum in this way, Venus and Mars could have been formed together with large satellites, which is not the case. Probably, unlike Earth's clump, the clumps that formed the embryos of the other terrestrial planets did not collide with massive clumps at the contraction stage. If this scenario is true, then the clump that gave rise to Mars's embryo did not have a large angular momentum, and only small satellites, Phobos and Deimos, could form during its contraction, although another mechanism for their appearance can also be envisaged. The angular momenta of the clumps that gave rise to the embryos of Mercury and Venus were insufficient even for the formation of small satellites.
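A minimal sketch of this estimate inverts formula (7) for the final mass \(m_{\rm f}\); it assumes \(a=1\) AU, \(k_{\rm H}=1\), and the same \(K_{\rm EM}\approx 3.45\times 10^{34}\) kg m\({}^{2}\) s\({}^{-1}\) as above (assumed values, not quoted in the text).

```python
import numpy as np

G, M_S, m_E, AU = 6.674e-11, 1.989e30, 5.972e24, 1.496e11   # SI units
K_EM = 3.45e34      # assumed present-day Earth-Moon angular momentum, kg m^2 s^-1

def m_f_required(delta_K, a=AU, k_H=1.0):
    """Final clump mass for which formula (7) gives K_s = K_EM."""
    coeff = 0.173 * k_H**2 * np.sqrt(G * a) * M_S**(-1.0 / 6.0) * delta_K
    return (K_EM / coeff)**0.6          # invert m_f**(5/3)

for dK in (1.0, 0.8, 0.6):
    print(f"Delta K = {dK:.1f}:  m_f ~ {m_f_required(dK) / m_E:.2f} m_E")   # ~0.14-0.19 m_E
```

For realistic \(\Delta K\lesssim 0.8\), this indeed gives \(m_{\rm f}\) of the order of \(0.15m_{\rm E}\) or more.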
We emphasize that objects ejected from Earth's embryo in its collisions with other bodies were more likely to become part of a large Moon embryo than to stick together into similar objects of smaller mass. This contributed to the formation and growth of a larger Earth satellite than could have formed from the ejected matter alone. The presence of an Earth satellite embryo formed during the contraction of the clump can explain the absence of satellites of Venus. Various planetesimals fell on Venus and on Earth with approximately the same distributions of masses and velocities; in these collisions, matter was also ejected from the surface of Venus, but no satellite formed from it.
This approach, which suggests that the initial embryos of the Moon and the Earth could form from a common parental clump, differs substantially from earlier studies [206; 207; 208; 209; 210; 211] in which the Moon embryo was assumed to form and grow mainly due to Earth's crust material ejected from Earth's embryo during its numerous collisions with protoplanetary disk bodies. The main difference implemented in Ipatov's model [221] is that the initial embryo of the Moon was formed not from the substance ejected from Earth's embryo but from the same clump that Earth originated from, and the further growth of the Earth and Moon embryos formed under the contraction of the parent clump was similar to the multi-impact model. The matter that entered the Moon embryo could be ejected from Earth during numerous collisions of planetesimals and other smaller bodies with Earth, and not only during the \(\sim 20\) major collisions, as was considered, for example, in [212].
A fundamentally important question regarding the origin of the Earth-Moon system is the cause of the differences in the iron composition of these bodies and the role of migration processes in that difference. Assuming that the fraction of iron in the initial embryo of the Moon and in planetesimals was \(0.33\), that the respective fractions of iron in Earth's crust and in the Moon, according to modern data, are \(0.05\) and \(0.08\), and using the relation \(0.05k_{\rm E}+0.33(1-k_{\rm E})=0.08\), we can estimate the fraction \(k_{\rm E}\) of Earth's crust material in the composition of the Moon as \(\sim 0.9\). Therefore, to explain such an iron content of the Moon, we should assume that the mass of matter ejected from Earth's embryo and deposited on the Moon embryo was almost an order of magnitude greater than the total mass of planetesimals that fell directly on the Moon embryo and the initial mass of the Moon embryo formed from the parental clump, assuming that that embryo contained the same fraction of iron as the planetesimals [221; 226]. As we can see, the smaller the estimate of the fraction of Earth's crust matter in the Moon becomes, the more the Moon embryo, formed during the contraction of the clump, must have been depleted in iron and the greater its mass. As already noted, most matter that entered the Moon embryo could have been ejected from Earth during its numerous collisions with planetesimals and other bodies. We note that the ratio of the probability of a planetesimal collision with Earth to the probability of its collision with the Moon was less than the ratio of the Earth and Moon masses [146; 186]. Therefore, if all collisions of planetesimals with the Earth and the Moon had ended in mergers, the growth rate of the Moon relative to its mass would have been higher than that of the Earth. For a more accurate comparison of the growth rates for the Earth and Moon embryos, one should use the results of modeling planetesimal collisions with these embryos. We note that, due to the lower
mass (and hence weaker gravitational field) of the Moon embryo compared with Earth's embryo, some high-speed collisions of planetesimals with the Moon could lead not to mergers but, conversely, to the ejection of matter from the Moon embryo surface and even to a decrease in its mass.
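The mass balance behind the estimate \(k_{\rm E}\sim 0.9\) can be written out explicitly, using only the iron fractions quoted above.

```python
# Iron mass balance for the Moon: f_crust*k_E + f_plan*(1 - k_E) = f_moon
f_crust, f_plan, f_moon = 0.05, 0.33, 0.08   # iron fractions quoted in the text

k_E = (f_plan - f_moon) / (f_plan - f_crust)
print(f"fraction of Earth's crust material in the Moon: k_E ~ {k_E:.2f}")   # ~0.89
```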
The models of the formation of satellite systems that we propose impose some restrictions on the times of existence of rarefied clumps. For circular heliocentric orbits with semi-major axes close to \(a\) and differing by the Hill radius \(r_{\rm Ho}\), the ratio of the heliocentric periods of two clumps is about \(1+1.5r_{\rm Ha}\), where \(r_{\rm Ha}=r_{\rm Ho}/a\). In this case, the angle between the directions to the two clumps, with the apex placed at the Sun, changes by \(2\pi\times 1.5\,r_{\rm Ha}n_{\rm r}\) rad in \(n_{\rm r}\) revolutions of the clumps around the Sun. We assume that the collision of the clumps occurs when the semi-major axes of their orbits differ by \(r_{\rm Ho}\) and the initial angle between the directions to the clumps, with the apex at the Sun, is equal to \(\pi\) rad. Then the collision occurs after about \((3r_{\rm Ha})^{-1}\) revolutions. For a clump mass of \(0.01m_{\rm E}\), the corresponding time to the collision is approximately \(10^{8/3}/3\approx 155\) revolutions. In other words, the collision of the clumps that gave rise to the parental clump of the Earth-Moon system embryos could occur on a timescale of the order of 100 years after their formation.
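This estimate is easy to reproduce; the sketch below assumes \(a=1\) AU (so that one revolution corresponds to one year) and \(m_{\rm E}/M_{\rm S}\approx 3\times 10^{-6}\).

```python
m_ratio = 0.01 * 3.0e-6                 # clump mass of 0.01 m_E in units of the solar mass
r_Ha = (m_ratio / 3.0)**(1.0 / 3.0)     # Hill radius in units of the semi-major axis a
n_r = 1.0 / (3.0 * r_Ha)                # number of revolutions until the collision

print(f"r_Ho/a ~ {r_Ha:.1e}")                                       # ~2.2e-3
print(f"n_r ~ {n_r:.0f} revolutions (~{n_r:.0f} yr at a = 1 AU)")   # ~155
```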
In [224], the number of collisions was studied for clumps with masses \(m_{0}=10^{-7}m_{\rm E}\) (corresponding, e.g., to solid bodies with the diameter \(d_{\rm s}=100\) km and density \(\rho\approx 1.15\) g cm\({}^{-3}\)) moving in the same plane in a disk with the ratio of the distances from the disk edges to the Sun \(a_{\rm rat}=1.67\) (for example, a disk between 30 and 50 AU from the Sun) and the minimum distance \(a_{\rm min}\) from the Sun. If the surface density is the same over the entire disk, then the number of planetesimals at distances from the Sun in the range from \(a-2r_{\rm Ho}\) to \(a+2r_{\rm Ho}\) is \(N_{\rm nt}=8Nr_{\rm Ha}(a/a_{\rm min})^{2}/(a_{\rm rat}^{2}-1)\), where \(N\) is the number of clumps in the disk and \(r_{\rm Ha}\) and \(a_{\rm rat}\) are dimensionless quantities. The number was \(N=10^{7}\) for \(m_{0}=10^{-7}m_{\rm E}\) and a total disk mass equal to \(m_{\rm E}\) (the mass of the Earth). In this case, \(N_{\rm nt}\) ranges from \(2.5\times 10^{3}\) to \(6.9\times 10^{3}\) for the near and far edges of the disk. The average number \(N_{\rm c}\) of collisions of the Hill sphere of a given clump with other clumps in \(n_{\rm r}\) revolutions around the Sun can be estimated as \(1.5r_{\rm Ha}n_{\rm r}N_{\rm nt}\), if we assume that collisions can occur when the semi-major axes of the orbits of converging clumps differ by no more than \(2r_{\rm Ho}\) and the initial angle \(\Delta\varphi_{0}\) between the directions to the clumps, with the apex at the Sun, is equal to \(\pi\) radians. For \(r_{\rm Ha}\approx 4.6\times 10^{-5}\) and collisions possible between clumps with semi-major axes differing by up to \(2r_{\rm Ho}\), the average number \(N_{\rm c1}=N_{\rm c}/n_{\rm r}\) of clump collisions per revolution around the Sun is \(0.2\) for \(N_{\rm nt}\approx 3\times 10^{3}\) and \(0.4\) for \(N_{\rm nt}\approx 6\times 10^{3}\). That is, in \(3\times 10^{3}\) revolutions, the number of collisions is approximately \(20\%\) of the number of initial clumps. The fraction of binary systems in the population of minor planets is assumed to be \(0.3\) for cold classical TNOs and \(0.1\) for all other TNOs. In the above model of the formation of satellite systems of TNOs in collisions of clumps, the clumps should shrink several-fold over a time of about \(3\times 10^{3}\) revolutions around the Sun (760 thousand years at 40 AU). The estimates for this model are given for \(\Delta\varphi_{0}=\pi\); earlier collisions occurred at smaller values of \(\Delta\varphi_{0}\). Therefore, such a model gives an upper estimate of the clump contraction times. It was shown in [225] that the satellite system formation model based on the collision of two clumps is consistent with observations, according to which about \(40\%\) of binary objects discovered in the trans-Neptunian belt have a negative angular momentum relative to their centers of mass.
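For reference, the order-of-magnitude figures quoted above (the Hill radius of a \(10^{-7}m_{\rm E}\) clump, the collision rate per revolution, and the contraction time at 40 AU) can be checked as follows.

```python
# Hill radius (in units of a) of a clump with mass 1e-7 m_E
r_Ha = (1.0e-7 * 3.0e-6 / 3.0)**(1.0 / 3.0)
print(f"r_Ha ~ {r_Ha:.1e}")                                        # ~4.6e-5

# Collisions per revolution, N_c1 = 1.5*r_Ha*N_nt, for the quoted N_nt values
for N_nt in (3.0e3, 6.0e3):
    print(f"N_nt = {N_nt:.0f}:  N_c1 ~ {1.5 * r_Ha * N_nt:.2f}")   # ~0.2 and ~0.4

# 3e3 revolutions at 40 AU, where the orbital period is ~40**1.5 yr
print(f"3e3 revolutions at 40 AU ~ {3.0e3 * 40.0**1.5 / 1.0e3:.0f} thousand years")   # ~760
```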
## 4 Migration of planetesimals and planetary embryos in the feeding zone of the giant planets
The first calculations of the evolution of disks composed of hundreds of gravitating solid bodies that merge during collisions, for the final stages of accretion in the zone of the giant planets, date back to the 1980s [227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237]. Initial disks were assumed to include planetesimals and giant planet embryos whose initial orbits were close to the orbits of modern planets. Ipatov also modeled disks without planetary embryos and showed that the total mass of planetesimals ejected into hyperbolic orbits is an order of magnitude greater than the total mass of planetesimals that merged with planets. In the models in [234, 235, 236, 237], the merger of bodies occurred when they approached distances of 4 to 8 radii of the bodies; when taking the gravitational influence of planets into account, spheres smaller than their spheres of influence were used, and the mutual gravitational influence of planetesimals was not taken into account. Therefore, the ejection of bodies into hyperbolic orbits turned out to be significantly suppressed compared with that obtained by Ipatov [228, 229, 230]. In addition, it was found in the models referred to above that, while the semi-major axis of Jupiter's orbit decreased, the semi-major axes of the orbits of the other giant planets mainly increased. The results of numerical simulations also suggest that, in the process of accretion of the giant planets, more ice and rocky material could enter Jupiter's core and shell than occurs with the other planets.
The idea of the formation of Uranus's and Neptune's embryos near the orbit of Saturn was first proposed in [238, 175]. Studying the composition of Uranus and Neptune, the authors of these papers concluded that the embryos of these planets acquired hydrogen shells with a mass of about \((1\)-\(1.5)m_{\rm E}\) in Jupiter's and Saturn's growth zones even before the dissipation of gas from the protoplanetary disk occurred. Calculations in [239, 176, 145] showed that, under the influence of planetesimals that migrated from Uranus's and Neptune's feeding zones to Jupiter, the nearly formed Uranus and Neptune could migrate from Saturn's orbit to their modern orbits in 10 Myr, constantly moving in weakly eccentric orbits. Gravitational interactions were taken into account by the method of spheres. The eccentricities of Uranus's and Neptune's orbits remained small all this time. Later, similar calculations using the symplectic integration method were carried out in [240].
A model of the evolution of a disk that initially consisted of the terrestrial planets, Jupiter, Saturn, 750 identical bodies at a distance \(R\) from 8 to 32 AU from the Sun with the total mass equal to \(150m_{\rm E}\), and 150 smaller bodies at a distance \(2<R<4\) AU was studied by Ipatov [103, 145]. In the course of evolution, the smaller bodies were swept out of the asteroid belt, and individual massive bodies acquired semi-major axes of highly eccentric orbits at \(R<2\) AU. Such bodies completely penetrated the feeding zone of the terrestrial planets. Similar results were obtained in modeling the evolution of disks with the same initial bodies but with the present-day Jupiter and Saturn masses and with the Uranus and Neptune embryo masses equal to \(10m_{\rm E}\) moving in almost circular initial orbits (right-hand plots in Fig. 1). The initial values of the semi-major axes of the orbits of these planets were, respectively, 5.5, 6.5, 8, and 10 AU. In the course of evolution, Uranus and Neptune acquired orbits close to the present-day ones. We note that the results of calculations of such a migration [239, 176, 145] were published long before
the first studies based on the above-mentioned Nice model appeared [241; 242; 243], and, unlike in the Nice model, the migration of Uranus's and Neptune's embryos occurred without resonances with the giant planets. Also, the total mass of planetesimals in Uranus's and Neptune's feeding zones was greater; in the variants considered, it ranged from \(135m_{\rm E}\) to \(180m_{\rm E}\), and more than 80% of planetesimals were ejected into hyperbolic orbits. It was concluded in [103] that a disk of planetesimals with the total mass equal to \(100m_{\rm E}\) suffices for the migration of Uranus's and Neptune's embryos into present-day orbits. This mass decreases if the semi-major axes of the initial orbits of Uranus's and Neptune's embryos are taken larger than in the calculations (where they were 8 and 10 AU).
The main changes in the orbit parameters of the giant planet embryos occurred over a period no longer than 10 Myr, although individual bodies could fall on these embryos billions of years later. If most of the disk mass was due to small bodies, then the migration times of the planet embryos could be longer than with the initial values of the body masses chosen as \(0.2m_{\rm E}\) in the calculations. In addition to calculations of the migration of the giant planets that were initially in circular or almost circular orbits, the case of large (0.75-0.82) initial eccentricities of the orbits of the massive (\(10m_{\rm E}\)) embryos of Uranus and Neptune was also considered (left-hand plots in Fig. 1) [103; 145]. In this model, the eccentricities of the embryo orbits decreased during the interactions of the embryos with planetesimals, and the orbits could transform into the modern orbits of Uranus and Neptune if the initial perihelia of their orbits lay beyond Saturn's orbit; at smaller perihelion distances, on the other hand, these embryos were in most cases ejected into hyperbolic orbits. But the acquisition by Uranus's and Neptune's embryos of such eccentric orbits with perihelia lying outside the orbit of Saturn is unlikely.
The total mass of bodies going beyond the orbit of Neptune could reach several tens of Earth masses. The fact that Neptune has the smallest orbital eccentricity among the giant planets may be due to the masses of the largest planetesimals in its feeding zone being smaller than in the zones of the other giant planets. The dynamical lifetimes of most planetesimals inside the orbit of Neptune are less than 100 Myr. Therefore, the final stage of the intense bombardment of Earth, which took place 4.5-3.8 billion years ago, could be mostly due to objects that came from eccentric orbits lying mainly outside Neptune's orbit or from the zone of the outer asteroid belt.
We briefly discuss the issue of the growth rate of the giant planets. In studying the migration of planetesimals initially located at different distances from the Sun, estimates were also obtained for the fraction of planetesimals that collided with the giant planets. The evolution of planetesimal orbits under the influence of planets was modeled by numerical integration of the equations of motion. The probability of a collision with Uranus or Neptune for a planetesimal initially located outside Jupiter's orbit did not exceed 0.015 and was no more than a few thousandths in most variants of calculations. Therefore, with the total mass of planetesimals beyond Saturn's orbit being less than \(200m_{\rm E}\), the massive embryos of Uranus and Neptune that migrated from the zone in the vicinity of Saturn's orbit to the modern orbits increased their masses by no more than \(2m_{\rm E}\). The probability of a planetesimal collision with Jupiter in most variants of calculations did not exceed 0.05 and was several times less for Saturn. According to the estimates in [244; 245; 246; 247], the mass of the silicate component is (15-20)\(m_{\rm E}\) for Jupiter and is greater for Saturn. Along with silicates, planetesimals in Uranus's and Neptune's feeding zones also contained ice. Therefore, with the total mass of planetesimals beyond the orbit of Saturn being less than \(200m_{\rm E}\), the increase in the mass of Jupiter's silicate component due to such planetesimals probably did not exceed several Earth masses. Hence, we can conclude that the total mass of planetesimals beyond the orbit of Saturn being not greater than \(200m_{\rm E}\) is consistent with the composition of the giant planets.
With the ratio of the mass of dust made of rocky matter and ice to the mass of gas equal to 0.015 [248; 249; 250], the \(200m_{\rm E}\) mass of planetesimals corresponds to a disk mass equal to \(0.04M_{\rm S}\). Approximately the same values of the protoplanetary disk mass in the range of (0.04-0.10)\(M_{\rm S}\) were obtained in other studies [4; 34; 115; 250]. In a number of papers [167; 168; 169; 170], the Grand Tack model was studied, where Jupiter moves in the asteroid belt zone while being formed, as we discussed in Section 3.2.
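This conversion amounts to one line of arithmetic (taking \(M_{\rm S}\approx 3.33\times 10^{5}m_{\rm E}\)).

```python
dust_to_gas = 0.015            # dust (rock + ice) to gas mass ratio quoted above
planetesimals_mE = 200.0       # total mass of planetesimals, in Earth masses
M_S_in_mE = 3.33e5             # solar mass expressed in Earth masses (assumed)

disk_mass = planetesimals_mE / dust_to_gas / M_S_in_mE
print(f"disk mass ~ {disk_mass:.3f} M_S")   # ~0.04 M_S
```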
Of great interest is the question of how Jupiter was formed. A detailed review of relevant papers is given in [251]. The simulation results show that, with the formation of planetesimals at a distance of 5.2 AU, Jupiter's embryo could reach the mass of \(3m_{\rm E}\) in 0.1 Myr, but the planetesimals in the vicinity of its orbit were then practically exhausted.
Figure 1: Dependences on the number of bodies in the disk for semi-major axes (a) and eccentricities of the orbits of giant planet embryos (dashed lines in Fig. b) and the mean eccentricity of the orbits of bodies (solid line in Fig. b), ratios of the total mass of bodies ejected into hyperbolic orbits to their initial total mass (c), ratios of the total mass of bodies with orbital semi-major axes greater than 49.5 AU to their initial total mass (d), and ratios of the current time to the total disk evolution time (e). Cases of large, 0.75-0.82 (left), and small, 0.02 (right), initial eccentricities of Uranusβs and Neptuneβs embryo orbits are shown. (According to [103; 145].)
With the growth of Jupiter's embryo via the accretion of much smaller (\(\sim\)1 cm to 1 m) solids in the gas, its mass could increase to \(100m_{\rm E}\) in just several tens of thousands of years. When Jupiter's embryo and the gas in its immediate vicinity reached some critical values of mass, the stage of rapid gas accretion began, during which Jupiter's embryo mass increased several-fold in a time of \(\sim 0.1\) Myr. In some models, the time of Jupiter's mass growth from zero to approximately the present value was about \(2\) Myr [251]. However, already upon reaching a mass of about \(10m_{\rm E}\) in a time of \(\sim 0.1\) Myr, Jupiter's embryo was able to increase the orbital eccentricities of bodies from its feeding zone such that they could reach the feeding zone of the terrestrial planets at perihelia. As the masses of planetesimals increased and the gas density in Jupiter's feeding zone decreased, the mutual gravitational influence of the planetesimals increased the possibility for some planetesimals to start crossing Jupiter's orbit and then migrate to orbits with small perihelion distances.
A review of studies of secular perturbations of the planetary orbit elements was already given in the classic book by Subbotin [252] (Chapter XVIII, Section 7). In [253; 254], the evolution of the orbits of the four giant planets and of Pluto was studied in the respective intervals of \(\pm 100\) and \(\pm 50\) Myr. In [253], the bounds for the semi-major axes \(a\), eccentricities \(e\), and inclinations \(i\) of the orbits of these planets were calculated, and changes in Pluto's orbit elements were plotted. In [255], the equations of motion were integrated for the 8 planets over a 30-Myr interval, and plots of the secular changes in \(e\) and \(i\) for the terrestrial planets were presented. Ipatov [256] obtained plots of changes in the orbital elements of the planets due to their mutual gravitational influence; the integration was carried out over an interval of 20 Myr into the past. In particular, the orbital eccentricities were found to reach values of 0.07, 0.06, 0.127, 0.09, 0.076, and 0.02 for Venus, Earth, Mars, Saturn, Uranus, and Neptune, respectively, i.e., they could differ significantly from the present-day values. At the same time, for Uranus's, Neptune's, and Pluto's orbits, the respective ranges of changes in the semi-major axes were 0.23, 0.41, and 1 AU.
## 5 Formation of the asteroid and trans-Neptunian belts
A number of authors [120; 257; 258; 259] developed a model according to which the rearrangement of resonances associated with a change in the semi-major axis of Jupiter's orbit, along with the influence of planetesimals from the feeding zones of the giant planets, could be one of the reasons for the sweeping of bodies out of the main asteroid belt. Studies of the evolution of disks that initially consisted of the terrestrial planets, Jupiter, Saturn, 250 planetesimals with a total mass \(m_{\rm p}^{0}\sim 10m_{\rm E}\) and semi-major axes of their initial orbits from 5 to 10 AU, and 250 'asteroids' with semi-major axes of initial orbits from 2 to 4 AU [103; 145] showed that, in this scenario, the semi-major axes of the orbits of Jupiter, \(a_{\rm j}\), and Saturn, \(a_{\rm s}\), respectively decreased by \(0.005m_{\rm p}^{0}/m_{\rm E}\) AU and \(0.01m_{\rm p}^{0}/m_{\rm E}\) AU. In calculations with the eight planets and initial bodies in Uranus's and Neptune's feeding zones, \(a_{\rm j}\) decreased by \(0.005m_{\rm m}^{0}/m_{\rm E}\) AU, and \(a_{\rm s}\) increased by \((0.01\)-\(0.03)m_{\rm m}^{0}/m_{\rm E}\) AU, where \(m_{\rm m}^{0}\) is the total initial mass of bodies in Uranus's and Neptune's feeding zones. For \(m_{\rm m}^{0}/m_{\rm E}\geq 100\), the shifting resonances overlapped with a significant part of the asteroid belt. The dependences of the change in \(a_{\rm j}\) on \(m_{\rm m}^{0}\) and \(m_{\rm p}^{0}\) were approximately the same, which means that the changes in \(a_{\rm j}\) depended mainly on the total mass of planetesimals in the feeding zone of the giant planets rather than on the distribution of this mass over distances in this zone. In the course of several million years, \(a_{\rm j}\) first decreased by \(0.005m_{\rm p}^{0}/m_{\rm E}\) AU due to the ejection of bodies by Jupiter from Jupiter's and Saturn's feeding zones and then decreased more slowly by \(0.005m_{\rm m}^{0}/m_{\rm E}\) AU due to the ejection of bodies initially located beyond the orbit of Saturn. The positions of the resonances changed, and some bodies penetrated the asteroid belt zone and the feeding zone of the terrestrial planets. At some stages of the evolution, the orbits of about 1% of the bodies initially located in Uranus's and Neptune's feeding zones crossed Earth's orbit. Values of \(m_{\rm p}^{0}\) much smaller than the actual total mass of planetesimals in Jupiter's and Saturn's zones were assumed in the calculations in order to see the effect exerted by lower-mass planetesimals on the 'asteroids.' An increase in the average eccentricities of the 'asteroid' orbits to values not less than those in the present-day asteroid belt was obtained, with most of the 'asteroids' ejected into hyperbolic orbits; 5% of the 'asteroids' fell on Venus and 2.5% on Earth. In this model, Mercury and Mars even left the Solar System, which was apparently because the planetesimal masses used in the calculations (equal to \(0.04m_{\rm E}\)) were much greater than the average masses of real planetesimals in Jupiter's and Saturn's feeding zones.
The evolution of similar disks consisting of asteroids and massive bodies in Jupiter's and Saturn's zones was also considered in [259]. In [151], the influence of the residual gas in the disk was studied, and it was noted that, with the current eccentricity of Jupiter's orbit, most of the asteroid belt bodies were swept out due to the motion of a secular resonance. In [260], it was assumed that the initial asteroid belt could even have been empty.
The formation of TNOs with diameters of 100-1000 km from planetesimals with diameters of 1-10 km was studied in [261; 262; 263; 264; 265; 266; 267; 268; 269]. In these models, the TNO formation process took place at small eccentricities (usually \(e\sim 0.001\)) and for a massive belt (tens of \(m_{\rm E}\)). A runaway growth of objects in 100 Myr for \(e=0.001\) and in 700-1000 Myr for \(e=0.01\) was obtained in [266]. These times are longer than the formation time of massive Jupiter, which, as we have seen, does not exceed tens of millions of years. In other calculations that take the gravitational influence of the giant planets into account [103], the maximum TNO eccentricities always exceeded 0.05 in a period of 20 Myr. Obviously, gas drag could reduce the planetesimal orbit eccentricities, and the gravitational influence of the forming giant planets could be less than that of the modern planets. It is unlikely, however, that, in the presence of the gravitational influence of the forming giant planets and the mutual gravitational influence of planetesimals, small eccentricities could persist for the time necessary for the formation of TNOs more than 100 km in diameter from planetesimals 1 to 10 km in diameter. It is more probable [270] that TNOs with diameter \(d\geq 100\) km formed in the \(a>30\) AU zone by contraction of large rarefied clumps rather than by accumulation of small solid bodies. According to [271], planetesimals with diameters of several hundred kilometers in the zone of the giant planets, as well as large asteroids, could form in a similar way. Some smaller objects could be fragments of these large objects, while other small objects could form directly by contraction of clumps. Even if the masses of the initial clumps into which the dust disk fragmented were approximately the same at some distance from the Sun, the processes of their
merger and contraction gave rise to a rather arbitrary distribution of the resulting solids over mass [2, 24]. At a certain stage, oligarchic growth (run-away accretion) begins such that large planetesimals grow much faster than others due to the absorption of smaller ones. A similar effect can be observed when rarefied clumps merge.
The total mass of planetesimals entering the trans-Neptunian belt from the feeding zones of the giant planets could reach several tens of Earth masses. These planetesimals increased the orbital eccentricities of local TNOs, whose total initial mass could exceed \(10m_{\rm E}\), and swept most of these bodies out of this zone. A small fraction of these planetesimals may have remained beyond Neptune's orbit in highly eccentric orbits. Such a mechanism of the formation of TNOs in highly eccentric orbits and of 'local' TNOs in weakly eccentric orbits from matter located outside Neptune's orbit [230], as well as the first estimates of the gravitational interaction of TNOs, had been discussed even before the 1992 discovery of the first TNO (after Pluto) beyond the orbit of Neptune [272; 273]. It was shown that, over the past 4 Gyr, several percent of TNOs could have changed their semi-major axes \(a\) by more than 1 AU due to gravitational interactions with other TNOs. We note that even small changes in the TNO orbit elements, occurring as a result of mutual gravitational influence and collisions, can subsequently lead to significant changes in these elements under the gravitational influence of the planets [274]. The swinging of TNO orbits by three very large (\(1.5m_{\rm E}\)) 'planetesimals' was modeled in [275]. Currently, TNOs moving in highly eccentric orbits with perihelion distances \(q\) exceeding 40 AU are known (extended scattered disk objects), while typical scattered disk objects have perihelion distances \(q\sim 35\)-38 AU. In our opinion, many such TNOs with large \(q\) could be former planetesimals from the feeding zone of the giant planets.
It can be assumed that the population of trans-Neptunian bodies with diameter \(d>100\) km has not changed significantly over a time equal to the age of the Solar System because, at the relatively low collision velocities in this belt, even collisions of large bodies with \(d\sim 100-150\) km are unlikely; according to estimates, only about 5% of such bodies have participated in such collisions [268]. A body with a mass about \(100\) times less than that of a 100-km object is not capable of destroying the latter, but it can change its orbital velocity by several m s\({}^{-1}\) and its semi-major axis by \(\sim 0.2\)% (\(\sim 0.1\) AU). Such events can generally be frequent enough to provide a certain flux of bodies the size of Chiron. The semi-major axes of the fragments formed in such a collision then differ by 0.1 to 1 AU from the semi-major axes of the parent bodies.
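These numbers follow from momentum conservation and the vis-viva relation; the sketch below assumes a velocity change of 5 m s\({}^{-1}\) ('several m s\({}^{-1}\)') and a heliocentric distance of 42 AU (assumed representative values, not stated in the text).

```python
import numpy as np

GM_sun = 1.327e20                    # m^3 s^-2
AU = 1.496e11                        # m

a = 42.0 * AU                        # assumed heliocentric distance of the target TNO
v_orb = np.sqrt(GM_sun / a)          # circular orbital velocity, ~4.6 km/s
dv = 5.0                             # velocity change of "several m/s", in m/s

da_over_a = 2.0 * dv / v_orb         # tangential kick on a near-circular orbit (vis-viva)
print(f"v_orb ~ {v_orb / 1e3:.1f} km/s")
print(f"da/a ~ {100 * da_over_a:.2f} %  (da ~ {da_over_a * a / AU:.2f} AU)")   # ~0.2%, ~0.1 AU
```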
## 6 Volumes of water and volatiles delivered to the terrestrial planets
The migration of small Solar System bodies, which reflects the dynamical properties of the Solar System, is related to a number of planet formation processes. According to current ideas shared by most researchers in the field of planetary cosmogony, the delivery of water and volatiles from the outer to the inner regions of the Solar System had a decisive effect on the evolution of the terrestrial planets, primarily on the Earth, making it suitable for life. This question is relevant because the Earth and the other terrestrial planets were formed in the high-temperature (\(\sim 1000\) K) zone of the protoplanetary disk, where water and volatiles were not retained but accumulated beyond the ice line at a distance \(R>3\) AU. Studying the role of migration processes is therefore important for understanding the key problem of the origin of life, which is basic for astrobiology [1; 2; 276]. One way or another, despite a number of constraints, explaining the presence of water and volatiles on the Earth, which are mainly concentrated in the oceans and the atmosphere, by the migration of bodies from the outer regions of the Solar System allows circumventing the complications associated with the formation of the terrestrial planets in the high-temperature zone of the protoplanetary disk.
Endogenous and exogenous sources of water are considered the main potential mechanisms for the formation of Earth's oceans and presumed ancient oceans on Venus and Mars. Both mechanisms have certain limitations and can contribute jointly to solving this problem. Endogenous mechanisms are investigated by geochemical studies of the intrusive substance of magmatic melts of Earth's lithosphere, while studies of exogenous mechanisms are based on computer simulations. Both approaches allow reconstructing the geological history of Earth, accounting for a range of dynamical processes that have occurred throughout the history of the Solar System.
Endogenous sources of water could include direct adsorption of hydrogen from nebular gas into magma melts, followed by the reaction of H\({}_{2}\) with FeO, which could increase the D/H ratio in Earth's oceans 2 to 9 times [277], and the accumulation of water by the protoplanetary disk particles before the start of gas dissipation in the inner part of the young Solar System [278; 279]. The idea of a high water content in the mantle is supported by a number of studies, including laboratory analyses of olivine in Archean komatiite-basaltic associations (ultramafic lavas in Earth's greenstone belts) formed during melting under extreme conditions at the boundary of Earth's upper mantle [280]. These results indicate melting of the mantle at a temperature of 1630 K and a partial water content of \(\sim 0.5\)%; extrapolated to the entire volume of the mantle, this corresponds to several Earth oceans. In [278], the volume of water in the minerals of silicate Earth is estimated at 5 to 6 (up to 50) volumes of Earth's oceans. It was noted in [281] that deep mantle water could have been acquired as a result of water adsorption on fractal particles during Earth's accretion period and has a low D/H ratio.
Exogenous sources of water and other volatiles are associated with migration processes from the outer to the inner regions of the Solar System. They could include the migration of bodies from the outer part of the main asteroid belt [282; 283; 284; 169; 285; 150; 168; 286] and the migration of planetesimals from beyond Jupiter's orbit [282; 287; 288; 289; 290]. The migration of planetesimals from the 6-9.5-AU zone was considered within the Grand Tack model [170]. In these scenarios, the probability of collision of bodies with the Earth and the other terrestrial planets and the mass of delivered water and other volatiles were estimated. According to [278], the contribution of bodies from beyond Jupiter's orbit did not exceed \(\sim 50\)% of the water delivered to the Earth.
A number of authors advocated the hypothesis that most of the water came to the Earth from the outer asteroid belt zone. For example, according to [283], several embryos that came from this zone at the final stage of Earth's formation could deliver an amount of water to Earth that was an order of magnitude greater than the current value. Such embryos could be the size of Mars [169]. But this hypothesis has not been corroborated. The key argument against such an ample asteroid source of water on the Earth was data on the isotope
composition of osmium (Os) in the primary upper mantle of Earth, which turns out to be closer to that of ordinary anhydrous chondrites than to that of hydrous carbonaceous C-chondrites [278]. It is unlikely that the main source of water was the outer asteroid belt rather than the zone of the giant planets and endogenous sources: if C-asteroids had come from the feeding zones of the giant planets [260], then the D/H ratio in Earth's oceans would have been similar to that in bodies arriving at the Earth directly from these zones.
A more probable source of water on the Earth could be planetesimals, akin to comets, from the feeding zones of the forming giant planets. The vast majority of these bodies, according to current ideas, were ejected into hyperbolic orbits and to the periphery of the Solar System in the course of evolution and formed the Oort cloud, but many entered the zone of the terrestrial planets. At present, their main pools in the planetary system are the Jupiter- and Neptune-family comets. These bodies, whose icy matrix is abundant in water and volatiles, could become a source of heterogeneous accretion at the final stage of the formation of the Earth and other terrestrial planets. A well-known limitation of the model is the difference between the D/H ratio in comets [295], which lies in the range of \(\sim(2\)-\(4)\times 10^{-4}\) (with the exception of a few comets, in particular, 103P/Hartley 2), and the standard value \(\rm D/H=1.5576\times 10^{-4}\) in Earth's oceans (standard mean ocean water, SMOW) (Fig. 2). This limitation is removed, however, if we assume that there were several sources of exogenous water on the Earth, along with comets and CI and CM chondrites.
A number of researchers believe that the difference between \(\rm D/H\) values was determined at the earliest stages of the planetary system formation. According to [296], most of the water in the oceans was delivered by bodies formed in Jupiter's zone, where vapor from the inner Solar System condensed onto icy interstellar particles before they were accreted onto large bodies. It is believed in [278] that the measured \(\rm D/H\) and \(\rm Ar/O\) ratios in the coma and tails of comets do not faithfully represent the composition of comet nucleus material. It was also shown in [297] that the \(\rm D/H\) water ratio is different for bodies formed at different distances from the Sun: low for the hot inner disk, increasing with distance from the Sun, and then decreasing again. According to [260], C-type asteroids were formed at a distance of 5 to 20 AU from the Sun and acquired their present-day orbits when gas was still present in this zone, on a time scale of 3-5 Myr [298]. In [299], the cause of the low \(\rm D/H\) ratio for water in Earth's oceans is associated with the deuterium-to-tritium ratio affected by cosmic radiation, which acted by implanting a neutron in deuterium on dust particles in the disk.
As we can see, due to a number of factors, the \(\rm D/H\) ratio could be different in many planetesimals from the feeding zones of the giant planets, in asteroids from the inner and outer belts, and in comets. In addition, as already noted, some contribution from an endogenous source can be assumed, and the study of lavas [281] showed that deep mantle water has a rather low \(\rm D/H\) ratio. In other words, as noted in [186], the water in Earth's oceans (and hence the modern SMOW \(\rm D/H\) ratio) could be the result of mixing water from several sources, with large and small \(\rm D/H\) ratios.
In a number of studies [288; 289; 290; 291; 292; 293; 294] based on numerical simulations, quantitative estimates were obtained for the delivery of water and volatiles to the Earth and the terrestrial planets. The authors studied the migration of tens of thousands of small bodies (Jupiter-family comets or planetesimals) and dust particles that originated from such bodies. The gravitational influence of seven planets (from Venus through Neptune) was taken into account. A symplectic integrator was used to integrate the equations of motion [300]. In particular, the orbit evolution was studied for \(>30\),000 bodies with initial orbits close to those of Jupiter-family comets, Halley-type comets, long-period comets, and asteroids in the \(3/1\) and \(5/2\) resonances with Jupiter, together with \(>20\),000 dust particles [289; 290; 291; 292; 293; 294]. Integration continued until all bodies or particles reached the distance 2000 AU from the Sun or collided with the Sun. Based on the orbital elements of migrating bodies or particles obtained with a certain time step in the course of the dynamical lifetime, the probabilities of their collisions with planets were studied. The mean value of the probability \(p_{\rm E}\) of colliding with the Earth exceeded \(4\times 10^{-6}\) for a Jupiter-family comet and was \(p_{\rm E}=2\times 10^{-6}\) for a planetesimal from Jupiter's and Saturn's zones [186]. Half of that value is obtained if the gravitational influence of the planets is taken into account in calculations by the method of spheres [301].
Figures 3 and 4 show the values of \(p_{\rm E}\times 10^{6}\) for various distances from the Sun. Each value is derived from the evolution of the orbits of several hundred (up to 2000) planetesimals under the gravitational influence of the planets. In each run of the calculations, the initial values of the semi-major axes of the orbits of 250 planetesimals ranged from \(a_{\rm min}\) to \(a_{\rm min}+d_{\rm a}\) [AU], the initial eccentricities were equal to \(e_{0}\), and the initial inclinations were \(0.5e_{0}\) rad. For each pair of \(a_{\rm min}\) and \(e_{0}\) values, several (up to 8) calculation runs were launched. For \(a_{\rm min}\geq 3.6\) AU (except at \(a_{\rm min}=4.2\) AU), the considered time interval was such that no more than a few percent of the initial bodies remained in elliptical orbits at the end of evolution. For \(a_{\rm min}\leq 3.6\) AU, the time interval reached the lifetime of the Solar System. Figure 3 shows the \(p_{\rm E}\times 10^{6}\) values for \(d_{\rm a}=2.5\) AU and \(a_{\rm min}\) from 2.5 to 40 AU, and Fig. 4, for \(d_{\rm a}=0.1\) AU and \(a_{\rm min}\) from 3.0 to 4.9 AU. For one among several thousand planetesimals, the probability of a collision with the Earth could be greater than the total probability for thousands of other planetesimals. If Fig. 3 were drawn with the data for such planetesimals excluded, then, instead of two
Figure 2: Hydrogen isotope ratios (\((\rm D/H)_{\rm H_{2}O}\)) in water molecules measured in objects of the outer Solar System in comparison with data for carbonaceous CM chondrites and Earth's Vienna Standard Mean Ocean Water (VSMOW). Data are presented for the Saturn system, short-period Jupiter-family comets (JFCs), Halley-type comets, and long-period comets (LPCs). (According to [295].)
\(p_{\rm E}\) peaks at 7.5 and 10 AU, there would be values close to the \(p_{\rm E}\) values for the neighboring \(a_{\rm min}\). Nevertheless, we see from Figs 3 and 4 that the \(p_{\rm E}\) values tend to decrease as \(a_{\rm min}\) increases; in Fig. 4, for \(3.2\leq a_{\rm min}\leq 4.1\) AU, the values of \(p_{\rm E}\) are on average substantially greater than those for \(a_{\rm min}\geq 4.2\) AU.
Using the value \(p_{\rm E}=2\times 10^{-6}\) and the estimate of the total mass of \(\sim 100m_{\rm E}\) of planetesimals from Jupiter's and Saturn's feeding zones given in Section 4, we obtain the total mass of bodies that fell on the Earth at \(2\times 10^{-4}m_{\rm E}\). Approximately the same mass of bodies could be acquired by the Earth from beyond the orbit of Saturn and the outer asteroid belt. Assuming that the ice of water and other volatiles accounted for about half this mass, we obtain the result that the total mass of water delivered to the Earth from beyond the ice line was \(\sim 2\times 10^{-4}m_{\rm E}\), which corresponds to the mass of Earth's oceans (\(2\times 10^{24}\) g) or is slightly less if we assume that the fraction of ice in planetesimals was \(\sim 1/3\)[186].
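The arithmetic behind this estimate is straightforward; the sketch below uses \(m_{\rm E}\approx 5.97\times 10^{27}\) g only for the conversion to grams.

```python
m_E_g = 5.97e27         # Earth mass in grams (assumed for the unit conversion)
p_E = 2.0e-6            # collision probability with the Earth per planetesimal
mass_JS = 100.0         # total planetesimal mass in Jupiter's and Saturn's zones, in m_E

m_from_JS = p_E * mass_JS    # ~2e-4 m_E from Jupiter's and Saturn's feeding zones
m_total = 2.0 * m_from_JS    # roughly the same again from beyond Saturn and the outer belt
m_water = 0.5 * m_total      # about half of the infalling mass taken to be water ice

print(f"water delivered ~ {m_water:.1e} m_E ~ {m_water * m_E_g:.1e} g")
# ~2e-4 m_E, of the same order as the ocean mass (~2e24 g) quoted above
```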
A significant fraction of water could have been delivered to Earth's embryo when its mass was much less than the modern Earth mass. The results of numerical simulation show that, when the embryo grew to half the modern mass of Earth, it received \(\sim 30\%\) of the total volume of water delivered to the Earth from Jupiter's and Saturn's feeding zones [186]. The volume of water delivered to Venus, per unit mass of the planet, turned out to be approximately the same as for Earth, and that to Mars, approximately two to three times greater. These estimates show that the terrestrial planets could receive an amount of water and volatiles comparable to or even greater than that received by the Earth due to planetesimals from the feeding zone of the giant planets, which testifies in favor of the hypothesis positing ancient oceans on Mars and Venus [302]. The mass of water in bodies that came from beyond Jupiter's orbit and collided with the Moon could be less than that for bodies that collided with the Earth, but no more than by a factor of 20 [186]. However, the fraction of water evaporated during collisions of bodies with the Moon was greater than during collisions with the Earth.
Naturally, these values are estimates. This applies primarily to calculations of the probability of collisions between planets and bodies from beyond Jupiter's orbit. In particular, the results of simulations with 250 bodies showed that the \(p_{\rm E}\) values calculated in different variants with the same initial values of the semi-major axes and eccentricities could differ by a factor of 100 or more. Thus, in calculations of the migration of planetesimals with semi-major axes of the initial orbits from 3 to 5 AU, the \(p_{\rm E}\) values beyond Jupiter's orbit varied from less than \(10^{-6}\) to about \(10^{-3}\), although they were typically confined to between \(10^{-6}\) and \(10^{-5}\). Therefore, for better statistics and improved accuracy of simulation results, as many initial bodies as possible must be used and, accordingly, large computing power is needed. However, this remark does not affect the main conclusion about the important role of the exogenous source of water and volatiles in the evolution of the Earth and can only somewhat change the above quantitative estimates.
## 7 Migration of bodies from the asteroid and trans-Neptunian belts to the Earth's orbit and the problem of the asteroid-comet hazard
The main asteroid belt, the trans-Neptunian belt (Kuiper belt), and the Oort cloud are considered to be the main sources of near-Earth objects (NEOs) with a perihelion distance \(q<1.33\) AU. The fraction of close encounters of active comets with the Earth among encounters of all bodies is about 0.01, but 'extinct' comets, whose nuclei, like asteroids, show no activity, can be much more numerous than active comets.
The problem of the asteroid-comet hazard (ACH) concerns asteroids, comets, and meteoroids approaching the Earth. It attracts increasing attention as a potential source of threats to Earth's civilization. Among the NEOs, the greatest danger is posed by the three main groups of near-Earth asteroids: the Amor, Apollo, and Aten groups. Bodies from the Amor group come close to the Earth's orbit, and those from the Apollo and Aten groups cross it, their semi-major axes being respectively greater or less than 1 AU. There is also a small group of asteroids, the Atira group, whose orbits lie entirely inside the Earth's orbit (Fig. 5). Some NEOs reach kilometer sizes, and their collision with the Earth could cause a global catastrophe, as has happened more than once in the geological history of our planet (see, e.g., [1]). ACH questions are discussed in many papers (see, e.g., [271; 288; 303; 304; 305; 306; 307; 308; 309]). The degree of hazard, depending on the body size and the expected rate of events, is determined according to the so-called Torino scale (Fig. 6), which, however, does not have official international status.
Figure 4: Probability \(p_{\rm E}\) of a planetesimal colliding with the Earth as a function of \(a_{\rm min}\). In each run of calculations, initial values of semi-major axes of planetesimal orbits changed from \(a_{\rm min}\) to \(a_{\rm min}+0.1\) AU. Different lines correspond to initial orbital eccentricities equal to 0.02 or 0.15.
Figure 3: Probability \(p_{\rm E}\) of a planetesimal colliding with the Earth as a function of \(a_{\rm min}\). In each run of calculations, initial values of semi-major axes of planetesimal orbits changed from \(a_{\rm min}\) to \(a_{\rm min}+2.5\) AU. Different lines correspond to initial orbit eccentricities equal to 0.05 or 0.3.
The fall of the Chelyabinsk meteorite in 2013 [307; 310] testifies to the serious consequences of the fall of even a relatively small (\(\sim 20\) m) body in a densely populated area. Figure 6 allows estimating, for bodies of various sizes, the probability of collision with the Earth and the kinetic energy released in this case on a scale from 1 (minimal damage) to 10 (global catastrophe). Obviously, the larger the size of the bodies that pose a real threat in the ACH context, the lower the probability of such an event.
Many researchers (see, e.g., [308; 311]) believe that asteroids are the main source of NEOs, while comets are more dangerous due to their sudden appearance. During the evolution of the Solar System, the migration of small bodies from the Kuiper belt led to the replenishment of NEOs. According to [312; 270], almost all NEOs came from the trans-Neptunian belt, while in [313] at least half the NEOs were assumed to be former short-period comets. The cometary nature of NEOs was also advocated in [314]. The cometary origin of some asteroids is indicated by the spectral characteristics of meteorites, which differ from those of asteroids [315]. In [316], about 40 active and 800 extinct comets of the Jupiter family with a diameter of more than 1 km and period \(P<20\) years, crossing Earth's orbit, and about 140 to 270 active Neptune-family Halley-type comets (\(20<P<200\) years) were assumed to exist. According to the estimates in that paper, active and extinct comets are responsible for about 20% of craters greater than 20 km in diameter on the Earth.
The most effective NEO sources are resonances in the main asteroid belt. NEOs can be replenished from the inner part of the asteroid belt (in particular, due to the \(\nu_{5}\), \(\nu_{6}\), and \(\nu_{16}\) secular resonances) [317; 318; 319; 320; 321; 322] and can also come from Kirkwood gaps [323; 324; 325; 326]. Small asteroids can enter resonances due to the Yarkovsky effect and as a result of mutual collisions. When bodies enter a resonance, they can greatly increase the eccentricities of their orbits, reaching the orbits of Mars and Earth, and can also leave the resonance zones due to close encounters with these planets. In all likelihood, the Kirkwood gaps in the main asteroid belt -- regions corresponding to minima in the distribution of orbital semi-major axes \(a\) -- were formed in this way. This hypothesis about the origin of the gaps was put forward for gaps corresponding to the ratios of the heliocentric orbital periods of Jupiter and an asteroid equal to 3:1 [327; 328] and 5:2 [324; 329; 330], and also 2:1 or 7:3. For a number of hypothetical asteroids in the 5:2, 3:1, and 2:1 resonances, with quasiperiodic changes in the eccentricity \(e\), an increase in \(e\) from 0.15 to 0.75, 0.45, and 0.35, respectively, was obtained [331]. Studies (including those by numerical integration of the equations of motion) of the evolution of the orbits of hypothetical asteroids in the vicinity of the 5:2 resonance with Jupiter were carried out in [331; 332; 333; 334; 335; 336]. In [337; 338] for the 3:1 resonance and in [334; 335; 336; 337] for the 5:2 resonance, it was found that the boundaries of the regions of the initial \(a\) and \(e\) values that allow reaching the orbit of Mars are close to the boundaries of the corresponding Kirkwood gaps.
Figure 5: Near-Earth objects. (Source: James Green, NASA.)
Figure 6: Torino scale used to determine the probability (expectation value) of a collision with Earth for a cosmic body (asteroid or comet) depending on the kinetic energy of the collision, expressed in TNT megatonnes, and the diameter of the body at a typical collision speed. The hazard score ranges from 1 (minimum) to 10 (maximum, threatening a global catastrophe). (Source: Wikipedia.)
It was shown in [339; 340] that, in the presence of mean motion resonances with Mars (3:5, 7:12, 4:7, 5:9, 7:13, and 1:2) and with Jupiter (7:2 and 10:3), and a three-body resonance among Jupiter, Saturn, and an asteroid, or among Mars, Jupiter, and an asteroid, matter can be delivered to Mars and Earth. Because even small changes in the semi-major axis of an asteroid can bring it into the resonance region, the number of bodies that delivered matter to Earth was significant, and collisions with them pose a threat in the modern era.
Based on model calculations [308], it is believed that most of the NEOs came from the region of the 3:1 resonance with Jupiter in the inner part of the main asteroid belt, 40% from the \(\nu_{6}\) secular resonance, and 13% from an intermediate source of orbits crossing the orbit of Mars, while 16% are former Jupiter-family comets. It also follows from the model that asteroids from the outer part of the belt (\(a>2.8\) AU) passing to NEO orbits on average spend only 140 thousand years there, which is 16 times less than in the case of the 3:1 resonance and much less than for other zones within the main asteroid belt. To estimate the fraction of each of the four sources mentioned above, the evolution of the orbits of several real asteroids and test Jupiter-family comets was modeled, and the characteristic elements of the NEO orbits obtained for the different sources were compared with the actual distribution. It was shown that there may be a large number of unobserved extinct comets whose aphelia lie inside Jupiter's orbit and which also serve as a source of NEOs. The median lifetime of NEOs is 10 Myr [308; 341], with more than half of them falling on the Sun, 10-20% falling on planets (mainly on Venus and Earth), and 15% being ejected from the Solar System. Among the bodies that came from the zones of the giant planets, the fraction of those that cross Earth's orbit is an order of magnitude greater than the fraction of bodies residing in orbits that cross only Mars's orbit, and they usually have \(e>0.6\). These results indicate that most asteroids of the Amor group did not come from the zones of the giant planets but from the asteroid belt.
NEO sources, along with the main asteroid belt, also include TNOs (scattered disk and Kuiper belt objects) and comets from the Oort cloud (Figs 7-9). They are closely related to each other genetically and evolutionarily. The connection of short-period comets with the trans-Neptunian belt was established in [342], the migration of TNOs to Neptune's orbit [343; 344; 345] and the migration of bodies from the orbit of Neptune to Jupiter's orbit [345; 346] were studied, and the probability of the transition of objects crossing Jupiter's orbit into Encke-type orbits was found to be less than 0.0023 [347]. The cosmic ages of meteorites, which establish the time interval from the moment of their breakaway from the parent body or the complete breakup of the latter, are spread over a wide range of \(10^{6}\)-\(10^{9}\) years [348].
The number of trans-Neptunian objects with diameter \(d\geq 1\) km is estimated as \(5\times 10^{9}\) [351] to \(10^{10}\)-\(10^{11}\) [349, 352], and with diameter \(d\geq 100\) km, of the order of 70,000 [346, 349]. Scattered disk objects have \(e_{\rm av}\approx 0.5\) and \(i_{\rm av}\approx 16^{\circ}\), and the total mass of bodies moving in highly eccentric orbits between 40 and 200 AU is estimated as \(0.5m_{\rm E}\) in [353] and \(0.05m_{\rm E}\) in [354]. The number of scattered disk objects with \(d\geq 100\) km is about 30,000 [354]. The probability that a trans-Neptunian body with \(a<50\) AU leaves the belt in a year is (3-5) \(\times 10^{-11}\) [343], although some objects could be more likely to reach the orbit of Neptune due to mutual gravitational influence [273].
The evolution of orbits of small bodies under the gravitational influence of planets was studied in detail by numerical integration of the equations of motion in [274, 271, 355, 356]. In modeling the evolution of the orbits of about a hundred Kuiper belt objects, it was found that the perihelia of the orbits of two such objects decreased in the course of evolution to 1 AU in 25 and 64 Myr. The mean time during which a body moves in the orbit of a Jupiter-crossing object (JCO) is 200 thousand years, the fraction of JCOs that reach Earth's orbit during their lifetime is 0.2, and the mean time during which JCOs cross Earth's orbit is about 5000 years. Based on these results, it was concluded that, with \(10^{10}\) Kuiper belt objects 1 km in size and 750 objects 1 km in size that cross Earth's orbit (ECOs), the number of present-day JCOs coming from the trans-Neptunian belt is \(N_{\rm J}=30{,}000\), and about 170 former TNOs (about 20% of the ECOs) cross the Earth's orbit. Objects that cross the Earth's orbit and have lost cometary activity stay relatively far from the Earth most of the time and are more difficult to observe than typical ECOs.
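The population figures quoted in this paragraph can be checked to order of magnitude with a simple steady-state balance (inflow rate times residence time). The Python sketch below is only such an illustrative bookkeeping exercise built from the numbers quoted above and in the previous paragraph; it is not the authors' actual calculation.

```python
# Order-of-magnitude steady-state bookkeeping for Jupiter-crossing objects (JCOs)
# and Earth-crossing objects (ECOs) of trans-Neptunian origin, using only the
# figures quoted in the text. Illustrative only.

N_TNO_1KM      = 1.0e10   # 1-km trans-Neptunian objects (text)
P_LEAVE_PER_YR = 4.0e-11  # probability per year that a TNO leaves the belt (text: 3-5e-11)
T_JCO_YR       = 2.0e5    # mean time spent in a Jupiter-crossing orbit (text: 200 kyr)
F_REACH_EARTH  = 0.2      # fraction of JCOs that ever reach Earth's orbit (text)
T_ECO_YR       = 5.0e3    # mean time a JCO spends crossing Earth's orbit (text)

inflow_rate = N_TNO_1KM * P_LEAVE_PER_YR              # bodies leaving the belt per year
n_jco = inflow_rate * T_JCO_YR                        # steady-state JCO population
n_eco_former_tno = inflow_rate * F_REACH_EARTH * T_ECO_YR

print(f"inflow  ~ {inflow_rate:.1f} bodies/yr")
print(f"N_JCO   ~ {n_jco:.0f}   (text quotes ~30,000)")
print(f"N_ECO   ~ {n_eco_former_tno:.0f}     (text quotes ~170 former TNOs)")
# Both estimates come out within a factor of a few of the quoted values, as expected
# for such a crude steady-state balance.
```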
The number of NEOs and their orbital parameters (for example, mean inclinations, which are greater than those for objects in the main asteroid belt) are difficult to explain if only asteroid sources are considered, as was noted in [158]. In our opinion, the mean orbital inclination of the NEOs that arrived from the main asteroid belt may be the same as for all NEOs (\(\sim 15^{\circ}\)) and may be greater than the mean orbital inclination in this belt (\(\sim 10^{\circ}\)), because these objects could increase their orbital inclinations when they were in resonance.
The migration of comets from Neptune's orbit to the interior of the Solar System was first studied by Kazimirchak-Polonskaya [345]. The evolution of the orbits of real short-period comets under the influence of planets over an interval of 10 Myr was simulated numerically by Levison and Duncan [346]. Calculations showed that 91% of comets were thrown into hyperbolic orbits, 1% and 0.1%, respectively, collided with Jupiter and Saturn, and 6% (including Encke's Comet) evaporated during close encounters with the Sun (Sun-grazers). The median lifetime of comets turned out to be 0.45 Myr. In the model in [346], the evolution of the orbits of objects leaving the trans-Neptunian belt was studied. For such objects, the median lifetime was 45 Myr, during which 30% became visible comets (with \(q<2.5\) AU), 99.7% of them belonging to the Jupiter family. Comparing the observed distribution of the orbits of Jupiter-family comets with the distribution obtained in the calculations, the authors concluded that the physical lifetime of Jupiter-family comets is 12,000 years. According to their estimates, a Jupiter-family comet falls on Jupiter about once every 400 years, and on Earth once every 13 Myr. Twenty-five % of the objects considered were ejected into hyperbolic orbits, 68% acquired \(a>1000\) AU, and 1.5% collided with planets, including 30% with Neptune, 33% with Uranus, 13% with Saturn, and 23% with Jupiter. The median lifetime of Halley-type comets (with a period \(20<P<200\) years), prior to their ejection into hyperbolic orbits, was estimated to be 1 Myr [357]. It can be concluded that TNOs are an excellent source of Jupiter-family comets and that they produce almost no Halley-type comets, which came mainly from the Oort cloud. According to [358], Jupiter-family comets are in resonance with Jupiter for a part of their lifetime. The velocities of collisions with the Moon for bodies that came from Jupiter's and Saturn's feeding zones were mainly in the range from 20 to 23 km s\({}^{-1}\), and with Earth, 3 km s\({}^{-1}\) higher [359].
The time of cometary activity has been estimated by many scientists; on average, it is about \(10^{3}\)-\(10^{4}\) years (according to [360], from \(3\times 10^{3}\) to \(3\times 10^{4}\) years). Some comets that have lost their activity can continue moving for tens and even hundreds of millions of years in orbits that cross Earth's orbit. Therefore, the number of extinct comets can exceed the number of active comets by several orders of magnitude. During close flybys of planets, comets can break apart into several parts, like Comet Shoemaker-Levy 9 during its close approach to Jupiter in July 1992, two years before its collision with Jupiter [361, 362].
Asteroids and comet nuclei are responsible for the formation of craters on the surfaces of planets, their satellites, and small bodies themselves. It is believed that a relatively uniform size distribution of interplanetary impactors of mixed origin was established as early as 4 Gyr ago [363] and that the flow of crater-forming bodies has been approximately constant over the past 3 Gyr; about 4 Gyr ago, during the late heavy bombardment, it was 100-500 times more intense [364, 309]. The rate of NEO collisions with Earth, according to estimates in [365], is 2, 14, 24, and 30 times higher compared to Venus, Mars, the Moon, and Mercury, respectively. It was shown in [366] that collisions with asteroids probably dominated the formation of terrestrial craters with a diameter \(D<30\) km, while comet impacts were responsible for the formation of craters with diameter \(D>50\) km. This feature can be explained by the fact that TNOs can leave the trans-Neptunian belt almost without collisions, in contrast to the main asteroid belt bodies, which have experienced multiple collisions.
Data on lunar craters less than 100 m in diameter suggest that the flow of crater-forming bodies has been approximately constant over the last 100 Myr [309]. It was assumed in [367] that fragments of an asteroid destroyed about 160 Myr ago in the asteroid belt could have formed the Baptistina family, which then caused an increase in the flow of bombarding bodies. An analysis of the ages of Copernican period craters (less than 1.1 Gyr) led Mazrouei et al. [368] to the conclusion that the number of NEO collisions with the Moon per unit time increased 290 Myr ago by about a factor of 2.6. Estimates of the age of the craters were based on an analysis of the thermophysical properties of the material ejected during impacts, according to the Diviner radiometer on the LRO lunar satellite (USA). At the same time, a shortage of terrestrial craters aged between 300 and 650 Myr and their almost complete absence for a later age were noted. We remark that the assumption of a twofold increase in the number of craters over the past 300 Myr was made earlier in [369] based on the study of bright beams of ejecta from craters. Estimates for the age of ray craters on the far side of the Moon are less than 1 billion years.
The number of lunar craters \(>15\) km in diameter with an age \(<1.1\) Gyr was compared with estimates of the number of craters that could have been formed in 1.1 Gyr, assuming that
the number of NEOs \(>1\) km in diameter and their orbital elements remained close to their modern values during that period [370]. The comparison was made between craters on the entire surface of the Moon and data for a region of the Ocean of Storms and the maria on the visible side of the Moon. The probabilities of NEO collisions with the Moon and the dependences of crater diameters on the impactor diameters were used. The number of known Copernican craters with diameter \(D\geq 15\) km per unit area of maria was noted to be at least twice that number for the rest of the lunar surface. The results are also consistent with the idea of an increase in the number of NEOs after a possible catastrophic destruction of large main belt asteroids that could have occurred during the past 300 Myr, but they do not prove that such an increase actually took place. In particular, they are consistent with the conclusion in [368] that the number of collisions of near-Earth asteroids with the Moon per unit time increased 290 Myr ago by a factor of 2.6. If the probability of a collision with Earth per year for an object crossing Earth's orbit is \(10^{-8}\) (over long time intervals), estimates of the number of craters presented in [370] correspond to a model in which the number of 15-km Copernican craters per unit area for the entire surface of the Moon would be the same as for the mare region if the data [371] for \(D<30\) km were as complete as for \(D>30\) km. In that case, the rate of crater formation could be approximately constant over the last 1.1 Gyr. The dependences of the depths of lunar craters on their diameters were considered in [372]. It was noted that, for crater diameters below 30 to 40 km, craters on the lunar maria are deeper than those on the continents, whereas at larger diameters the continental craters are deeper.
## 8 Migration of dust in the Solar System and the formation of the zodiacal cloud
In addition to comets and asteroids, numerous meteoroids fall on the terrestrial planets. They are bodies intermediate in size between interplanetary dust and asteroids (less than a few tens of meters in diameter). They include, in particular, the parent body of the Chelyabinsk meteorite mentioned above. The number of meteoroids grows exponentially as their size and mass decrease. It is estimated that about 98 to 99% of such bodies weighing less than 100 g in the vicinity of Earth are of cometary origin.
Particles formed during asteroid collisions and ejected by comets during the sublimation of the icy core matrix are the main source of interplanetary dust. Their size ranges from nanometers to millimeters, and the lower threshold is at the upper bound of the size of molecular clusters. The millimeter-centimeter borderline is used as a convention to distinguish between dust particles and meteoroids. The dust particles released during sublimation from a comet nucleus form dust tori along the orbits of their parent comets, which are periodically crossed by the Earth and cause meteor showers. The known meteor showers are directly related to their parent comets, with the radiants projected onto the constellations after which these meteor showers are named. The orbit of the Geminid meteor shower practically coincides with the orbit of the asteroid 3200 Phaethon [373].
The amount of material contained in dust particles and small meteoroids and falling daily into Earth's atmosphere ranges from 30 to 180 t, of which 32 t is accounted for by bodies smaller than 0.5 m, according to [374]. At the early stage of planetesimal formation, the main role was played by dust particles of micrometer and millimeter sizes; along with planetesimals, particles of that size and larger probably made a significant contribution to the formation of planets.
The migration of dust particles has been considered by a number of authors (see, e.g., [375; 376; 377; 378; 379; 380; 381; 382; 383; 384; 289; 294; 385]). In our numerical models [289; 292; 294; 384; 385], in addition to the gravitational influence of the planets, other factors (radiation pressure, the Poynting-Robertson effect, solar wind) were also taken into account. The relative error at the integration step in the Bulirsch-Stoer algorithm was less than \(10^{-8}\). The initial particle orbits were assumed to be the same as for 500 real asteroids and for various comets and planetesimals from Jupiter's and Saturn's feeding zones. The ratio \(\beta\) of the radiation pressure force to the gravitational force was in the range of 0.0004-0.4. For silicates, according to [378; 379], such values of \(\beta\) correspond to particle diameters from 1000 to 1 \(\mu\)m. The planets were considered material points, but the elements of the orbits obtained with a 20-year step were used to calculate (similarly to calculations for small bodies) the average probabilities \(p\) of particle collisions during their lifetime with the planets and the mean times during which the perihelia of the particle orbits were less than the semi-major axes of a planet's orbit.
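The correspondence between \(\beta\) and particle size quoted above can be illustrated with the standard relation for spherical grains, \(\beta\approx 5.7\times 10^{-5}Q_{\rm pr}/(\rho s)\) (Burns, Lamy, and Soter), with the density \(\rho\) in g cm\({}^{-3}\) and the grain radius \(s\) in cm. The Python sketch below, which assumes \(Q_{\rm pr}\approx 1\) and a silicate density of 2.5 g cm\({}^{-3}\) (values not taken from the text), recovers \(\beta\) of roughly 0.0005 to 0.5 for diameters of 1000 to 1 \(\mu\)m.

```python
# Ratio of radiation pressure to solar gravity for a spherical grain,
# beta = 5.7e-5 * Q_pr / (rho * s), with rho in g/cm^3 and radius s in cm
# (Burns, Lamy & Soter 1979). Assumed inputs: Q_pr ~ 1, silicate density 2.5 g/cm^3.

RHO_SILICATE = 2.5   # g/cm^3, assumed
Q_PR = 1.0           # radiation-pressure efficiency, assumed ~1 for micron-sized and larger grains

def beta(diameter_um: float, rho: float = RHO_SILICATE, q_pr: float = Q_PR) -> float:
    """Radiation-pressure parameter beta for a grain of the given diameter in microns."""
    radius_cm = 0.5 * diameter_um * 1.0e-4   # micron -> cm
    return 5.7e-5 * q_pr / (rho * radius_cm)

for d_um in (1.0, 10.0, 100.0, 1000.0):
    print(f"d = {d_um:7.1f} um  ->  beta = {beta(d_um):.4f}")

# d = 1000 um gives beta ~ 4.6e-4 and d = 1 um gives beta ~ 0.46,
# consistent with the 0.0004-0.4 range used in the text.
```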
Obviously, the smaller the particles (and, accordingly, the larger \(\beta\)), the lower the probability of their collision with the Sun, because more such particles are carried away by the solar wind; this also means a lower probability of particle collisions with the terrestrial planets. The probability \(p_{\rm E}\) of a dust particle colliding with the Earth turned out to be maximum (\(p_{\rm E}\sim 0.001\)-0.02) at particle diameters of \(\sim 100\) \(\mu\)m. Such values could be orders of magnitude higher than the probability of a planetesimal colliding with a planet for the same initial orbits of planetesimals and dust grains [292; 294], which is explained by the typically smaller orbital eccentricities of such particles compared with planetesimals and by their lower relative velocities when approaching the planet. A large amount of matter, including water and volatiles, could have been delivered by dust particles to the feeding zone of the terrestrial planets immediately after the zone near Jupiter's and Saturn's orbits was cleared of gas but still retained a large amount of dust. Peaks in the distribution of the asteroid dust particles over the semi-major axes of their orbits, which correspond to \(n:(n+1)\) resonances with Earth and Venus, and gaps associated with 1:1 resonances with these planets, are more pronounced for larger particles. The probability of a collision of a trans-Neptunian particle with diameter \(d<10\) \(\mu\)m with the Earth is only several times less than for an asteroid particle of the same size. Dust particles are subject to much less heating upon entry into the atmosphere because they decelerate at high altitudes. Therefore, compared with large bodies, dust particles are considered a more likely means of interplanetary and interstellar transport of complex organic molecules, including biogenic elements and compounds that are part of microorganisms (organogenic substances according to V I Vernadsky [289]). These ideas are consistent with the panspermia hypothesis: dust particles could play an important role in the origin of life on Earth, because they experience much less heating when entering the atmosphere at low incidence angles.
The nature of zodiacal light is directly related to the migration of dust particles. Zodiacal light can be seen from the Earth as a bright diffuse glow, in the shape of a triangle, in the west after dusk and in the east before dawn, whose brightness decreases with increasing elongation. It is caused by an interplanetary dust cloud that lies in the ecliptic plane
and reflects sunlight. The first scientific explanation of this phenomenon was given back in 1683 by Giovanni Domenico Cassini.
Based on the calculated positions and velocities of migrating dust particles that started from various small bodies (asteroids, comets, TNOs), changes in the zodiacal light spectrum, which generally coincides with the solar Fraunhofer spectrum, were considered in the case of scattering by dust particles at various values of the angle, measured at the Earth, between the directions to the Sun and to the dust particle. The shift and width of the characteristic Mg I line were determined. The calculated data [386] were compared with data from WHAM (Wisconsin H-Alpha Mapper) observations of Doppler shifts and the relevant linewidth in zodiacal light [387]. It was shown that the contributions to the zodiacal light made by the cometary particles formed inside and outside Jupiter's orbit, counting the trans-Neptunian particles, are roughly the same, and the contribution of each of these two components to the zodiacal light is approximately 1/3, although it may deviate from 0.3 by 0.1-0.2. The fraction of asteroid dust is estimated as \(\sim 0.3-0.5\). The contribution of particles generated by long-period and Halley-type comets does not exceed 0.1-0.15. The same conclusion can be drawn for particles ejected by Encke-type comets (with \(e\sim 0.8\)-0.9). The average eccentricities of the orbits of zodiacal particles located at 1-2 AU from the Sun, which fit the WHAM observations better, have values in the range of 0.2-0.5, with a more probable value around 0.3.
Most recently, it was suggested, based on measurement data from the Juno spacecraft (USA), that one of the sources of replenishment of the zodiacal cloud dust could be the dissipation of particles from the Martian atmosphere during planetary-scale dust storms [388]. The authors support this hypothesis with computer simulation results.
## 9 Migration of planetesimals in exoplanetary systems
More than 5000 exoplanets have been discovered so far, most of which belong to planetary systems of their parent stars. The current state and key problems of exoplanet research are discussed in monographs [15, 17] and Marov and Shevchenko's review [16]. The topical issues discussed there include the possible habitability of exoplanets, and in particular criteria for the formation of natural conditions suitable for the origin of life. Among these criteria, by analogy with the Solar System, the processes of planetesimal migration in exoplanetary systems can be important because of their role in ensuring the presence of water and volatiles in habitable-zone exoplanets.
Of paramount interest are exoplanetary systems located at the closest distances from the Earth. These, first of all, include the star Proxima Centauri, the third stellar companion (C) in the Alpha Centauri system. It is a red dwarf of spectral type M, located at a distance of 1.302 pc from the Sun. The star has a mass of 0.1, a radius of 0.15, and a visual luminosity of 0.00005 in terms of solar values. The effective temperature of its surface is \(\sim 3000\) K, which is half the solar temperature. This star has a planet comparable to the Earth in mass, Proxima Centauri b, whose orbital radius is 0.05 AU (8 times less than the orbital radius of Mercury) and the orbital period is 11.2 days. However, with a much lower luminosity of the star, this planet is in the habitable zone, which extends in radius from \(\sim 0.042\) to \(\sim 0.082\) AU. The other planets, Proxima Centauri c and d, are outside this zone.
When modeling the migration of planetesimal exocomets in this stellar system, the semimajor axis \(a_{\rm c}\) of the exoplanet c orbit ranged from 0.06 to 0.3 AU (up to 0.7 AU in some test calculations) in [389], and was set as \(a_{\rm c}=1.489\) AU in [390, 391], in agreement with [392, 393]. In [390, 391], we used a symplectic integrator [300] for integration. In [390], two series of calculations were carried out for the MP model. In this model, the planets are regarded as material points (point masses) when integrating, as we discuss in the next paragraph. In the first series of calculations, the initial values of the semi-major axes of the orbits and masses of two exoplanets were chosen as \(a_{\rm b}=0.0485\) AU, \(a_{\rm c}=1.489\) AU, \(m_{\rm b}=1.27m_{\rm E}\), and \(m_{\rm c}=12m_{\rm E}\). For the exoplanet b orbit, the initial eccentricity \(e_{\rm b}\) and inclination \(i_{\rm b}\) were set equal to zero, and for exoplanet c, \(e_{\rm c}=0\) or 0.1 and \(i_{\rm c}=e_{\rm c}/2\) rad. In the second run of calculations, to take later observational data into account, the values \(a_{\rm b}=0.04857\) AU, \(e_{\rm b}=0.11\), \(m_{\rm b}=1.17m_{\rm E}\), \(a_{\rm c}=1.489\) AU, \(e_{\rm c}=0.04\), \(m_{\rm c}=7m_{\rm E}\), and \(i_{\rm b}=i_{\rm c}=0\) were chosen. In both runs of calculations, the exoplanet b and c densities were considered equal to the respective densities of Earth and Uranus. In each version of the calculations, the initial values of the semi-major axes of the orbits of 250 planetesimals ranged from \(a_{\rm min}\) to \(a_{\rm min}+0.1\) AU, where \(a_{\rm min}\) was varied from 1.2 to 1.7 AU with a step equal to 0.1 AU. The initial eccentricities \(e_{\rm 0}\) of the planetesimal orbits were 0 or 0.15 for the first run and \(e_{\rm 0}=0.02\) or \(e_{\rm 0}=0.15\) for the second run, and the initial inclinations of their orbits were equal to \(e_{\rm 0}/2\) rad. In the C model [391], the planetesimals that collided with planets were excluded from subsequent integration. In these calculations, \(a_{\rm min}\) ranged from 0.9 to 2.2 AU, and the rest of the initial data were the same as for the second run of the MP calculations. The considered time interval was usually equal to several hundred Myr for the C model and at least 50 Myr for the MP model.
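For readers who wish to reproduce a setup of this kind, the initial conditions of the second MP run can be written down in a few lines with a general-purpose N-body package. The Python sketch below uses the open-source REBOUND library and its WHFast symplectic integrator (not the integrator of [300] used in the calculations), treats the planetesimals as massless test particles, and takes the orbital elements and masses from the values listed above; the star mass, time step, random angles, and short integration span are illustrative assumptions.

```python
# Minimal sketch (not the code used for the calculations described above):
# Proxima Centauri b and c plus massless planetesimals, integrated with the
# WHFast symplectic integrator of the REBOUND package.
import numpy as np
import rebound

M_EARTH = 3.0e-6           # Earth mass in solar masses
a_min = 1.2                # AU; varied from 1.2 to 1.7 AU in the text
e0 = 0.02                  # or 0.15 in the second set of runs

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.integrator = "whfast"

sim.add(m=0.12)                                          # Proxima Centauri (~0.1 Msun in the text)
sim.add(m=1.17 * M_EARTH, a=0.04857, e=0.11)             # planet b (second MP run values)
sim.add(m=7.0  * M_EARTH, a=1.489,   e=0.04)             # planet c
sim.N_active = 3                                         # only the star and planets exert gravity

rng = np.random.default_rng(0)
for a0 in rng.uniform(a_min, a_min + 0.1, size=250):     # 250 planetesimals per variant
    sim.add(m=0.0, a=a0, e=e0, inc=e0 / 2.0,
            Omega=rng.uniform(0, 2 * np.pi),
            omega=rng.uniform(0, 2 * np.pi),
            f=rng.uniform(0, 2 * np.pi))

sim.move_to_com()
sim.dt = 1e-3                                            # yr; a small fraction of planet b's 11.2-day period
sim.integrate(1.0e4)                                     # short demonstration; the actual runs span >= 50 Myr
```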
In the MP [390] model, planetesimals and exoplanets were regarded as material points, and their collisions were not simulated. The resulting arrays of orbital elements of planetesimals with a step of 100 years were used to calculate the probabilities of their collisions with exoplanets. The probabilities were calculated using a technique similar to that used in [290, 291, 146, 292], but with the suitable exoplanet and star masses. In the case where the collision probability reached 1 (no such cases were observed in calculations for the Solar System), this planetesimal was no longer taken into account in calculating the total probability of collision with an exoplanet. We also calculated the probability \(p_{\rm d}\) for a planetesimal that migrated from the feeding zone of exoplanet c to collide with exoplanet d (\(a_{\rm d}=0.02895\) AU, \(m_{\rm d}=0.29m_{\rm E}\), and \(e_{\rm d}=i_{\rm d}=0\)), although this exoplanet was not considered when integrating the equations of motion.
The orbit of only one of several hundred planetesimals crossed the orbit of exoplanet b, but when this did happen, such a planetesimal collided with this exoplanet quite often. When considering thousands of planetesimals, the value of \(p_{\rm b}\) turned out to be greater than the probability of a planetesimal from the feeding zone of the giant planets in the Solar System colliding with the Earth. In the second run of MP calculations (involving 250 initial planetesimals in each case) with \(e_{\rm 0}=0.02\), the total number of planetesimals was 4500, and in only 5 out of 16 cases were the resultant probabilities of collisions with exoplanet b nonzero. In two variants, \(p_{\rm b}=0.004\) was obtained, while the average value of \(p_{\rm b}\) per planetesimal (over all 4500) was \(4.7\times 10^{-4}\); this average included two planetesimals with \(p_{\rm b}=1\). Of the 16 variants
of the second run of MP calculations, four cases yielded a nonzero value of \(p_{\rm d}\). For the 4500 planetesimals, the average probability of collision with exoplanet d was \(p_{\rm d}=2.7\times 10^{-4}\), but this included one planetesimal with \(p_{\rm d}=1\). For \(e_{\rm 0}=0.15\), nonzero values of \(p_{\rm b}\) were obtained in only three out of six variants of the second run of MP calculations with 250 initial planetesimals (with 1500 planetesimals in total). The mean value for 1500 planetesimals turned out to be \(p_{\rm b}=2.0\times 10^{-3}\) and \(p_{\rm b}=1\) for three of them, and the mean value was \(p_{\rm d}=2.0\times 10^{-3}\), with \(p_{\rm d}=1\) for three planetesimals. In the first run of MP calculations with \(i_{\rm c}=e_{\rm c}=0\) and \(e_{\rm 0}=0.15\), the probabilities \(p_{\rm c}\) of a collision of a planetesimal initially located in the vicinity of exoplanet c with that exoplanet were \(p_{\rm c}=0.06\)-\(0.1\). For \(i_{\rm c}=e_{\rm c}/2=0.05\) and \(e_{\rm 0}=0.15\), this was \(p_{\rm c}=0.02\)-\(0.04\). In the second series of MP calculations, the value of \(p_{\rm c}\) was mainly in the range of 0.1-0.3, except in cases with \(a_{\rm min}=1.4\) AU and \(e_{\rm 0}=0.02\), where \(p_{\rm c}=0.4\)-\(0.8\). After 20 Myr, the increase in \(p_{\rm c}\) was usually small, because few planetesimals were left in elliptical orbits by that time.
Calculations for the C model, in which planetesimals colliding with a planet were excluded from subsequent integration, were done in [391] for initial data similar to those for the second run of MP calculations. In the C model, the probability \(p_{\rm b}\) of a planetesimal from the feeding zone of planet c colliding with planet b was about half that in the MP model; on the contrary, the values of \(p_{\rm c}\) were on average almost twice as large as in the MP model. The C-model probability \(p_{\rm b}\) was estimated to be \(2.0\times 10^{-4}\) for \(e_{\rm 0}=0.02\) and \(10^{-3}\) for \(e_{\rm 0}=0.15\). The total mass of planetesimals delivered from the feeding zone of planet c to planet b was \(m_{\rm c-b}=p_{\rm b}m_{\rm ice}\), and the mass of water in these planetesimals was \(m_{\rm w}=p_{\rm b}k_{\rm ice}m_{\rm ice}\), where \(m_{\rm ice}\) is the total mass of planetesimals beyond the ice line that fell into the feeding zone of Proxima Centauri c and \(k_{\rm ice}\) is the water content in planetesimals. The ratio \(p_{\rm c}/p_{\rm ej}\) of the probability of a planetesimal colliding with planet c to the probability \(p_{\rm ej}\) of the planetesimal being ejected into a hyperbolic orbit at \(e_{\rm 0}=0.02\) and \(e_{\rm 0}=0.15\) was in the respective ranges of 0.8-1.3 and 0.4-0.6 when calculating with the current planet c mass. This ratio was in the ranges of 1.3-1.5 and 0.5-0.6 for the planet c mass equal to half its modern value. An estimate based on the values of \(p_{\rm ej}\) and the energy conservation law showed that the semi-major axis of the planet c orbit could decrease during its formation by at least a factor of 1.5. The calculations showed that planetesimals could collide with planet b, even if the mass of the planet c embryo was one tenth that of the modern planet c mass. Therefore, the total mass of planetesimals ejected into hyperbolic orbits by planet c could be about (3.5-7)\(m_{\rm E}\). For the mass of planet c equal to 7\(m_{\rm E}\), we obtain the mass \(m_{\rm ice}\) of planetesimals in the feeding zone of planet c possibly being at least \(10m_{\rm E}\) and \(15m_{\rm E}\) for the respective values \(e_{\rm 0}=0.02\) and \(e_{\rm 0}=0.15\). For \(m_{\rm ice}=10m_{\rm E}\) and \(p_{\rm b}=2\times 10^{-4}\), we obtain the minimum estimate \(m_{\rm c-b}=2\times 10^{-3}m_{\rm E}\) (for \(e_{\rm 0}=0.02\)) of the total mass of planetesimals delivered to planet b from the feeding zone of planet c. For \(e_{\rm 0}=0.15\), \(m_{\rm ice}=15m_{\rm E}\), and \(p_{\rm b}=10^{-3}\), this bound was \(m_{\rm c-b}=1.5\times 10^{-2}m_{\rm E}\). Large values of \(e_{\rm 0}\) correspond to an increase in the eccentricities of planetesimal orbits due to their mutual gravitational influence. With large masses of planetesimals, the increase in the eccentricities of the planetesimal orbits could be greater. The estimates of \(m_{\rm c-b}\) made above for the modern mass of planet b could be smaller if the planet b mass was less than its modern mass at the time of the considered bombardment. However, it can be assumed that planet b formed faster than planet c, because planet b is much closer to the star than planet c. The amount of matter delivered to planet d could be slightly less than to planet b. As we showed in [394; 186], the probability of collision with Earth for a planetesimal from the feeding zone of the giant planets is of the order of \(10^{-6}\)-\(10^{-5}\), i.e., much less than the values of \(p_{\rm b}\) and \(p_{\rm d}\).
As we can see, the inflow of icy planetesimals to Proxima Centauri exoplanets b and d could be greater than the similar inflow to the Earth.
Some of the material of a planetesimal that collides with an exoplanet is ejected from the exoplanet. It was found in [385] that more than 50% of the impactor's water is lost if the planetesimal collides with the Earth at a speed exceeding the parabolic velocity by more than 1.4 times and the collision angle is greater than \(30^{\circ}\). It was assumed in [396] that solids beyond the ice line should be \(\sim\) 50% water by mass. According to [62], the fraction of ice in comet 67P is in the range of 14-33%. In [397], it was assumed that, although the volume fraction of water in comet 67P and TNOs is about 20%, the bodies formed near the ice line contained more water than the TNOs did. In studies of the maximum water content resulting from late accretion on TRAPPIST-1 planets [398], it was assumed that late impactors contained 10% water by mass. Generalizing the above data, we can assume that the fraction of water in planetesimals in the feeding zone of Proxima Centauri c could be 10 to 50%. The mass of water delivered to Proxima Centauri b could exceed the mass of Earth's oceans.
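As a simple check on the last statement, the delivered water mass follows directly from the relations given above (\(m_{\rm c-b}=p_{\rm b}m_{\rm ice}\) and \(m_{\rm w}=p_{\rm b}k_{\rm ice}m_{\rm ice}\)). The Python sketch below plugs in the C-model value \(p_{\rm b}\approx 10^{-3}\) for \(e_{0}=0.15\), \(m_{\rm ice}=15m_{\rm E}\), and the 10-50% water fractions quoted above; the mass of Earth's oceans (\(\approx 1.4\times 10^{21}\) kg, i.e., \(\approx 2.3\times 10^{-4}m_{\rm E}\)) is a standard value not taken from the text.

```python
# Worked estimate of the water delivered to Proxima Centauri b from the feeding zone
# of planet c, using the formulas and numbers quoted in the text.
M_OCEAN_IN_EARTH_MASSES = 1.4e21 / 5.97e24   # Earth's oceans ~1.4e21 kg (standard value, not from the text)

p_b   = 1.0e-3      # C-model collision probability with planet b for e0 = 0.15
m_ice = 15.0        # total planetesimal mass beyond the ice line, Earth masses
k_ice = (0.1, 0.5)  # assumed water mass fraction of the planetesimals (10-50%)

m_delivered = p_b * m_ice                    # total planetesimal mass delivered, Earth masses
for k in k_ice:
    m_water = k * m_delivered                # water mass delivered, Earth masses
    print(f"k_ice = {k:.1f}: water ~ {m_water:.1e} m_E "
          f"~ {m_water / M_OCEAN_IN_EARTH_MASSES:.0f} Earth-ocean masses")
# Even for k_ice = 0.1 the delivered water exceeds the mass of Earth's oceans,
# as stated in the text.
```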
After the formation of Proxima Centauri c, some planetesimals could continue to move in stable elliptical orbits inside its feeding zone, mostly cleared of planetesimals, although hundreds of millions of years have passed since the beginning of calculations. Such planetesimals typically moved in some resonances with planet c, for example, 1:1 (like Jupiter's Trojans), 5:4, and 3:4, and had low eccentricities. Some planetesimals that moved for a long time (1 to 2 Myr) along chaotic orbits were captured in 5:2 and 3:10 resonances with Proxima Centauri c and remained there for at least tens of millions of years [399].
The mixing of planetesimals in the TRAPPIST-1 exoplanetary system was studied in [400] at the late, gas-free stage, when the planets were nearly fully formed. The TRAPPIST-1 system consists of a star with a mass equal to 0.0898 solar masses and seven planets located relatively close to each other. The motion of planetesimals under the gravitational influence of a star and seven planets (from b to h in the order of their distance from the star) was studied similarly to the C model analysis for the Proxima Centauri system. In each of the calculation versions, the initial orbits of planetesimals were in the vicinity of the orbit of one of the planets (the host planet) and had the same eccentricities, equal to \(e_{\rm 0}\). No more than 3.2% of planetesimals were ejected into hyperbolic orbits. Usually, there was no ejection of planetesimals for the b-d disks. More than half the planetesimals of the b-g disks collided with planets in less than 1000 years, and in less than 250 years for the b-d disks. The time of evolution of the b-h disks varied from 11 thousand years to 63 Myr. The fraction of planetesimals that collided with the host planet was 0.36-0.8 for \(e_{\rm 0}=0.02\) and 0.22-0.75 for \(e_{\rm 0}=0.15\). The fraction of planetesimal collisions with the host planet was typically smaller for disks that were more distant from the star. In each version of the calculations, there was at least one planet for which the number of planetesimal collisions exceeded 25% of the number of planetesimal collisions with the host planet. Planetesimals could collide with all planets for the d-h disks, and at least with the b-e planets for b-c disks. Therefore, the outer layers of neighboring planets in the TRAPPIST-1 system
can include similar material if there were many planetesimals near their orbits at the late stages of planetary formation. For comparison, we concluded in Section 3.2 that, due to the mixing of planetesimals, such a scenario could to a certain degree affect the formation of the terrestrial planets.
## 10 Conclusion
The migration of small bodies is among the most important dynamical processes in the Solar System and has played a key role in its formation and evolution. We have extensively studied these processes using the results of numerical simulations of migration. We have discussed the issues of the evolution of the protoplanetary disk, the formation of planets, the formation of the asteroid and trans-Neptunian belts, and the role of planetesimal migration processes in the formation and growth of planetary embryos in the emerging Solar System. Models of the isolated formation of the terrestrial planets and models of their formation taking into account the influence of the giant planets, the migration of planetesimals and planetary embryos in the feeding zone of the giant planets, and the time scales of the corresponding processes have been discussed. It is shown that Earth and Venus could acquire more than half of their mass in 5 million years, and their outer layers could accumulate the same material from different parts of the feeding zone of these planets. At the final stages of the formation of the terrestrial planets, planetesimals initially located at a radial distance of 1.1 to 2.0 AU could enter the composition of Earth and Mars in a ratio not much different from the mass ratio of these planets.
Models of asteroid and comet migration from the asteroid belt and zones beyond the orbits of Jupiter and Neptune to the Earth and the terrestrial planets have been discussed. Asteroids and comets, enriched in water and volatiles, could make a decisive contribution to the formation of the hydrosphere and atmosphere of Earth, which emerged in the high-temperature zone of the protoplanetary disk, where volatiles are not retained. At the same time, the considered migration models of bodies approaching or crossing Earth's orbit (bodies genetically and evolutionarily associated with all zones of the Solar System) are directly relevant to the urgent problem of the ACH for our planet.
We critically reviewed modern models of the origin of the Moon, including the popular mega-impact model, the multi-impact model of planetesimal collisions with Earth's embryo, and the model of the formation of Earth's and Moon's embryos as a result of contraction of a rarefied dust cluster. The best substantiated model is the one according to which the angular momentum of the clump necessary for the formation of the Earth-Moon system embryos was acquired during the collision of two initial clumps, and most of the matter that entered the Moon's embryo could be ejected from Earth during numerous collisions of planetesimals with it.
We also analyzed existing models of the migration of the giant planets in the early Solar System. Based on data from the analysis of the composition of the giant planets, it was shown previously that large embryos of Uranus and Neptune formed near Saturn's orbit. The results of numerical calculations indicate that such embryos could migrate to their modern orbits under the influence of gravitational interactions with planetesimals. Most of the planetesimals were then ejected into hyperbolic orbits, and the semimajor axis of Jupiter's orbit decreased (and in the Grand Tack model, increased after some time).
The migration of small bodies is of key importance in studying the possibility of the formation of favorable natural conditions for the origin of life not only in the Solar System but also in exoplanetary systems. Exogenous sources of water on the Earth could include the migration of bodies from the outer part of the main asteroid belt and the migration of planetesimals from beyond the orbit of Jupiter, the feeding zones of the giant planets, and trans-Neptunian space, including the Kuiper belt. The mass of water delivered to Earth from these sources could, according to estimates, be comparable to the mass of water in Earth's oceans. Per unit mass of the planet, this mass was almost the same for Venus and for Earth, and about two to three times greater for Mars. These estimates support the hypothesis of the possible existence of ancient oceans on Mars and Venus.
We also discussed the migration of dust in the Solar System and sources of the zodiacal cloud. It is shown that the contribution to the zodiacal light made by cometary particles formed inside and outside Jupiter's orbit (counting the trans-Neptunian particles) is approximately the same, and the contribution of each of these two components to zodiacal light is \(\sim 1/3\), with a possible deviation of 0.1-0.3. The fraction of asteroid dust is \(\sim 0.3-0.5\). The contribution of particles from long-period and Halley-type comets does not exceed 0.10-0.15. A similar conclusion can be drawn for particles generated by Encke-type comets. The mean eccentricities of the orbits of zodiacal particles located at 1-2 AU from the Sun have values in the range from 0.2 to 0.5.
The likely processes of planetesimal migration in exoplanetary systems have been discussed. Based on numerical modeling under a number of initial assumptions, it was concluded that the inflow of icy planetesimals to inner exoplanets in the Proxima Centauri system could be greater than the similar inflow to Earth.
###### Acknowledgements.
Sections 1-7 (except Section 3.4) are devoted to studies of the processes of migration and formation of planets and small bodies of the Solar System and were financially supported by the Russian Foundation for Basic Research in the framework of research project 20-12-50142. Studies of the formation of the Earth-Moon system (Section 3.4) were supported by the Russian Science Foundation project 21-17-00120, [https://rscf.ru/project/21-17-00120/](https://rscf.ru/project/21-17-00120/). Studies of dust migration (Section 8) were carried out in the framework of the state assignment 0137-2019-0004 at the Vernadsky Institute for Geochemistry and Analytical Chemistry, Russian Academy of Sciences. Research on the migration of planetesimals in exoplanetary systems (Section 9) was supported by the Ministry of Science and Higher Education of the Russian Federation grant 075-15-2020-780, "Theoretical and experimental studies of the formation and evolution of extrasolar planetary systems and characteristics of exoplanets." The authors are grateful for this support. We are grateful to the anonymous referee for the useful comments that contributed to the improvement of the content of this paper.
|
2302.02273 | Plasma Agriculture: A green technology to attain the sustainable
agriculture goal | The agriculture sector has many issues such as reductions of agricultural
lands, growing population, health issues arising due to the use of synthetic
fertilizers and pesticides, reduction in soil health due to extreme use of
synthetic chemicals during farming, etc. The quality and quantity of foods
required for living things are affected by many factors like scarcity of
nutrient-rich soils, lack of suitable fertilizers, harmful insects and bugs,
climate change, etc. There is a requirement to supply the proper nutrients to
plants/crops for obtaining a high crop yield. Synthetic chemical fertilizers
provide nutrients (macro and micro) to plants for their growth and development
but the excess use of them is not good for a healthy lifestyle as well as for
the environment. In recent years, non-thermal plasma (NTP) is considered as an
advanced green technology for enhancing productivity in agriculture sectors. In
this report, we provided the details of nutrients and their functions in the
growth and development of plants/crops. How plasma technology can resolve many
future challenges in the agriculture sector is discussed in detail. A few
experiments on seed germination and plant growth (root and shoot length) were
performed in the laboratory to explore the effect of plasma-activated water on
the growth and development of plants. These primary results demonstrate the
great potential of plasma technology in the agriculture sector. | Tanvira Malek, Mangilal Choudhary | 2023-02-05T01:07:24Z | http://arxiv.org/abs/2302.02273v1 | # Plasma Agriculture: A green technology to attain the sustainable agriculture goal
###### Abstract
The agriculture sector has many issues such as reductions of agricultural lands, growing population, health issues arising due to the use of synthetic fertilizers and pesticides, reduction in soil health due to extreme use of synthetic chemicals during farming, etc. The quality and quantity of foods required for living things are affected by many factors like scarcity of nutrient-rich soils, lack of suitable fertilizers, harmful insects and bugs, climate change, etc. There is a requirement to supply the proper nutrients to plants/crops for obtaining a high crop yield. Synthetic chemical fertilizers provide nutrients (macro and micro) to plants for their growth and development but the excess use of them is not good for a healthy lifestyle as well as for the environment. Plants need significant amounts of macro-nutrients (nitrogen, phosphorous, potassium, urea, etc.) and some micro-nutrients (iron, sulfur, magnesium, zinc, etc.) for growth and development through various physiological and metabolic processes of the plant system. Along with the nutrients, there is also a demand to control the harmful microbes, insects, pests, etc. during the growth of plants for increasing the crop yield. In recent years, non-thermal plasma (NTP) is considered as an advanced green technology for enhancing productivity in agriculture sectors. The plasma-treated water (PAW) can help in enhancing seeds germination, increasing the rooting speed, stimulating plant growth, deactivating microbes/bugs, etc. The atmospheric pressure plasma (NTP) contains energetic electrons, UV radiation, and various reactive nitrogen and oxygen species. During the plasma-water interaction, these reactive species in the gaseous form get dissolved into water and it becomes rich in nitrogen compounds (N-content). These nitrogen compounds in plasma-treated water act as fertilizer for plants to keep them healthy but PAW does not have some essential plant nutrients like potassium, phosphorus, sulfur, iron, magnesium, etc. Therefore, it is required to add such nutrients in addition to nitrogen compounds in the plasma-treated water to use it as a nutrient-rich fertilizer. In this report, we provided the details of nutrients and their functions in the growth and development of plants/crops. How plasma technology can resolve many future challenges in the agriculture sector is discussed in detail. A few experiments on seed germination and plant growth (root and shoot length) were performed in the laboratory to explore the effect of plasma-activated water on the growth and development of plants. These primary results demonstrate the great potential of plasma technology in the agriculture sector.
**Keywords:** Low-temperature plasma, plasma-agriculture, corona discharge, plasma-activated water, seed germination
## I Introduction
A continued increase in the demand for food caused by exponential population growth poses a serious challenge for humankind across the globe. Plants/crops regularly face various stresses such as shortage of water, water-logging, toxicity, high salinity, and excessive temperatures in some regions. These stresses have a significant effect on crop yield. Another big issue for the agriculture sector is ongoing climate change. Climate change has a negative impact on the availability of food, access to food, and the quality of food due to polluted air and high temperatures, etc. At the same time, agricultural land is continuously being reduced due to industrialization and urbanization. All these factors could be responsible for a shortage of food in the future [1; 2; 3; 4; 5]. There is a demand for improving the sustainability of agriculture and, at the same time, a need to reduce the adverse effects of agriculture on the environment. To achieve these goals, new eco-friendly technologies that can enhance productivity while maintaining food quality and safety are required. With the help of these green technologies in agriculture, it is possible to increase crop yields by enhancing productivity without damaging the environment or compromising human health. Crop productivity can be increased by keeping crops/plants healthy and providing the required nutrients along with water for growth and development. The use of high-quality seeds, fertilizers, pesticides, insecticides, suitable soil for plants/crops, etc. is the major deciding factor for the growth and development of healthy crops/plants. There is also a need to use good-quality seeds for higher crop yields [1; 4]. In recent years, researchers have been working on new technologies to modify seed morphology, increase the protein level, deactivate seed microbes, etc. to improve the seed germination rate and achieve healthy and vigorous growth of plants/crops [6; 7; 8]. Among these new technologies in agriculture, low-temperature or non-thermal plasma technology has become a popular green technology for use in the agriculture sector. Non-thermal plasma (NTP) technology has received considerable attention in recent years due to its increasing applications in the treatment
of seeds and plants for enhancing germination and growth rates [9; 10]. The objective of this project was to identify the role of different macro- and micro-nutrients in the growth and development of plants/crops and to review plasma technology along with the challenges in its adoption by farmers. In this report, we discuss the nutrients required for the growth and development of healthy plants/crops in Sec. II. An introduction to plasma and its interaction with water is given in Sec. III. Could we use plasma technology in farming? The answer to this question is given in Sec. IV. The challenges in the implementation of plasma technologies from lab to field are discussed in Sec. V. The factors affecting seed germination and the selection of seeds for the experimental study are presented in Sec. VI. The experimental setup and methods are discussed in Sec. VII. Primary experimental findings on seed germination and plant growth are presented in Sec. VIII. Concluding remarks along with future perspectives on plasma agriculture are given in Sec. IX.
## II Nutrient requirements for plants
It is well known that, in scaling up plasma technology from the lab to the farm, the plant/crop physiology needs to be reviewed in detail. There are many processes in plants, such as the transport of minerals and nutrients, photosynthesis, respiration, etc., which help in the overall growth and development of plants. Photosynthesis is essential to produce food (ATP and NADPH) for plants. In this process, \(CO_{2}\) is fixed in the presence of incident sunlight. The intensity of incident light, carbon dioxide concentration, environmental temperature, and water are the major factors that modify photosynthesis rates in green plants/crops. The food synthesised by the leaves and the minerals and nutrients absorbed by the roots have to be moved to all parts of the plant. There are various transport processes in plants, such as diffusion, facilitated diffusion, the transpiration stream, and active transport, which carry water, mineral salts, some organic nitrogen, and hormones from the roots to the aerial parts of the plant and synthesised food from the leaves to the other parts of the plant. It is a fact that all plants/crops need some absolutely essential nutrients for growth and development. These elements are divided into two broad categories based on their quantitative requirements for plants/crops: (I) macro-nutrients and (II) micro-nutrients. The macro-nutrients include carbon, hydrogen, oxygen, nitrogen, phosphorus, sulphur, potassium, calcium and magnesium. The micro-nutrients include iron, manganese, copper, molybdenum, zinc, boron, chlorine and nickel. Apart from carbon, hydrogen and oxygen, nitrogen is the most prevalent element in living organisms. Nitrogen is a constituent of amino acids, proteins, hormones, chlorophyll and many vitamins. Nitrogen is absorbed by the roots in the form of \({NO_{3}}^{-}\) and \({NH_{4}^{+}}\) and transported to all parts of the plant for growth and development. Absorption of \({NO_{3}}^{-}\) and \({NH_{4}^{+}}\) is mainly affected by the concentration of these ions, temperature, pH of the soil, etc. Phosphorus, potassium, sulphur, calcium, magnesium, zinc, iron, copper, etc. are all absorbed by the plants from the soil in the form of their ions. All these nutrients are involved in different reactions which are essential for the growth and development of healthy plants [11; 12; 13; 14]. For example, phosphorus is required for phosphorylation reactions; potassium helps to maintain an anion-cation balance in cells and is involved in protein synthesis; calcium is involved in the functioning of the cell membrane and activates certain enzymes to regulate metabolic activities; sulphur is the main constituent of several enzymes and vitamins; magnesium activates the enzymes of respiration and photosynthesis and maintains the ribosome structure; iron is essential for the formation of chlorophyll; and chlorine is essential for the water-splitting reaction in photosynthesis. It can be concluded that a lack of macro- and micro-nutrients can hinder the growth and development of plants and result in low crop yield at higher input costs [11; 12; 15].
## III Non-thermal plasma and its interaction with water
As we know, plasma is one of the four common states of matter. It is an electrically conducting ionized gas consisting of charged particles (electrons and ions) that are not free: the motion of each charged particle is affected by the electric and magnetic fields of the other moving charges. Plasma is created when gas atoms are ionized by supplying external energy (electrical energy). Based on the average energy of electrons and ions (neutrals), plasma can be characterized as thermal plasma or non-thermal plasma. In thermal plasma, the energies of electrons and ions (neutrals) are very large and nearly equal (\(T_{e}=T_{i}\) or \(T_{n}\)), whereas in non-thermal plasma the average energy of electrons is much higher than that of ions (\(T_{e}>>T_{i}\)). These non-thermal plasmas are in a non-equilibrium state and therefore have many advantages for application in various sectors [16; 17]. Non-thermal plasma (air or \(N_{2}/O_{2}\)), either in the gaseous phase or as plasma-treated water, has great potential to contribute to the agriculture and food industries. A non-thermal plasma discharge, as shown in Fig. 1 (a), is a source of visible and UV radiation, energetic electrons, excited atoms and molecules, various reactive oxygen and nitrogen species (RONS), various radicals, etc. [17; 18; 19; 20; 21].
If non-thermal plasma (NTP) interacts with water or liquid solution then reactive oxygen and nitrogen species (RONS), energetic electrons, and radiations generated by NTP or atmospheric pressure plasma in the gaseous phase are transported through the plasma-liquid interface into the water (solution). The water or water solution after interaction with plasma is termed "plasma-activated water (PAW) or solution". The plasma
activated water has a different chemical composition than untreated water or a simple water solution: it contains superoxide (\(O_{2}^{\cdot-}\)), hydroxyl radicals (\(OH^{\cdot}\)), oxides of nitrogen (\(NO_{2}^{-}\), \(NO_{3}^{-}\), \(NO_{2}\), \(NO\)), hydrogen peroxide (\(H_{2}O_{2}\)), ozone (\(O_{3}\)), singlet oxygen (\({}^{1}O_{2}\)), hypochlorous acid (\(HOCl\)), etc. A schematic representation of the reactive species before and after the plasma-water interaction is shown in Fig. 1 (b). In other words, we can say that plasma-treated water contains significant amounts of reactive oxygen and nitrogen species [22; 23; 24; 25].
## IV Non-thermal plasma as an alternative in the agriculture sector
It was discussed in the previous section that, in the presence of sunlight, plants produce food using carbon dioxide and water. It is a fact that maximum photosynthesis takes place in the red and blue light of the visible spectrum and minimum photosynthesis takes place in green light. The dielectric barrier discharge (DBD) is one of the popular non-thermal plasma sources that can be used to generate a visible spectrum of radiation using a mixture of suitable gases to promote the photosynthesis process in plants/crops. In other words, the non-thermal plasma source can be used as a source of the visible spectrum that is essential for the photosynthesis reactions in plants [18; 19; 20]. Apart from the visible spectrum of light, UV radiation also plays a major role in the growth of plants: UV-A and UV-B (wavelengths 315-400 nm and 280-315 nm, respectively) are responsible for healthy and vigorous growth of plants, but an excess amount of UV-C (100-280 nm) can decrease the photosynthesis process. Non-thermal plasma in the gaseous phase contains a spectrum of UV radiation that can be useful for the growth and development of plants/crops. Thus, atmospheric pressure plasma (a mixture of gases) can be used as an artificial source of sunlight to initiate the photosynthesis process, which is essential for the growth and development of plants/crops. As discussed above, plasma-treated water or solution has many RONS (radicals and non-radicals) which work as fertilizers, pesticides, and sources of macro-nutrients for the better growth of plants. Plasma-activated water is rich in nitrogen content (\(NO_{2}^{-},NO_{3}^{-},NH_{4}^{+}\)), which can make it a promising organic alternative to conventional chemical nitrogen fertilizers. Plasma-treated water or water solution shows antibacterial and fungicidal properties because of the presence of ozone (\(O_{3}\)), which works as a disinfectant, and hydrogen peroxide (\(H_{2}O_{2}\)), which works as a pesticide [25; 26; 27; 28; 29; 30]. In summary, non-thermal plasma or atmospheric pressure plasma has great potential to replace conventional chemical fertilizers/pesticides and sources of light in futuristic agricultural developments.
## V Non-thermal plasma technology and challenges
After a literature survey of plant physiology and non-thermal plasmas (or atmospheric-pressure plasmas), we found that it is possible to fulfill all the requirements for plants/crops to grow using non-thermal plasma alone as a source of UV radiation, visible light, macro-nutrients, pesticides, fertilizers, etc. In recent years, many research groups around the globe have started working on the application of low-temperature plasma in treating seeds, increasing the germination rate of seeds [31; 32; 33; 34; 35; 36], enhancing the growth of plants/crops [37; 38; 39], deactivating microbes on fruits/vegetables [26; 28; 29; 30; 40], and treating agricultural soils [41; 42]. There is great potential for plasma technology to improve crop yields by implementing it at various stages of the plant/crop life cycle, but there are many challenges in scaling up the technology from the lab to the field [43; 10; 27; 45]. As discussed, plasma-activated water (rich in N-content) can be used as a liquid fertilizer in place of nitrogenous chemical fertilizers [25; 27; 22; 9], but at the same time plants/crops need other macro-nutrients (potassium, phosphorus, sulfur, etc.) and micro-nutrients (magnesium, zinc, iron, calcium, etc.). Plasma-activated water does not contain these macro- and micro-nutrients that are essential for the growth and development of plants. Therefore,
Figure 1: (a) A representation of non-thermal plasma and its constituents, (b) Non-thermal air plasma after interaction with water
we need to add these macro- and micro-nutrients to the plasma-activated water to make it a complete liquid fertilizer. The second issue is the high energy cost of treating water with plasma sources. Farmers are unlikely to adopt the costly liquid fertilizer in place of chemical fertilizer until its cost is reduced. We have started working on resolving some of these issues and will discuss them in detail in an upcoming research article. Using renewable energy sources such as solar cells, wind turbines, etc., to operate the plasma reactors for water treatment could be a solution to the high energy cost. Specific plasma reactor designs and the use of appropriate discharges to treat the water or water solution can also help reduce the energy cost of plasma reactors [44].
## VI Seed selection and factors affecting germination
In the present work, our focus was to study the seed germination rate with and without plasma-treated tap water. There were many open questions in mind before starting the experiments, such as the selection of seeds, normal germination time, seed anatomy, etc. Before selecting a seed, we must have basic knowledge of plant families. Knowledge of plant families can help us understand the germination behavior of seeds from a particular family: if we know the characteristics of any one plant/crop of a particular family after plasma treatment, it is easier to infer the characteristics of the whole family. As per the season (May-June), we found that the legume family (moong, clovers, cowpeas, pulses, groundnut, etc.) and the gourd family (cucumber, pumpkin, melons, bottle gourd, watermelons, etc.) are suitable for the experiment because of their capacity to grow in the summer season. We selected the moong (Vigna radiata) crop from the legume family for the experiments due to its short germination time. Water uptake is essential for seed germination. Apart from water uptake, moisture, temperature, oxygen, and light are the main factors affecting seed germination [11; 15]. We performed all experiments keeping this information about seed germination in mind.
## VII Experimental setup and methods
In the present study, we used a commercially available high-voltage (\(V_{p-p}=6\) kV), low-current (\(<1\) A) power supply to treat the tap water. A schematic diagram of the experimental setup is shown in Fig. 2(a). The power supply provides a provision to set the on and off times of the sparking between the high-voltage (H.V.) electrode and the grounded cathode. A high-melting-point alloy wire of diameter 3 mm was used as the H.V. electrode (anode), and a rectangular (40 mm \(\times\) 20 mm) grounded aluminum electrode was used as the cathode (see Fig. 2(b)). A high-voltage probe and a coil loop (a typical current transformer) were used to observe the applied voltage and the corresponding current profile during the discharge. The applied voltage and the corresponding plasma current bursts (pulses) are shown in Fig. 3(a) and Fig. 3(b), respectively. A thermometer was used to measure the temperature and a pH meter to measure the acidity or basicity (pH value) of the plasma-treated water at different plasma treatment times. Good-quality moong seeds (Vigna radiata) were purchased from the market. Other equipment and accessories such as beakers, pipettes, Petri dishes, a weighing machine, a stopwatch, etc., were used for conducting the primary experiments on seed germination with plasma-treated water.
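Where the applied voltage and current pulses of Fig. 3 are available as sampled traces, the energy deposited per discharge pulse can be estimated by integrating their product over time. The short sketch below (Python/NumPy) illustrates this with purely hypothetical, synthetic waveforms; it is not part of the measurement procedure used here.

```python
import numpy as np

def pulse_energy(v, i, dt):
    """Energy (J) of one discharge pulse from sampled voltage v [V] and current i [A]."""
    p = np.asarray(v) * np.asarray(i)   # instantaneous power [W]
    return np.trapz(p, dx=dt)           # time integral of power [J]

# Illustrative 2-microsecond decaying pulse sampled every 10 ns (not measured data).
dt = 10e-9
t = np.arange(0.0, 2e-6, dt)
v = 6e3 * np.exp(-t / 5e-7)             # ~6 kV peak, decaying
i = 0.8 * np.exp(-t / 5e-7)             # < 1 A peak, decaying
print("energy per pulse ~ %.2f mJ" % (pulse_energy(v, i, dt) * 1e3))
```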
The effect of plasma-activated water on seed germination is assessed by measuring various parameters such as the pH of the plasma-activated solutions, the water temperature, the seed germination ratio, the root and shoot lengths, etc. The seed germination ratio is the number of germinated seeds divided by the total number of seeds in a sample. The root and shoot lengths were measured as shown in Fig. 4 for some of the germinated seeds. The following steps were taken in performing the experiment and measurements:
* An appropriate volume of tap water is taken in a glass beaker.
* The two electrodes (cathode and anode) of the power supply are introduced into the beaker (200 ml) using an insulating feed-through. The cathode (grounded) is dipped into the water, while the high-voltage electrode (anode) is kept floating 2 to 3 mm above the liquid surface.
* The H.V. power supply is turned on and the on and off times for water treatment are set using the timer knob.
* A non-thermal air transient spark discharge plasma forms between the H.V. electrode and the water surface and diffuses into the water. The water is treated through this plasma-water interaction, yielding plasma-activated water.
* After the desired plasma treatment, the power supply is turned off and the plasma-activated (treated) solution is used for further application.
* The pH of the plasma-treated water is first measured for different treatment times, and the water is then used to study seed germination.
* The plasma-treated water (25 ml) is poured into a Petri dish and 20 seeds of Vigna radiata (moong) are added.
* The germination of the seeds is tracked at different time intervals (hours or days).
* The seed germination coefficient, root length, and shoot length of the growing plants are measured at different time intervals (hours or days).
## VIII Experimental results on seed germination
In the first set of experiments, 25 ml of tap water was treated with atmospheric-pressure air plasma for different durations. We prepared 5 samples of plasma-treated water (25 ml each) based on the treatment time. The water (25 ml) treated by plasma for 1 min is named PAW 1min. Similarly, PAW 2min, PAW 4min, PAW 6min, and PAW 8min were prepared and poured into different Petri dishes. Then, 20 seeds of Vigna radiata (moong) were added to each Petri dish containing plasma-activated water. The effect of plasma-activated water on seed germination and plant growth (root and shoot length) on different days is shown in Fig. 5. The seed germination rate was tracked every 24 hours for four to five days. The seed germination data are given in Table I. Water treated with plasma for 2 min and normal water show at least 95% germination, whereas the plasma-treated water samples PAW 4min, PAW 6min, and PAW 8min show approximately 90%, 60%, and 40% germination, respectively. The root and shoot lengths were measured two days after soaking the seeds. Nearly 8 to 10 germinated seeds were taken to measure the root and shoot length on the third day (nearly 72 hours) after soaking. After measuring the lengths of the germinated, grown seeds, we averaged all these measured lengths and plotted the data in Fig. 6 for the different plasma-treated water samples. We observed that seeds grown in PAW 2min have a larger (average) root and shoot length compared to the other plasma-activated water samples.
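The germination percentages reported in Table I follow directly from the germinated/total seed counts; as a simple illustration of that arithmetic, the sketch below (Python) reproduces the Experiment-I values.

```python
# Germination percentage = (germinated seeds / total seeds) x 100, Experiment I counts (Table I).
counts = {
    "Normal water": (19, 20),
    "PAW 2 min":    (20, 20),
    "PAW 4 min":    (19, 20),
    "PAW 6 min":    (14, 20),
    "PAW 8 min":    (8, 20),
}

for sample, (germinated, total) in counts.items():
    print("%-12s %d/%d -> %.0f %%" % (sample, germinated, total, 100.0 * germinated / total))
```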
The root and shoot lengths of seeds soaked in PAW 6min and PAW 8min are very small compared to those of seeds soaked in normal water and in the PAW 2min or PAW 4min solutions. We can also see the growth of the root and shoot of the plant (crop) on different days (hours) in the images shown in Fig. 5. We observe the maximum growth of plants in water that
Figure 4: Measurement technique for root and shoot length of a plant
Figure 3: (a) Applied transient high voltage pulses (b) Plasma current pulses
Figure 2: (a) A Schematic diagram of the experimental setup (b) Image of plasma-water interaction and seeds used in the experimental study
was treated for 2 min and 4 min, and the minimum in the 8 min treated water sample. This clearly indicates that plasma-activated water has a negative effect on seed germination as well as on plant growth when the water is treated for too long.
We performed another set of experiments with the same volume of water, the same duty cycle, and the same treatment times to explore the effect of the surrounding environment (temperature and humidity). This set of experiments was performed 15 days later, when the surrounding environment had changed. We recorded a change in temperature of 6 to 8 \({}^{\circ}\)C between the first and second sets of experiments. The results obtained for seed germination and plant growth in this experiment were slightly different from those obtained previously. The seed germination progress and plant growth in the different plasma-activated water samples over time are depicted in Figure 7. The effect of plasma-activated water on the growth of the plants (crop) can be seen in the images of this figure. The average root and shoot lengths were measured on the fourth day (96 hours) after soaking the seeds in the plasma-treated water samples. The average root and shoot length data are plotted in Figure 8. The maximum root and shoot growth was observed in the PAW 2min and PAW 4min samples and the minimum in PAW 8min.
We have already discussed the role of the moisture (humidity) and temperature of the surrounding environment in the seed germination rate. A difference in the surrounding air temperature can also change the pH of the water and the concentrations of dissolved reactive species; therefore, we expect slightly different root and shoot lengths in plasma-activated water in the second experiment. The pH of the plasma-treated water decreases with increasing treatment time, as shown in Fig. 9. We expect the nitrogen and oxygen compounds dissolved in the plasma-
Figure 5: Images of growth of Vigna radiata (moong seeds) in plasma-activated water samples at different times
Figure 6: Variation of root and shoot length (both) on the third day (after 72 hours) of soaking seeds in different plasma-activated water samples
treated water to control the seed germination rate and the growth of the plants. The amount of nitrogen compounds is expected to increase with longer treatment times, which is reflected in the decreasing pH value of the solution.
Figure 8: Average root and shoot length on the fourth day (after 96 hours) of soaking seeds in different plasma-activated water samples
\begin{table}
\begin{tabular}{|l|c|c|c|c|}
\hline \hline
Sample name & Germination ratio & Germination percentage & Germination ratio & Germination percentage \\
 & (Experiment I) & (\%) & (Experiment II) & (\%) \\
\hline
Normal water & 19/20 & 95 & 20/20 & 100 \\
PAW 2 min & 20/20 & 100 & 20/20 & 100 \\
PAW 4 min & 19/20 & 95 & 18/20 & 90 \\
PAW 6 min & 14/20 & 70 & 12/20 & 60 \\
PAW 8 min & 8/20 & 40 & 9/20 & 45 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Seed germination percentage in untreated and plasma-treated water
Figure 7: Growth of Vigna radiata (moong seeds) in plasma-activated water samples
Figure 9: Variation of pH of plasma activated water with time.
## IX Conclusion and future perspective
The main findings of this project, in light of previous studies and this primary study on plasma agriculture, can be summarized as follows:
* Low-temperature plasma technology has the potential to increase crop productivity without the use of synthetic chemical fertilizers.
* It is possible to fix atmospheric nitrogen using plasma sources (gas discharges) and to dissolve the resulting nitrogen compounds into tap water to make a plasma-activated liquid fertilizer.
* Low-temperature plasma can also be used as a source of visible light, UV radiation, and pesticides for the growth and development of plants.
* Plasma-activated water can be used to increase the seed germination rate and percentage.
* The chemical composition of plasma-activated water strongly depends on the surrounding environment.
We discussed that plasma-activated water contains different reactive nitrogen and oxygen species. \(H_{2}O_{2}\) is known as a signaling molecule in plant cells and plays a significant role in seed germination. It also regulates plant growth and development through various chemical reactions. A proper concentration of \(H_{2}O_{2}\) softens the seed coat and allows the seed to absorb more oxygen, which increases the seed germination speed. A higher concentration of \(H_{2}O_{2}\) may be one of the causes of the low seed germination observed in the present study at longer treatment times (PAW 6min and PAW 8min). Ozone (\(O_{3}\)) acts as a disinfectant and reduces cellular toxification; its concentration also regulates the growth of plants. We know that \(NO_{3}^{-}\) is one of the absorbable forms of nitrogen for plants/crops. Nitric oxide also promotes seed germination provided its concentration in the plasma-treated water does not exceed a suitable level [11; 27; 45]. The concentrations of \(H_{2}O_{2}\), \(NO_{3}^{-}\), \(NH_{3}\), \(O_{3}\), etc., strongly depend on the plasma treatment time. Therefore, we expect higher amounts of these compounds in water treated with plasma for longer times (PAW 6min or PAW 8min).
We also noticed a change in the pH of the plasma-treated water with the changing temperature of the tap water, which could be due to a change in the ionization process as the solution temperature increases. Seed germination is also strongly affected by acidity (low pH). Therefore, the germination ratio decreases as the plasma interaction time increases. Hence, we obtain a lower seed germination rate and less plant growth (root and shoot length) in water plasma-treated for 6 or 8 min.
Plasma-treated water is considered an N-rich liquid organic fertilizer [9; 10; 27]. It has been discussed that fertilizers are required to promote the growth and development of plants/crops. Accordingly, we observed a higher growth rate (root and shoot) in plasma-activated water (PAW 2min and PAW 4min) than in normal water. However, an excessive amount of N-content fertilizer (a heavy dose) adversely affects the growth of plants/crops, while a suitable amount stimulates the growth-affecting factors of plants/crops; therefore, we observed lower plant growth in PAW 8min than in PAW 2min in the present work. We did not observe consistency between the results of the experiments performed at different times. There was a 15-day gap between the different sets of experiments, and the surrounding weather changed from week to week. Therefore, we expect different chemical compositions of the plasma-treated water produced by the same atmospheric air plasma source when the surrounding environment changes.
The primary experimental findings on plasma agriculture demonstrate the great potential of plasma technology in the agriculture sector at every step from seed treatment to fruit/vegetable storage. However, there is a gap between the laboratory findings and their implementation in the field. There should be a bridge between scientists and farmers to bring plasma technology from the lab to the farm. To make plasma technology cost-effective and reliable for farmers, we must work on several important tasks: designing and developing an appropriate plasma source to prepare plasma-activated water, making plasma-activated water a complete liquid fertilizer by adding the required nutrients, operating the plasma sources with solar cells to reduce the cost of the technology, preparing common data sets so that the technique can be used in every part of the globe, and carrying out a wide spectrum of research on the further development of the technology. In the future, we will be working on a few such projects with the specific objective of implementing low-temperature plasma technology in the agriculture and food sector.
## X Acknowledgement
The authors are very grateful to Dr. Gajendra Singh, Dr. Raviprakash Chandra, Dr. Roli Mishra, Mrs. Nikita, and Mr. Nilesh Patel for their assistance in the chemical analysis, for providing the laboratory facilities, and for fruitful discussions during the experiments at the Institute of Advanced Research, Gandhinagar, India.
|
2304.07085 | Observational constraints on the metagalactic Ly$\alpha$ photon
scattering rate at high redshift | The scattering of Ly$\alpha$ photons from the first radiating sources in the
Universe plays a pivotal role in 21-cm radio detections of Cosmic Dawn and the
Epoch of Reionization through the Wouthuysen-Field effect. New data from JWST
show the Ly$\alpha$ photon scattering rate exceeds that required to decouple
the intergalactic hydrogen spin temperature from that of the Cosmic Microwave
Background up to $z\sim14$ and render the neutral hydrogen visible over the
main redshift range expected for the Epoch of Reionization. | Avery Meiksin | 2023-04-14T12:19:17Z | http://arxiv.org/abs/2304.07085v1 | # Observational constraints on the metagalactic Ly\(\alpha\) photon scattering rate at high redshift
###### Abstract
The scattering of Ly\(\alpha\) photons from the first radiating sources in the Universe plays a pivotal role in 21-cm radio detections of Cosmic Dawn and the Epoch of Reionization through the Wouthuysen-Field effect. New data from _JWST_ show the Ly\(\alpha\) photon scattering rate exceeds that required to decouple the intergalactic hydrogen spin temperature from that of the Cosmic Microwave Background up to \(z\sim 14\) and render the neutral hydrogen visible.
cosmology - reionization - intergalactic medium
## 1 Introduction
The reionization of intergalactic H i is the last major phase change in the baryonic component of the Universe. Two avenues have been followed for its discovery: the search for the reionization sources and the direct detection of the Epoch of Reionization (EoR) through radio 21-cm measurements. The two are intimately related through the production of both ionizing radiation and Ly\(\alpha\) photons. The latter are crucial to unpin the H i spin temperature from the Cosmic Microwave Background (CMB) through the Wouthuysen-Field effect (WFE) (Wouthuysen, 1952; Field, 1959), and so render detectable the EoR, and the Cosmic Dawn of the first radiating sources leading up to it, in the radio against the CMB. While theoretical predictions suggest galaxies provide sufficient Ly\(\alpha\) photons for the WFE to be effective at redshifts \(z<20\), and possibly to \(z<30\), direct observational support for the required galaxies has been constrained to \(z<10\)(Madau, 2018). It is shown here that recent deeper _JWST_ observations suggest galaxies provide sufficient numbers of Ly\(\alpha\) photons for the WFE to act at least to \(z\sim 14\), the 3\(\sigma\) upper limit for the EoR from the _Planck_ 2018 data.
## 2 The WFE and the EoR
The condition for the WFE to be effective against the CMB is
\[\frac{P_{\alpha}}{P_{\rm th}}=\frac{1}{18\pi}\,\frac{f_{\alpha\rm L}\,f_{\rm LH}}{f_{\rm esc}}\,n_{\rm H}\,\lambda_{\alpha}^{3}\,\frac{A_{\alpha}}{A_{10}}\,\frac{T_{\ast}}{T_{\rm CMB}}>1, \tag{1}\]
(Madau et al., 1997) where \(\lambda_{\alpha}\) is the Ly\(\alpha\) photon wavelength, \(A_{\alpha}\) and \(A_{10}\) are the spontaneous decay rates of the Ly\(\alpha\) and 21-cm hyperfine transitions, respectively, \(T_{\ast}=h_{\rm P}\nu_{10}/k_{\rm B}\) where \(\nu_{10}\) is the 21-cm transition frequency, \(h_{\rm P}\) is Planck's constant, \(k_{\rm B}\) is the Boltzmann constant, \(T_{\rm CMB}\) is the CMB temperature, and \(P_{\rm th}=(27/4)A_{10}T_{\rm CMB}/T_{\ast}\) is the thermalization rate. The cosmic number densities of Ly\(\alpha\), \(n_{\alpha}\), and Lyman Limit, \(n_{L}\), photons generated by galaxies are related through \(n_{\alpha}=f_{\rm\alpha L}n_{L}\) with \(f_{\rm\alpha L}\sim 1\). Only a fraction up to \(f_{\rm esc}\sim 0.2\) of Lyman Limit photons escape into the Intergalactic Medium (IGM) (Robertson, 2022). Here, \(f_{\rm LH}=f_{\rm esc}n_{L}/n_{\rm H}\sim 0.01-1\) corresponds to the EoR, so that the combination \((f_{\rm\alpha L}f_{\rm LH}/f_{\rm esc})>0.05\) during the EoR. For a baryon density \(\Omega_{b}h^{2}=0.022\) and \(T_{\rm CMB}=2.725\) K today, during the EoR \(P_{\alpha}/P_{\rm th}>0.002(1+z)^{2}\) exceeds unity by \(z\sim 25\), corresponding to the 21-cm line redshifted to
\(\sim 50\) MHz, making it possible to detect the EoR in the low-frequency radio band (Madau et al., 1997), and motivating radio EoR experiments (Ekers, 2012).
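As a quick numerical check of the scalings quoted above (not part of the original text), the observed 21-cm frequency and the quoted EoR lower bound on \(P_{\alpha}/P_{\rm th}\) can be evaluated directly:

```python
# Observed 21-cm frequency and the quoted EoR lower bound P_alpha/P_th > 0.002 (1+z)^2.
NU_21 = 1420.4  # rest-frame 21-cm hyperfine frequency [MHz]

def nu_obs(z):
    """Redshifted 21-cm frequency [MHz]."""
    return NU_21 / (1.0 + z)

def wfe_bound(z):
    """Lower bound on P_alpha / P_th during the EoR (see text)."""
    return 0.002 * (1.0 + z) ** 2

for z in (14, 20, 25):
    print("z = %2d: nu_obs = %5.1f MHz, bound = %.2f" % (z, nu_obs(z), wfe_bound(z)))
# At z ~ 25 the quoted lower bound already exceeds unity and the line falls near 50 MHz.
```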
## 3 The Metagalactic Ly\(\alpha\) Photon Scattering Rate
The UV continuum radiation emitted by a galaxy between the Ly\(\alpha\) and Ly\(\beta\) frequencies will be redshifted to the local Ly\(\alpha\) frequency and contribute to the WFE. In terms of the UV luminosity density of galaxies \(\rho_{\rm UV}\), the Ly\(\alpha\) photon scattering rate is (for a flat spectrum; Donnan et al., 2023) \(P_{\alpha}\simeq(5/27)\tau_{\alpha}\rho_{\rm UV}/(h_{\rm P}n_{\rm H})\), where \(\tau_{\alpha}\) is the Gunn-Peterson optical depth, which is also the number of times a Ly\(\alpha\) photon scatters before redshifting away (Field, 1959; Higgins & Meiksin, 2012).
For a Salpeter IMF and expected young galaxy metallicity, the cosmic star formation rate density \(\dot{\rho}_{*}\simeq K_{\rm UV}\rho_{\rm UV}\), where \(K_{\rm UV}\simeq 1.15\times 10^{-28}\,{\rm M}_{\odot}\,{\rm yr}^{-1}/({\rm erg\,s^{-1}\,Hz^{-1}})\) (Madau & Dickinson, 2014). A simple estimate for \(\dot{\rho}_{*}\) is given by the fraction \(F_{\rm gal}\) of haloes that collapse with masses above the threshold required for star formation (eg Barkana & Loeb, 2005): \(\dot{\rho}_{*}=\bar{\rho}_{b}\epsilon_{*}dF_{\rm gal}/dt\), where \(\bar{\rho}_{b}\) is the mean cosmic baryon density and \(\epsilon_{*}\) is the star formation efficiency. The thresholds for star-forming haloes are taken as \(M_{\rm thresh}\simeq 10^{6}[26/(1+z)]^{1/2}\,{\rm M}_{\odot}\) and \(M_{\rm thresh}\simeq 9.1\times 10^{6}\exp[-(1+z)/51]\,{\rm M}_{\odot}\) for molecular hydrogen and atomic hydrogen cooled haloes, respectively (Meiksin, 2011). (The latter applies if molecular hydrogen formation is disrupted by the radiation from an earlier generation of galaxies, Haiman et al., 1997). A common proxy for atomic-cooled haloes is to require their post-shock or virial temperature to exceed \(10^{4}\) K.
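For illustration only, the two star-formation threshold masses quoted above can be evaluated as simple functions of redshift (Python); this is a direct transcription of the formulas in the text, not an additional model.

```python
import math

def m_thresh_h2(z):
    """Threshold halo mass [M_sun] for molecular-hydrogen-cooled haloes (see text)."""
    return 1.0e6 * math.sqrt(26.0 / (1.0 + z))

def m_thresh_atomic(z):
    """Threshold halo mass [M_sun] for atomic-hydrogen-cooled haloes (see text)."""
    return 9.1e6 * math.exp(-(1.0 + z) / 51.0)

for z in (10, 15, 20, 25):
    print("z = %2d: M_H2 ~ %.2e M_sun, M_atomic ~ %.2e M_sun"
          % (z, m_thresh_h2(z), m_thresh_atomic(z)))
```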
Figure 1: Evolution of the galactic UV luminosity density at 1500A (\({\rm ergs\,s^{-1}\,Hz^{-1}\,Mpc^{-3}}\)) (upper panel) and the Ly\(\alpha\) scattering rate \(P_{\alpha}\), normalized by the thermalization rate \(P_{\rm th}\) (lower panel). The curves correspond to the indicated minimum halo mass thresholds required for star formation.
The resulting UV luminosity densities are shown in the upper panel of Fig. 1 for \(\epsilon_{*}=0.01\), adopting the halo mass function from Reed et al. (2007), adapted to _Planck_ 2018 constraints on the cosmological parameters (Planck Collaboration, 2018). The estimate compares well with a more sophisticated model, allowing star-formation only in haloes with virial temperatures above \(10^{4}\) K (Hernquist & Springel, 2003, HS03), updated to the _Planck_ 2018 power spectrum normalization.
These are compared with the measured values from Oesch et al. (2013, 2018) and Donnan et al. (2023) (for \(M_{1500}<-17\)) in the upper panel of Fig. 1. The inferred values for \(P_{\alpha}/P_{\rm th}\) are shown in the lower panel. At \(z<10\), the measured UV emissivity shows \(P_{\alpha}>P_{\rm th}\), so that the hydrogen spin temperature should be well removed from the CMB temperature. The data from Oesch et al. (2013, 2018), however, suggest a rapidly declining emissivity at \(z>9\). The shaded region represents the declining number of collapsed haloes with masses \(9.5<\log_{10}M_{h}/M_{\odot}<10.5\). The trend suggests that by \(z=13\) the data no longer ensure \(P_{\alpha}>P_{\rm th}\). The observations of Donnan et al. (2023) using _JWST_ show, on the contrary, that \(P_{\alpha}/P_{\rm th}>1\) is maintained to \(z\sim 14\). This is sufficient to cover the entire waveband (115-203 MHz) probed by the Low Frequency Array (LOFAR) High-band Antenna EoR experiment (van Haarlem et al., 2013).
|
2301.02723 | CFG2VEC: Hierarchical Graph Neural Network for Cross-Architectural
Software Reverse Engineering | Mission-critical embedded software is critical to our society's
infrastructure but can be subject to new security vulnerabilities as technology
advances. When security issues arise, Reverse Engineers (REs) use Software
Reverse Engineering (SRE) tools to analyze vulnerable binaries. However,
existing tools have limited support, and REs undergo a time-consuming, costly,
and error-prone process that requires experience and expertise to understand
the behaviors of software and vulnerabilities. To improve these tools, we
propose $\textit{cfg2vec}$, a Hierarchical Graph Neural Network (GNN) based
approach. To represent binary, we propose a novel Graph-of-Graph (GoG)
representation, combining the information of control-flow and function-call
graphs. Our $\textit{cfg2vec}$ learns how to represent each binary function
compiled from various CPU architectures, utilizing hierarchical GNN and the
siamese network-based supervised learning architecture. We evaluate
$\textit{cfg2vec}$'s capability of predicting function names from stripped
binaries. Our results show that $\textit{cfg2vec}$ outperforms the
state-of-the-art by $24.54\%$ in predicting function names and can even achieve
$51.84\%$ better given more training data. Additionally, $\textit{cfg2vec}$
consistently outperforms the state-of-the-art for all CPU architectures, while
the baseline requires multiple training to achieve similar performance. More
importantly, our results demonstrate that our $\textit{cfg2vec}$ could tackle
binaries built from unseen CPU architectures, thus indicating that our approach
can generalize the learned knowledge. Lastly, we demonstrate its practicability
by implementing it as a Ghidra plugin used during resolving DARPA Assured
MicroPatching (AMP) challenges. | Shih-Yuan Yu, Yonatan Gizachew Achamyeleh, Chonghan Wang, Anton Kocheturov, Patrick Eisen, Mohammad Abdullah Al Faruque | 2023-01-06T21:45:50Z | http://arxiv.org/abs/2301.02723v1 | # CFG2VEC: Hierarchical Graph Neural Network for Cross-Architectural Software Reverse Engineering
###### Abstract
Mission-critical embedded software is critical to our society's infrastructure but can be subject to new security vulnerabilities as technology advances. When security issues arise, _Reverse Engineers_ (REs) use _Software Reverse Engineering_ (SRE) tools to analyze vulnerable binaries. However, existing tools have limited support, and REs undergo a time-consuming, costly, and error-prone process that requires experience and expertise to understand the behaviors of software and vulnerabilities. To improve these tools, we propose _cfg2vec_, a Hierarchical _Graph Neural Network_ (GNN) based approach. To represent binaries, we propose a novel _Graph-of-Graph_ (GoG) representation, combining the information of control-flow and function-call graphs. Our _cfg2vec_ learns how to represent each binary function compiled from various CPU architectures, utilizing a hierarchical GNN and a siamese network-based supervised learning architecture. We evaluate _cfg2vec_'s capability of predicting function names from stripped binaries. Our results show that _cfg2vec_ outperforms the state-of-the-art by 24.54% in predicting function names and can even achieve 51.84% better given more training data. Additionally, _cfg2vec_ consistently outperforms the state-of-the-art for all CPU architectures, while the baseline requires multiple trainings to achieve similar performance. More importantly, our results demonstrate that our _cfg2vec_ could tackle binaries built from unseen CPU architectures, thus indicating that our approach can generalize the learned knowledge. Lastly, we demonstrate its practicability by implementing it as a _Ghidra_ plugin used while resolving DARPA _Assured MicroPatching_ (AMP) challenges.
Software Reverse Engineering; Binary Analysis; Cross-Architecture; Machine Learning; Graph Neural Network;
## I Introduction
In mission-critical systems, embedded software is vital in manipulating physical processes and executing missions that could pose risks to human operators. Recently, the _Internet of Things_ (IoT) has created a market valued at 19 trillion dollars and drastically grown the number of connected devices to approximately 35 billion by 2025 [1, 2, 3]. However, while IoT brings technological growth, it unintentionally exposes mission-critical systems to novel vulnerabilities [4, 5, 6]. The reported number of IoT cyberattacks increased by 300% in 2019 [7], while the discovered software vulnerabilities rose from 1.6k to 100k [8]. The consequences can be detrimental; as indicated in [9], the _Heartbleed_ bug [10] can lead to a leakage of up to 64 KB of memory, threatening not only personal but also organizational information security. Besides, _Shellshock_ is a bash command-line interface shell bug, but it has existed for 30 years and remains a threat to enterprises today [11, 12]. For mission-critical systems, unexpected disruptions can incur millions of dollars in losses even if they only last for a few hours or minutes [13]. As a result, timely analysis of the affected software and patching of its vulnerabilities become critical.
However, mission-critical systems usually use software that can last for decades due to the criticality of the missions. Over time, these systems become legacy, and the number of newly discovered threats can increase (as illustrated in Figure 1). Typically, for legacy software, the original development environment, maintenance support, or source code might no longer exist. To address vulnerabilities, vendors offer patches in the form of source code changes based on the current software version (e.g., ver 0.9). However, the only available data in the legacy system is a binary based on an older version of its source code (e.g., ver 0.1). Such a version gap poses challenges in applying patches to the legacy binaries, leaving direct binary analysis as the only solution for patching legacy software. Today, as Figure 2 shows, _Reverse Engineers_ (REs) have to leverage _Software Reverse Engineering_ (SRE)
Fig. 1: Legacy software life cycle.
tools such as _Ghidra_[14], _HexRays_[15], and _radare2_[16] to first disassemble and decompile binaries into higher-level representations (e.g., C or C++). Typically, these tools take the binary together with its debugging information, strings, and symbol table to reconstruct function names and variable names, allowing REs to rebuild a software's structure and functionality without access to source code [17]. For REs, these symbols encode the context of the source code and provide invaluable information that could help them understand the program's logic as they work to patch vulnerable binaries. However, symbols are often excluded to optimize the binary's footprint in mission-critical legacy systems where memory is limited. Because recovering symbols from _stripped binaries_ is not straightforward, most decompilers assign meaningless symbol names to coding elements. To understand the software semantics, REs have to leverage their experience and expertise to consume the information and then interpret the semantics of each coding element.
Recent works tackle these challenges with _Machine Learning_ (ML), aiming to recover the program's information from raw binaries. For example, [18] and [19] associate code features with function names and model the relationships between such code features and the corresponding source-level information (variable names in [19], variable & function names in [18]). Meanwhile, [20] and [21] use an encoder-decoder network structure to predict function names from stripped binary functions based on instruction sequences and control flows. However, none of them support cross-architectural debug information reconstruction. On the other side, there exist works focusing on cross-platform support in their ML models [22, 23, 24]. These works focus on modeling binary code similarity, extracting a real-valued vector from each control-flow graph (CFG) with attributed features, and then computing the _Structural Similarity_ between the feature vectors of binary functions built from different CPU architectures.
In this paper, as part of a multi-industry-academia joint initiative between Siemens, the Johns Hopkins University Applied Physics Laboratory (JHU/APL), BAE Systems (BAE), and UCI, we propose _cfg2vec_, which utilizes a hierarchical _Graph Neural Network_ (GNN) for reconstructing the name of each binary function, aiming to develop the capacity for quick patching of legacy binaries in mission-critical systems. Our _cfg2vec_ forms a _Graph-of-Graph_ (GoG) representation, combining CFG and FCG to model the relationship between binary functions' representations and their semantic names. Besides, _cfg2vec_ can tackle cross-architectural binaries thanks to the design of its Siamese-based network architecture, as shown in Figure 3. One crucial use case of cross-architectural decompilation is _patching_, where the goal is to identify a known vulnerability or a bug and apply a patch. However, there can be architecture gaps, as software with a bug may be compiled into many devices with diverse hardware architectures. For example, it is challenging to patch a stripped binary from an exotic embedded architecture compiled ten years ago that is vulnerable to a known attack such as _Heartbleed_[10]. While the reference patch is available in software, the reference architecture may not be readily available or documented, or the vendor may no longer exist. Under such circumstances, mapping code features across architectures is very helpful. It would allow for identifying similarities in code between a stripped binary that is vulnerable and its reference patch, even if the patch were built for a different type of CPU architecture. For _cfg2vec_, our targeted contributions are as follows:
* We propose representing binary functions in _Graph-of-Graph_ (GoG) and demonstrate its usefulness in reconstructing function names from stripped binaries.
* We propose a novel methodology, _cfg2vec_, that uses a hierarchical _Graph Neural Network_ (GNN) to model control-flow and function-calling relations in binaries.
* We propose using cross-architectural loss when training, allowing _cfg2vec_ to capture the architecture-agnostic representations of binaries.
* We release _cfg2vec_ in a GitHub repository: [https://github.com/AICPS/mindsight_cfg2vec](https://github.com/AICPS/mindsight_cfg2vec).
* We integrate our _cfg2vec_ into an experimental Ghidra plugin, assisting the realistic scenarios of patching DARPA _Assured MicroPatching_ (AMP) challenge binaries.
The paper is structured as follows: Section II discusses related works and fundamentals to provide a better understanding of the paper. Section III describes _cfg2vec_, including the problem formulation, data preprocessing, and an introduction to our main pipeline. Section IV shows our experimental results. Lastly, we conclude the paper in Section V.
## II Related Work
This section introduces software reverse engineering backgrounds, discusses the related works using machine learning to improve reverse engineering, and ultimately covers graph learning for binary analysis.
### _Software Reverse Engineering_
_Software Reverse Engineering_ (SRE) aims at understanding the behavior of a program without having access to its source code, often being used in many applications such as detecting malware [25, 26], discovering vulnerabilities, and patching bugs in _legacy software_[27, 28]. One primary tool that _Reverse Engineers_ (REs) use to inspect programs is _disassembler_ which translates a binary into low-level assembly code. Examples of such tools include _GNU Binutils' objdump_[29], _IDA_[15], Binary Ninja [30], and Hopper [31]. However, even with these tools, reasoning at the assembly level still requires considerable cognitive effort from RE experts.
More recently, REs use _decompilers_ such as _Hex-Rays_[32], or _Ghidra_[14] to reverse the compiling process by further translating the output of disassemblers into the code that ensembles high-level programming languages such as C or C++ to reduce the burden of understanding assembly code. From assembly instructions, these decompilers can use program analysis and heuristics to reconstruct variables, types, functions, and control flow structure of a binary. However, the decompilation is incomplete even if these decompilers generate a higher-level output for better code understanding.
The reason is that the compilation process discards the source-level information and lowers its abstraction level in exchange for a smaller footprint size, faster execution time, or even security considerations. The source-level information such as comments, variable names, function names, and idiomatic structure can be essential for understanding a program but is typically unavailable in the output of these decompilers.
As Figure 2 demonstrates, REs use disassemblers or decompilers to generate high-level source code. Moreover, [33] indicates that REs take notes and assign names to the critical functions related to the vulnerabilities, creating an annotated source code based on the high-level machine-generated source code. While annotating the source code, REs also analyze the parts relevant to the vulnerability and ignore general instructions or unrelated code. At the same time, understanding the logic flow among functions is another major task they must address. After classification, annotation, and understanding, REs experiment with several viable remedies to find the correct patch to fix the vulnerability.
### _Machine Learning for Reverse Engineering_
Software binary analysis is a straightforward first step to enhance security, as developers usually deploy software as binaries [34]. Usually, experts conduct the patching process or vulnerability analysis by understanding the compilation source, function signatures, and variable information. However, after compilation, such information is usually stripped or deliberately obscured (e.g., through _obfuscation_). Software binary analysis becomes more challenging in this case, as analysts have to recover the source-level information based on their experience and expertise. Early recovery work for binaries focused on manual analysis and suffered from low efficiency, high cost, and the error-prone nature of reverse engineering.
As _Machine Learning_ (ML) has significantly advanced in its reasoning capability, applying ML and reconstructing higher-level source code information as an alternative to manual-based approaches has attracted considerable research attention. For example, [35] was the first approach that used neural network-based and graph-based models, predicting the function types to assist the reverse engineer in understanding the binary. [36] also predicted function names with neural networks, aggregating the related features of sections of binary vectors. Then, it analyzes the connections between each function in the source code (e.g., Java) and their corresponding function names for function name prediction. [18], on the other hand, did not use a neural network. It combined a decision-tree-based classification algorithm and a structured prediction with a probabilistic graphical model, then matched the function name by analyzing symbol names, types, and locations. However, [18] can only predict from a predetermined closed set, incapable of generalizing to new names.
As the languages for naming functions are similar to natural language, recent research works started leaning toward the use of _Natural Language Processing_ (NLP) [20, 21, 37]. Precisely, these models predict semantic tokens based on the function names in the library, composing the function name during inference. The underlying premise is that each token corresponds in some way to the attributes and functionality of the function. [20] uses _Control-Flow Graphs_ (CFGs) to predict function names. It combined static analysis with LSTM and transformer neural models to recover the names of functions. However, its dataset, which consisted of unbalanced data and insufficient features, was limited and hindered overall performance. [37] was designed to address this dataset limitation: it provided the _UbuntuDataset_, which contains more than 9 million functions in 22K software packages. [21] demonstrated the framework's effectiveness by building a large dataset. It considers the fine-grained sequence and structure information of assembly code when modeling and realizing function name prediction. Meanwhile, [21] reduced the diversity of the data (instructions or words) while keeping the basic semantics unchanged, similar to word stemming and semantics in NLP. However, these works have low precision scores for prediction tasks; for example, [21] achieves only around 41% in correctly predicting function name subtokens. Moreover, the metrics for the inference of unknown functions are substantially lower [21], making it difficult for REs to find these methods helpful in practice.
Although many existing works can reconstruct source-level information, none of them supports reconstructing cross-platform debug information. Cross-compilation is becoming more popular in the development of software. Hardware manufacturers, for instance, often reuse the same firmware code base across several devices running on various architectures [38]. A tool that performs cross-architecture function name prediction/matching would be beneficial if we have a stripped binary compiled for one architecture and a binary of a comparable program compiled for another architecture with debug symbols. We may use the binary with the debug symbols to predict the names of functions in the stripped binary, which significantly aids debugging. A tool that could capture the architecture-agnostic characteristics of binaries
Fig. 2: The RE flow to solve security issues.
would also help in malware detection as the source code of malware can be compiled in different architectures [38, 39]. Comparing two binaries of different architectures becomes more complicated because they will have different instruction sets, calling conventions, register sets, etc. Furthermore, assembly instructions from different architectures cannot often be compared directly due to the slightly different behavior of different architectures [40]. Cross-architecture function name prediction will assist in finding a malicious function in a program compiled for different architectures by learning its features from a binary compiled for just one architecture. The tools mentioned above are not architecture-agnostic; thus, we cannot utilize them for such applications. To address the flaws mentioned above, aid in creating more efficient decompilers, and make reverse engineering more accessible, we propose _cfg2vec_. Incorporating the cross-architectural siamese network architecture, our _cfg2vec_ can learn to extract robust features that encompass platform-independent features, enhancing the state-of-the-art by achieving function name reconstruction across cross-architectural binaries.
### _Graph Learning for Binary Analysis_
Graph learning has become a practical approach across fields [41, 42, 43, 44]. Although conventional ML can effectively capture the features hidden in Euclidean data, such as images, text, or videos, our work focuses more on the application where the core data is graph-structured. Graphs can be irregular, and a graph may contain a variable size of unordered nodes; moreover, nodes can have a varying number of neighboring nodes, making deep learning mathematical operations (e.g., 2D Convolution) challenging to apply. The operations in conventional ML methods can only be applied by projecting non-Euclidean data into low-dimensional embedding space. In graph learning, _Graph Embeddings_ (GE) can transform a graph into a vector (embedding of a graph) or a set of vectors (embedding of nodes or edges) while preserving the relevant and structural information about the graph [41]. _Graph Neural Network_ (GNN) is a model aiming at addressing graph-related tasks in an end-to-end manner, where the main idea is to generate a node's representation by aggregating its representation and the representations of its neighbors [42]. GNN stacks multiple graph convolution layers, graph pooling layers, and a graph readout to generate a low-dimensional graph embedding from high-dimensional graph-structured data.
In software binary analysis, many approaches use _ControlFlow Graphs_ (CFGs) as the primary representations. For example, _Genius_ forms an _Attributed Control-Flow Graph_ (ACFG) representation for each binary function by extracting the raw attributes from each _Basic Block_ (BB), a straight-line code sequence with no branching in or out except at the entry and exit, in an ACFG [22]. _Genius_ measures the similarity of a pair of ACFGs through a bipartite graph matching algorithm, and the ACFGs are then clustered based on similarity. _Genius_ leverages a codebook for retrieving the embedding of an ACFG based on similarity. Another approach, _Gemini_, proposes a deep neural network-based model along with a Siamese architecture for modeling binary similarities with greater efficiency and accuracy than other state-of-the-art models of the time [23]. _Gemini_ takes in a pair of ACFGs extracted from raw binary functions generated from known vulnerability in code and then embeds them with a shared _Structure2vec_ model in their network architecture. Once embedded, _Gemini_ trains its model with a loss function that calculates the cosine similarities between two embedded representations. _Gemini_ outperforms models like _Genius_ or other approaches such as bipartite graph matching. In literature, there exist other works that consider the _Function Call Graph_ (FCG) as their primary data structures in binary analysis for malware detection [45]. Our _cfg2vec_ extracts relevant platform-independent features by combining the usage of CFG and FCG, resulting in a _Graph-of-Graph_ (GoG) representation for cross-architectural high-level information reconstruction tasks (e.g., function name).
## III CFG2vec Architecture
This section begins with problem formulation. Next, as Figure 4 shows, we depict how our _cfg2vec_ extracts the _Graph-of-Graph_ (GoG) representation from each software binary. Lastly, we describe the network architecture in _cfg2vec_.
### _Problem Formulation_
In our work, given a binary code, denoted as \(p\), compiled from different CPU architectures, we extract a graph-of-graph (GoG) representation, \(\mathcal{G}=(\mathcal{V},\mathcal{A})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{A}\) is the adjacency matrix (as Figure 3 shows). The nodes in \(\mathcal{V}\) represent functions and the edges in \(\mathcal{A}\) indicate their cross-referencing relationships. That is, each node \(f_{i}\in\mathcal{V}\) is a CFG, which we denote as \(f_{i}=(B,A,\phi)\), where the nodes in \(B\) represent the basic blocks and the edges in \(A\) denote their dependency relationships. \(\phi\) is a mapping function that maps each basic block in its assembly form to its corresponding extracted attributes, \(\phi(v_{i})=C^{k}\), where \(C\) is a numeric value and \(k\) is the number of attributes for the basic block (BB). Whereas the CFG structure is meant to provide more information at the lower BB level, the GoG structure is intended for recovering information at the overarching function level between the CFGs. Figure 3 is an example of a partial GoG structure with a closer inspection of one of its CFG nodes and another of a single CFG BB node, showing the set of features corresponding to that BB node. The goal is to design an efficient and effective graph embedding technique that can be used for reconstructing the function names for each function \(f_{i}\in\mathcal{V}\).
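To make the notation concrete, the following minimal sketch (Python, using networkx) builds a toy GoG with the structure just described; the attribute dimension of 12 anticipates the basic-block features listed in the next subsection, and all names here are illustrative rather than taken from the released implementation.

```python
import networkx as nx
import numpy as np

K = 12  # number of numeric attributes extracted per basic block (BB)

def make_cfg(bb_features, edges):
    """A CFG f_i = (B, A, phi): nodes are BBs carrying k-dim attribute vectors."""
    cfg = nx.DiGraph()
    for bb, feats in bb_features.items():
        cfg.add_node(bb, x=np.asarray(feats, dtype=np.float32))  # phi(bb)
    cfg.add_edges_from(edges)  # control-flow dependencies A
    return cfg

def make_gog(cfgs, call_edges):
    """A GoG G = (V, A): nodes are functions (each a CFG), edges are call relations."""
    gog = nx.DiGraph()
    for name, cfg in cfgs.items():
        gog.add_node(name, cfg=cfg)
    gog.add_edges_from(call_edges)
    return gog

# Toy binary with two functions, 'main' calling 'helper'.
cfg_main = make_cfg({0: np.ones(K), 1: np.zeros(K)}, [(0, 1)])
cfg_help = make_cfg({0: np.ones(K)}, [])
gog = make_gog({"main": cfg_main, "helper": cfg_help}, [("main", "helper")])
print(gog.number_of_nodes(), gog.number_of_edges())  # -> 2 1
```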
### _Ghidra Data ToolKit for Graph Extraction_
To extract the structured representation required for _cfg2vec_, we leverage the state-of-the-art decompiler _Ghidra_[14] and the _Ghidra Headless Analyzer_1. The _headless analyzer_ is a command-line version of _Ghidra_ that allows users to perform many tasks supported by _Ghidra_ (such as analyzing a binary file) via a command-line interface. For extracting GoG
from a binary, we developed our _Ghidra Data Toolkit_ (GDT); GDT is a set of Java-based metadata extraction scripts used for instrumenting the _Ghidra Headless Analyzer_. First, GDT programmatically analyzes the given executable file and stores the extracted information in the internal Ghidra database. Ghidra provides a set of APIs to access the database and retrieve the information about the analyzed binary. GDT uses these APIs to export information such as Ghidra's PCode and the call graph for each function. Specifically, the _FunctionManager_ API allows us to manipulate the information of each decompiled function in the binary and acquire the cross-calling dependencies between functions. For each function, we utilized another Ghidra API, _DecompInterface_2, to extract 12 attributes associated with each basic block in a function. These attributes correspond to the total number of instructions, the numbers of arithmetic, logic, transfer, call, data transfer, SSA, compare, and pointer instructions, the number of other instructions not falling within those categories, and the total numbers of constants and strings within that BB. Lastly, by integrating all of this information, we form a GoG representation \(\mathcal{G}\) for each binary \(p\). We repeat this process until all binaries are converted to the GoG structure. We feed the resulting GoG representations to our model in batches, with the batch size denoted as B.
Footnote 2: Documentation of _Ghidra API DecompInterface_: [https://ghidra.re/ghidra_docs/api/ghidra/app/decompiler/DecompInterface.html](https://ghidra.re/ghidra_docs/api/ghidra/app/decompiler/DecompInterface.html)
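For illustration, a much-simplified version of this extraction can be written as a Ghidra script; the sketch below uses Jython (Ghidra's built-in Python scripting) rather than the Java-based GDT, touches only the standard `FunctionManager` and `DecompInterface` APIs, and reduces the per-BB attributes to a single basic-block count, so it is a stand-in for the real toolkit rather than the toolkit itself.

```python
# Simplified GoG metadata export, run as a Ghidra (Jython) script.
# `currentProgram` and `monitor` are provided by the Ghidra scripting environment.
from ghidra.app.decompiler import DecompInterface

decomp = DecompInterface()
decomp.openProgram(currentProgram)

call_edges = []   # function-call graph edges: (caller, callee)
bb_counts = {}    # function name -> number of decompiled basic blocks

fm = currentProgram.getFunctionManager()
for func in fm.getFunctions(True):                     # iterate all functions
    for callee in func.getCalledFunctions(monitor):    # cross-calling dependencies
        call_edges.append((func.getName(), callee.getName()))
    res = decomp.decompileFunction(func, 60, monitor)  # 60-second timeout
    high = res.getHighFunction()
    if high is not None:
        bb_counts[func.getName()] = high.getBasicBlocks().size()

print("functions: %d, call edges: %d" % (fm.getFunctionCount(), len(call_edges)))
```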
### _Hierarchical Graph Neural Network_
Once \(\mathcal{G}\) is extracted from the GDT, we then feed it to our hierarchical network architecture (inspired from [46]) that contains both _CFG Graph Embedding_ layer and _GoG Graph Embedding Layer_ as Figure 4 shows. For each GoG structure, we denote it as \(\mathcal{G}=(\mathcal{V},\mathcal{A})\) where \(\mathcal{V}\) is a set of functions associated with \(\mathcal{G}\) and \(\mathcal{A}\) indicates the calling relationships between the functions in \(\mathcal{V}\). Each function in \(\mathcal{V}\) is in the form of CFG \(f_{i}=(B,A,\phi)\) where each node \(b\in B\) is a BB represented in a fixed-length attributed vector \(b\in R^{d}\), and \(d\) is the dimension that we have mentioned earlier. \(A\) encodes the pair-wise control-flow dependency relationships between these BBs.
#### III-C1 CFG Graph Embedding Layer
Our network architecture first feeds all functions in a batch of GoGs to the _CFG Graph Embedding Layer_, which consists of multiple graph convolutional layers and a graph readout operation. The input to this layer is a function \(f_{i}=(B,A,\phi)\) and the output is a fixed-dimensional vector representing the function. For each BB \(b_{k}\) we let \(b_{k}^{0}=b_{k}\), and we update \(b_{k}^{t}\) to \(b_{k}^{t+1}\) with the graph convolution operation shown as follows:
\[b_{k}^{t+1}=f_{G}(Wb_{k}^{t}+\sum_{b_{m}\in A_{k}}Mb_{m}^{t})\]
where \(f_{G}\) is a non-linear activation function such as ReLU, \(A_{k}\) is the list of adjacent BBs for \(b_{k}\), and \(W\in R^{d\times d}\) and \(M\in R^{d\times d}\) are the weights to be learned during the training. We run \(T\) iterations of such a convolution, which can be a tunable hyperparameter in our model. During the updates, each BB gradually aggregates the global information of the control-flow dependency relations into its representation, utilizing the representation of its neighbor. We obtain the final representation for each BB as \(b_{k}^{T}\). To acquire the representation for the function \(f_{i}\), we apply a graph readout operation such as _sum-readout_, described as follows,
\[g^{(T)}=\sum_{b_{k}\in B}b_{k}^{T} \tag{1}\]
We assign the value of \(g^{(T)}\) (a.k.a. CFG embedding) to \(f_{i}\). The graph readout operation can be replaced with _mean-readout_ or _max-readout_.
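A minimal PyTorch sketch of this per-CFG message passing and sum-readout is given below; it mirrors the update rule and Eq. (1) above but is an illustrative re-implementation rather than the released code, and the dense-adjacency form assumes small CFGs.

```python
import torch
import torch.nn as nn

class CFGEmbedding(nn.Module):
    """T rounds of b_k <- ReLU(W b_k + sum_{m in A_k} M b_m), then sum-readout."""
    def __init__(self, d, T=3):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)
        self.M = nn.Linear(d, d, bias=False)
        self.T = T

    def forward(self, B, A):
        # B: (n_bb, d) basic-block attribute vectors; A: (n_bb, n_bb) 0/1 adjacency.
        h = B
        for _ in range(self.T):
            h = torch.relu(self.W(h) + self.M(A @ h))  # neighbour aggregation (bias-free, so M distributes over the sum)
        return h.sum(dim=0)                            # sum-readout -> CFG embedding g^(T)

# Toy usage: a CFG with 3 basic blocks and 12-dimensional attributes.
d = 12
B = torch.randn(3, d)
A = torch.tensor([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
print(CFGEmbedding(d)(B, A).shape)  # torch.Size([12])
```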
#### III-C2 GoG Graph Embedding Layer
Once all the functions have been converted to fixed-length graph embeddings, we then feed \(\mathcal{G}\) to the second layer of _cfg2vec_, the _GoG Embedding Layer_. Here, for each function \(f_{i}\) we apply another \(L\) iterations of graph convolution with \(\mathcal{F}\) and \(\mathcal{C}\). The updates can be illustrated as follows,
\[f_{k}^{(l+1)}=f_{GoG}(Uf_{k}^{l}+\sum_{f_{m}\in C_{k}}Vf_{m}^{(l)}) \tag{2}\]
where \(f_{GoG}\) is a non-linear activation function and \(C_{k}\) is the list of adjacent functions (calling) for the function \(f_{k}\) and \(U\in R^{d\times d}\) and \(V\in R^{d\times d}\) are the weights to be learned during the training. Lastly, we take the \(f_{k}^{(L)}\) as the representation that considers both CFG and GoG graph structures. We use these updated representations to perform cross-architecture function similarity learning.
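Complementing the CFG-level sketch above, the second-level update of Eq. (2) can be illustrated in the same way; the standalone PyTorch fragment below is likewise only an illustrative sketch that propagates already-computed CFG embeddings over the call-graph adjacency.

```python
import torch
import torch.nn as nn

class GoGEmbedding(nn.Module):
    """L rounds of f_k <- ReLU(U f_k + sum_{m in C_k} V f_m) over the function-call graph."""
    def __init__(self, d, L=2):
        super().__init__()
        self.U = nn.Linear(d, d, bias=False)
        self.V = nn.Linear(d, d, bias=False)
        self.L = L

    def forward(self, F, C):
        # F: (n_func, d) per-function CFG embeddings from the first layer;
        # C: (n_func, n_func) call-graph adjacency of the GoG.
        for _ in range(self.L):
            F = torch.relu(self.U(F) + self.V(C @ F))
        return F  # updated embedding f_k^(L) for every function

# Toy usage: three functions whose 12-dim CFG embeddings stand in for g^(T).
d = 12
F = torch.randn(3, d)
C = torch.tensor([[0., 1., 1.], [0., 0., 0.], [0., 0., 0.]])  # 'main' calls two helpers
print(GoGEmbedding(d)(F, C).shape)                            # torch.Size([3, 12])
```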
Fig. 3: An example of a _Graph-of-Graph_ (GoG) of a binary compiled from a package Freccell with amd64 CPU architecture.
#### III-C3 Siamese-based Cross-Architectural Function Similarity
Given a batch of GoGs \(B=\{GoG_{1},GoG_{2},...,GoG_{B}\}\), we apply the hierarchical graph neural network to acquire the set of updated function embeddings, denoted as \(B_{F}=\{f_{1}^{(T)},f_{2}^{(T)},...,f_{K}^{(T)}\}\). We calculate the function similarity for each function pair with cosine similarity, denoted as \(\hat{y}\in[-1,1]\). The loss function \(J\) between \(\hat{y}\) and a ground-truth label \(y\), which indicates whether a pair of functions correspond to the same function or not, is calculated as follows,
\[J(\hat{y},y)=\begin{cases}1-\hat{y},&\text{if }y=1,\\ \max(0,\hat{y}-m),&\text{if }y=-1,\end{cases} \tag{3}\]
the final loss \(L\) is then calculated as follows,
\[L=H(Y,\hat{Y})=\sum_{i}(J(\hat{y_{i}},y_{i})), \tag{4}\]
where \(Y\) stands for the ground-truth labels (similar or dissimilar), and \(\hat{Y}\) represents the corresponding predictions. More specifically, we denote a pair of functions as similar if they are the same function but compiled for different CPU architectures. Here, \(m\) is a margin constant that prevents the learned embeddings from becoming distorted (by default, \(0.5\)). To maintain the balance between positive and negative training samples, we developed a custom batching algorithm: when a binary of some package is added to a batch, the algorithm also adds a binary of the same package built for a different architecture as a positive sample, and a binary from another package as a negative sample. This gives every batch a balanced proportion of positive and negative samples. Finally, we use the loss \(L\) to update all the associated weights in our neural networks with an _Adam_ optimizer. Once trained, we then use the model to perform function name reconstruction tasks.
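The training objective of Eqs. (3)-(4) is essentially a margin-based cosine-similarity contrastive loss; an illustrative PyTorch version (not the released implementation) is sketched below.

```python
import torch
import torch.nn.functional as F

def pairwise_cosine_loss(emb_a, emb_b, labels, m=0.5):
    """Contrastive loss over function-embedding pairs.

    emb_a, emb_b: (N, d) embeddings of paired functions;
    labels: (N,) with +1 for the same function across architectures, -1 otherwise.
    """
    y_hat = F.cosine_similarity(emb_a, emb_b, dim=1)                 # in [-1, 1]
    pos = (1.0 - y_hat) * (labels == 1).float()                      # pull similar pairs together
    neg = torch.clamp(y_hat - m, min=0.0) * (labels == -1).float()   # push dissimilar pairs apart
    return (pos + neg).sum()

# Toy batch of 4 pairs with 12-dimensional embeddings.
a, b = torch.randn(4, 12), torch.randn(4, 12)
labels = torch.tensor([1, -1, 1, -1])
print(pairwise_cosine_loss(a, b, labels).item())
```

With a sum reduction, this coincides with PyTorch's built-in `CosineEmbeddingLoss` using `margin=m`.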
## IV Experimental Results
In this section, we evaluate _cfg2vec_'s capability in predicting function names. We first describe the dataset preparation and the training setup processes. Then, we present the comparison of _cfg2vec_ against baseline in predicting function names. Although many baseline candidates tackle the same problem [18, 20, 37, 21], some require purchasing a paid version of IDA Pro to preprocess datasets, and some even do not open source their implementations. Therefore, [18] was the only feasible choice, as running other models using our datasets was almost impossible. Next, we also show the result of the ablation study over _cfg2vec_. Besides, we exhibit that our _cfg2vec_ can perform architecture-agnostic prediction better than the baseline. Lastly, we illustrate the real-world use case where our _cfg2vec_ is integrated as a _Ghidra_ plugin application for assisting in resolving challenging reverse engineering tasks. We conducted all experiments on a server equipped with Intel Core i7-7820X CPU @3.60GHz with 16GB RAM and two NVIDIA GeForce GTX Titan Xp GPUs.
### _Dataset Preparation_
Our evaluation data source is the ALLSTAR (_Assembled Labeled Library for Static Analysis Research_) dataset, hosted by the _Applied Physics Laboratory_ (APL) [47]. It has over 30,000 Debian Jessie packages pre-built for the i386, amd64, ARM, MIPS, PPC, and s390x CPU architectures for software reverse engineering research. The authors used a modified _Dockcross_ script in Docker to build each package for each supported architecture. Then, they saved each resulting ELF with its symbols, the corresponding source code, header files, intermediate files (.o, .class, .gkd, .gimple), system headers, and system libraries altogether.
To form our datasets, we selected the packages that have ELF binaries built for the amd64, armel, i386, and mipsel CPU architectures. i386 and amd64 are widely used in general-purpose computers, especially in Intel and AMD products, respectively. MIPS and ARM are crucial in embedded systems, smartphones, and other portable electronic devices [48]. In practice, we excluded the packages with only one CPU architecture in the ALLSTAR dataset. Additionally, due to our limited local computing resources, we eliminated packages that were too large to handle. We checked whether each selected binary contains ground-truth symbol information, using the _Ghidra_ decompiler and the Linux file command, and removed the ones that do not. Lastly, we assembled our primary dataset, called the _AS-4cpu-30k-bin_ dataset, which consists of 27,572 pre-built binaries from 1,117 packages and 4 CPU architectures, as illustrated in Table I.
Our preliminary experiment revealed that the evaluation had a data leakage issue when splitting the dataset randomly. Therefore, we performed a non-random variant of the train-test split with a 4-to-1 ratio on the _AS-4cpu-30k-bin_ dataset, selecting roughly 80% of the binaries for the training dataset and leaving the rest for the testing dataset. We referenced the splitting method of [23], aiming to ensure that the binaries
Fig. 4: The architecture of _cfg2vec_ with a supervised hierarchical graph neural network approach.
that belong to the same packages stay in the same set, either the training or the testing set. Such a splitting method allows a fair evaluation of _cfg2vec_ without package-level leakage.
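A minimal sketch of such a package-aware split is shown below; the 4-to-1 ratio follows the description above, while the function name `package_aware_split`, the input format, and the random seed are our own assumptions.

```python
import random
from collections import defaultdict

def package_aware_split(binaries, ratio=0.8, seed=0):
    """Split binaries so that all binaries of a package land in the same set.

    binaries: list of (package_name, binary_path) tuples
    """
    by_package = defaultdict(list)
    for pkg, path in binaries:
        by_package[pkg].append(path)

    packages = sorted(by_package)
    random.Random(seed).shuffle(packages)

    n_train = int(len(packages) * ratio)        # roughly 80% of packages (and binaries)
    train_pkgs = set(packages[:n_train])

    train = [p for pkg in train_pkgs for p in by_package[pkg]]
    test = [p for pkg in packages[n_train:] for p in by_package[pkg]]
    return train, test

# toy usage
bins = [("freecell", "amd64/freecell"), ("freecell", "armel/freecell"),
        ("curl", "amd64/curl"), ("curl", "i386/curl")]
train_set, test_set = package_aware_split(bins)
```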
Next, we converted the binaries in the _AS-4cpu-30k-bin_ dataset into their _Graph-of-Graph_ (GoG) representations, leveraging the GDT mentioned previously in Section III-B. Notably, we processed the binaries of one package in a single batch, as developers might define user functions in different modules of the same package while putting prototype declarations in that package's main module. In this case, _Ghidra_ recognizes two function instances: one contains only the function declaration, while the other has the actual function content. As these two instances correspond to the same function name and one contains only dummy instructions, they can create noise in our datasets and thus affect our model's learning. To cope with this, our GDT searches the other binaries of the same package for the function bodies. If found, our GDT associates that user function with the function graph node carrying the actual content data. Besides user functions, library function calls may exist, and searching for their function bodies in the same package would fail for dynamically loaded binaries. Under such circumstances, _Ghidra_ recognizes these functions as _ThunkFunctions_\({}^{3}\), which contain only one dummy instruction. As a workaround, we removed these _ThunkFunctions_ from our data as they might mislead the model's learning. Applying this workaround means that our model predicts function names for user functions and statically linked functions.
Footnote 3: ThunkFunction Manual: [https://ghidra.re/ghidra_docs/api/ghidra/program/model/listing/ThunkFunction.html](https://ghidra.re/ghidra_docs/api/ghidra/program/model/listing/ThunkFunction.html)
We experimented with [18] on our datasets, referencing their implementation\({}^{4}\). As [18] used a dataset with 3,000 binaries for its experiments, we followed accordingly, preparing datasets of smaller but similar sizes. We achieved this by downsampling our primary _AS-4cpu-30k-bin_ dataset, creating the _AS-3cpu-9k-bin_ dataset, which has 9,000 binaries for the i386, amd64, and armel CPU architectures. Furthermore, as [18] supports only one CPU architecture at a time, we separated the _AS-3cpu-9k-bin_ dataset by CPU architecture, generating three training datasets for testing [18]: _AS-i386-3k-bin_, _AS-amd64-3k-bin_, and _AS-armel-3k-bin_. For training, we utilized the strip Linux command, converting our original data into three variants: the original binaries (_debug_), stripped binaries with debug information (_stripped_), and stripped binaries without debug information (_stripped_wo_symtab_), to follow [18]'s required data format. For evaluation, we sampled 100 binaries from our primary dataset for each CPU architecture, labeled _AS-amd-100-bin_, _AS-i386-100-bin_, _AS-armel-100-bin_, and _AS-mipsel-100-bin_. We also have another evaluation dataset called _AS-noMipsel-300-bin_, which contains roughly 300 binaries produced for the amd64, i386, and armel platforms. Table I summarizes the data statistics for all these datasets, including the numbers of packages and binaries and the average numbers of function nodes, edges, and BB nodes. The following sections detail how we utilized these datasets in our experiments.
Footnote 4: Debin's [18] repository: [https://github.com/eth-sri/debin](https://github.com/eth-sri/debin)
### _Evaluation: Function Name Prediction_
Table II demonstrates the results of _cfg2vec_ in predicting function names. For the baseline, we followed [18]'s best setting, where the feature dimensions of register and stack offset are both 100, to train with our prepared datasets. For _cfg2vec_, we used three GCN layers and one GAT convolution layer in both graph embedding layers. For evaluation, we calculate the p@k (i.e., precision at k) metric, which refers to an average hit ratio over the top-k list of predicted function names. Specifically, we feed each binary, represented as a GoG, into our trained model and acquire a function embedding \(h_{f}\) for each function \(f\in F\). Then, we calculate pairwise cosine similarities between \(h_{f}\) and all the other function embeddings, forming a top-k list by selecting the k names whose embeddings are most similar to \(h_{f}\). If the ground-truth function name is among the top-k list of function name predictions, we regard that as a hit; otherwise, it is a miss. During experiments, we set k to 5, so our model recommends the five most likely names for each function in a binary.
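The p@k computation described above can be sketched as follows. This is an illustrative reconstruction of the metric with our own function names, assuming the function embeddings and ground-truth names are already available.

```python
import numpy as np

def precision_at_k(embeddings, names, k=5):
    """Average hit ratio over top-k name predictions.

    embeddings: (N, d) array of function embeddings h_f
    names:      list of N ground-truth function names
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                 # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)          # never match a function with itself
    hits = 0
    for i in range(len(names)):
        top_k = np.argsort(-sims[i])[:k]     # indices of the k most similar functions
        if names[i] in {names[j] for j in top_k}:
            hits += 1
    return hits / len(names)

# toy usage
emb = np.random.rand(6, 32)
gt = ["main", "init", "main", "parse", "init", "parse"]
print(precision_at_k(emb, gt, k=5))
```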
As shown in Table II, _cfg2vec_, trained with the _AS-3cpu-9k-bin_ dataset, achieves a 69.75% prediction accuracy (i.e., p@1) in inferring function names. For [18], we had to train their models for each CPU architecture separately, as it cannot train in a cross-architectural manner. Even so, for amd64 binaries, [18] only achieves 29.32% precision, while for i386 and armel it reaches 52.64% and 53.65%, respectively. This result indicates that our _cfg2vec_ outperforms [18] in every case. Besides, while [18] yields only one prediction, our _cfg2vec_ suggests five choices, making it flexible for our users (e.g., REs) to select what they believe best fits the function among the top k predicted names. The p@2 to p@5 results in Table II demonstrate that our _cfg2vec_ provides enough hints of function names for users. For example, p@5 of _cfg2vec_ trained with our _AS-3cpu-9k-bin_ dataset achieves 70.50% precision across all the CPU architecture binaries. We also experimented with larger training datasets. From Table II, we observe that _cfg2vec_ gains 5.04% in correctly predicting function names (i.e., p@1). Moreover, the gain increases to 28% when training _cfg2vec_ with the _AS-4cpu-30k-bin_ dataset. We believe training on a larger dataset implies training with a more diversified set of binaries. This allows our model to acquire more knowledge, thus being capable
of extracting more robust features for binary functions. In summary, this result indicates that compared to the baseline, our model can effectively provide contextually relevant names for functions in the decompiled code to our users.
We also experimented with various ablated network setups to study how each component of _cfg2vec_ contributes to performance. First, we simplified _cfg2vec_ by stripping one GCN layer from the original experimental setup. As shown in Table III, we call this setup _2GCN-GAT_; it slightly decreased the performance, by 0.75%. Then, from the _2GCN-GAT_ setup, we further removed the GAT layer, calling it _2GCN_. We again observed a marginal performance decrease (\(<\)1%). Next, we eliminated another GCN layer from _2GCN-GAT_, constructing the _GCN-GAT_ setup. For _GCN-GAT_, we saw a drastic drop (4.2%), which highlights that the number of GCN layers can be an essential factor in performance. Specifically, we found that going from 1 to 2 GCN layers improves prediction accuracy by more than 4%. However, we do not observe a significant performance gain when increasing the number of GCN layers to more than three. Therefore, we retained the original _cfg2vec_ model with its three GCN layers. All in all, as shown in Table III, all these ablated models still outperform [18], which we attribute to the GoG representation we built for each binary in the dataset.
### _Evaluation: Architectural-agnostic Prediction_
Table IV demonstrates _cfg2vec_'s capability in terms of cross-architecture support. As [18] supports training on one CPU architecture at a time, we had to train it multiple times during the experiments. Specifically, we trained [18] on three datasets: _AS-amd64-3k-bin_, _AS-i386-3k-bin_, and _AS-armel-3k-bin_, calling the resulting trained models [18]-amd64, [18]-i386, and [18]-armel, respectively. For these baseline models, we observe that they perform well when tested on binaries built for the same CPU architecture but poorly on binaries built for different CPU architectures. For instance, [18]-amd64 achieves 29.3% accuracy for amd64 binaries, but performs worse for i386 and armel binaries (13.8% and 7.1%). Similarly, [18]-i386 achieves 52.6% accuracy for i386 binaries, but performs worse for amd64 and armel binaries (6.2% and 1.1%). Lastly, [18]-armel achieves 53.6% accuracy for armel binaries, but performs worse for amd64 and i386 binaries (11.8% and 8.9%). We used the top-1 prediction generated by _cfg2vec_ (i.e., p@1) as the comparison metric, as [18] produces only one prediction per function. From the results, we observe that _cfg2vec_ outperforms [18] across all three tested CPU architectures. The fact that _cfg2vec_ performs consistently well across all CPU architectures indicates that our _cfg2vec_ supports cross-architecture prediction.
To evaluate the capability of generalizing the learned knowledge, we tested all models on the _AS-mipsel-100-bin_ dataset, which has binaries built for another widely used CPU architecture, mipsel, that our _cfg2vec_ was not trained on. [18] has lower performance when tested on binaries built for CPU architectures it was not trained on; for example, its highest accuracy is 13.84%, obtained when trained on amd64 binaries and evaluated on i386 binaries. In our work, as Table IV shows, our _cfg2vec_ achieves 36.69% accuracy when trained with amd64, i386, and armel binaries but tested on mipsel binaries. [18] does not even support analyzing mipsel binaries. In short, these results demonstrate that our _cfg2vec_ outperforms the baseline in the function name prediction task on cross-architectural binaries and generalizes better to binaries built for unseen CPU architectures. To further investigate _cfg2vec_'s cross-architecture performance, we trained it on three datasets, each consisting of binaries built for two different architectures. We then gave the resulting trained models names that indicate the architectures from which the binaries were derived: _cfg2vec_-armel-i386, _cfg2vec_-amd64-i386, and _cfg2vec_-armel-amd64. These results show that our model performs well in the function name prediction task across all of these scenarios, including when tested on binaries compiled for unknown CPU architectures.
### _The Practical Usage of CFG2VEC_
In this section, we demonstrate how _cfg2vec_ assists REs in dealing with _Defense Advanced Research Projects Agency_ (DARPA) _Assured MicroPatching_ (AMP) challenge binaries. The AMP program aims at enabling fast patching of legacy mission-critical system binaries, enhancing decompilation and guiding it toward a particular goal of a _Reverse Engineer_ (RE) by integrating existing source code samples, the original build process information, and historical software artifacts.
#### IV-D1 The MINDSIGHT project
Our multi-industry-academia initiative between Siemens, JHU/APL, BAE, and UCI jointly developed the project _Making Intelligible Decompiled Source by Imposing Homomorphic Transforms_ (MINDSIGHT). Our team focused on building an automated toolchain integrated with _Ghidra_, aiming to enable the decompilation process with (1) a less granular identification of modular units, (2) an accurate reconstruction of symbol names, (3) the lifting of binaries to stylized C code, (4) a principled and scalable approach to reason about code similarity, and (5) the benchmarking of new decompilation techniques using state-of-the-art embedded software binary datasets. To date, our team has developed an open-source tool, _CodeCut_\({}^{5}\), to improve the accuracy and completeness of _Ghidra_'s module identification, providing an automated script-based decompilation analysis toolchain to ease the RE's expert interpretation. We also developed a _Homomorphic Transform Language_ (HTL) to describe transformations on _Abstract Syntax Tree_ (AST) languages and the rules of their composition. Our tool, integrated with _Ghidra_, allows developers to transform the decompiled code syntactically while keeping it semantically equivalent. The key idea is to use this HTL to morph a _Ghidra_ AST into a GCC AST to lift the decompiled binary to a high-level C representation. This process can make it easier for REs to comprehend the binary code. _cfg2vec_ is another tool developed in the MINDSIGHT project; it enables the reconstruction of function names, saving REs manual guesswork.
Footnote 5: _CodeCut_'s repository: [https://github.com/DARPAMINDSIGHT/CodeCut](https://github.com/DARPAMINDSIGHT/CodeCut)
#### IV-D2 The cfg2vec plugin
In the _MINDSIGHT_ project, we incorporated _cfg2vec_ into the _Ghidra_ decompiler as a plugin application. Our _cfg2vec_ plugin assists REs in comprehending binaries by providing a list of potential function names for each function whose name is missing. Technically, like all _Ghidra_ plugins, our _cfg2vec_ plugin is based on Java, with its core inference modules implemented as a REST API in Python 3.8. Once the metadata of a stripped binary is extracted from the _Ghidra_ decompiler, it is sent to the _cfg2vec_ endpoint, which calculates and returns the inferred mappings for all the functions. Figure 5 demonstrates the user interface of our _cfg2vec_ plugin. In this scenario, the user must provide the vulnerable binary and the reference binary with extra debug information, such as function names. The "Match Functions" button triggers the _cfg2vec_ functionality and displays the function mapping results in three tables:
* _Matched Table_: displays the mapping of similar functions.
* _Mismatched Table_: displays the mapping of _dissimilar_ functions and, therefore, candidates for patching.
* _Orphan Table_: displays the mapping of functions with a low confidence score.
The groupings reduce REs' workload. Rather than inspecting all functions, they can focus on patching candidate functions (mismatched functions) and the orphans. The "Explore Functions" button invokes Ghidra's function explorer, where the two functions can be compared side-by-side, as shown in Figure 5. This utility allows the user to switch between C and assembly language, thus assisting in confirming or modifying the mappings from the three tables. Regarding
Fig. 5: The plugin screenshot integrated into Ghidra.
_cfg2vec_'s function prediction, the "Rename Function" button takes the selected row from the tables and applies the name from the patched binary to the vulnerable binary. When the "Match Functions" button fires, we invoke the FCG and CFG generators for the two programs (vulnerable and patched).
#### IV-D3 The use-case for AMP challenge binaries
The DARPA AMP challenges ask REs to patch a vulnerability caused by a weak encryption algorithm, where the encryption of communication traffic was accomplished with a deprecated cipher suite, Triple DES (3DES) [49]. For this challenge, REs have to analyze the vulnerable binary, identify the functions and instructions to be patched, the _3DES cipher suite_ in this case, and replace 3DES-related function calls and instructions with the ones for AES [50]. All these steps happen at the decompiled binary level, and the vulnerable binaries are optimized by a compiler and stripped of debugging information and function names. Furthermore, these binaries are sometimes statically linked against libraries such as the GNU C Library [51] or OpenSSL, which introduce many extra functions into the binary (some of which will never be called or used). Given these complications, it becomes a non-trivial task for an RE to make sense of all these functions, find the problem, and successfully patch it. The direct usage of our _cfg2vec_ plugin is to pick a function of interest with stripped information and see predictions of potential function names or matching functions from the available reference binary, to confirm whether this function is on the critical path during the RE's problem solving. As Figure 5 shows, our plugin allows users to see possible matches between functions from a stripped vulnerable binary and functions from a patched (reference) binary with extra information. REs may then leverage such information and make appropriate notes for that function, allowing them to complete their jobs more efficiently. The main feedback we received from REs who used the tool was that this is functionality REs would like to have; however, the accuracy and usability of the tool were not yet high enough to fully realize its potential.
## V Conclusion
This paper presents _cfg2vec_, a Hierarchical Graph Neural Network-based approach for software reverse engineering. Building on top of _Ghidra_, our _cfg2vec_ plugin extracts a _Graph-of-Graph_ (GoG) representation for a binary, combining information from Control-Flow Graphs (CFG) and Function-Call Graphs (FCG). _cfg2vec_ utilizes a hierarchical graph embedding framework to learn a representation for each function in binary code compiled for various architectures. Our _cfg2vec_ then utilizes the learned function embeddings for function name prediction, outperforming the state-of-the-art [18] by an average of 24.54% across all tested binaries. By increasing the amount of training data, our model performs 51.84% better. While [18] requires separate training for each CPU architecture, our _cfg2vec_ consistently outperforms it across all architectures with only one training. Besides, our model generalizes better than [18] to binaries built for untrained CPU architectures. Lastly, we demonstrate that our _cfg2vec_ can assist real-world REs in resolving _DARPA Assured MicroPatching_ (AMP) challenges.
## Acknowledgment
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) and Naval Information Warfare Center Pacific (NIWC Pacific) under Contract Number N66001-20-C-4024. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
|
2310.14600 | NFT formalised | Non-fungible tokens, NFT, have been used to record ownership of real estate,
art, digital assets, and more recently to serve legal notice. They provide an
important and accessible non-financial use of cryptocurrency's blockchain but
are peculiar because ownership by NFT confers no rights over the asset. This
work shows that it is possible to specify that peculiar property by combining
functional and epistemic properties. Suitability of the specification is
evaluated by proof that the blockchain implementation conforms to it, and by
its use in an analysis of serving legal notice. | Martha N. Kamkuemah, J. W. Sanders | 2023-10-23T06:13:16Z | http://arxiv.org/abs/2310.14600v1 | # NFT formalised
###### Abstract
Non-fungible tokens, NFT, have been used to record ownership of real estate, art, digital assets, and more recently to serve legal notice. They provide an important and accessible non-financial use of cryptocurrency's blockchain but are peculiar because ownership by NFT confers no rights over the asset. This work shows that it is possible to specify that peculiar property by combining functional and epistemic properties. Suitability of the specification is evaluated by proof that the blockchain implementation conforms to it, and by its use in an analysis of serving legal notice.
**Keywords:** NFT, non-fungible token, blockchain, epistemic logic, serving notice.
## 1 Introduction
We begin by motivating the peculiar property that ownership by NFT confers no right over the asset. The combined evolutions of affluence and education have resulted in a trade-off between ownership of a unique good and its democratization. Recall, for example, the evolving ownership of reading matter.
The oldest known documents, from the 35\({}^{\mathrm{th}}\) century BC (see [14]: List of oldest documents), were mostly religious or historical and for use by those in power. The Egyptian _Books of the Dead_ were religious documents written for the pharaoh or queen by royal scribes; the Chinese _Shu_ was an historical and religious collection compiled for the ruler. Contemporary populations were illiterate and poor, so that their owning such unique documents simply did not arise.
In the Middle Ages illiteracy was still widespread in Europe. Manuscripts were handwritten and often sumptuously illuminated, like the religious _Books of Hours_. Each was unique and extremely valuable, and libraries were limited to religious orders or the very rich. Such manuscripts lay beyond the knowledge or interest of most of the population. Ownership was severely limited and documents unique.
In Europe, increased education and the printing press (Gutenberg, \(\sim\)1450) led ultimately to the era of newspapers (Strasbourg, 1605) and then in the last century to paperback books (Lane, Penguin Books, 1935) for literature, making reading matter affordable to virtually all, but at the expense of anything like the
uniqueness of documents. The market in first editions, and the continued popularity of author-signed copies, indicate the pleasure people derive from owning something distinctive and, where possible, unique (compare with the popularity of personalised car registration plates).
Digital books began in the 1960s and 70s and in the early 2020s are typified by the commercially successful Amazon _Kindle_ service [9]. A digital file is in principle able to be copied without limit. So ownership of anything digital is very far from owning something distinctive or unique. Documents have evolved from being unique but beyond ownership to being far from unique but owned.
That evolution, and digital assets in particular, have required legal rights to keep pace. The _public domain_ is composed of all creative assets to which no exclusive intellectual property rights apply. _Copyright_ gives its owner exclusive right over a creative asset to copy, distribute, adapt, display, and perform it. To cope with copyable assets, _Copyleft_ grants certain freedom over copies of assets with the requirement that the same rights be preserved in derivatives to use for any purpose, to modify, copy, share, and redistribute it. (Condensed from [14]: Public domain; Copyright; Copyleft.)
Since the advent of cryptocurrencies in 2008, and their distributed ledger the blockchain, the concept of _non-fungible token_, or NFT, has offered a return to a kind of unique ownership though without any of those rights. The token itself is digital, appearing on a blockchain. But the asset to which it applies may be traditional, like real estate and sculpture, or digital, like computer art. Ownership, without rights and of something which can be copied without limit, is an interesting and subtle concept.
NFT have captured popular imagination by allowing ownership of CryptoKitties [4]; membership of the Bored Ape Yacht Club [1]; ownership of digital art [6]; or of the first tweet on Twitter [8]. They also facilitate a history of ownership which is important for assets like diamonds [5] where an origin in conflict is to be avoided. More recently in the United States and Britain, delivery of an NFT to an electronic wallet has become an acceptable form of serving legal notice [11].
Multifarious future uses of NFT are likely. So it seems sensible to have a specification of NFT to which its blockchain implementation is shown to conform. That is the purpose of this paper, which is arranged as follows. Ownership is studied in Section 2 and used to specify NFT, by predicate \(\mathcal{N}\), in Section 3. Account is taken of its epistemic properties phrased in terms of 'public certifiability,' predicate \(\mathit{PC}\). In Section 4 the blockchain implementation is formalised and shown to satisfy the specification \(\mathcal{N}\). Finally in Section 5 the NFT formalism, and particularly its public certifiability, are considered with the case study of serving legal notice.
As with many recent developments in Information Technology, traditional references are not always appropriate. For example the elegant and hugely influential whitepaper [10] by Satoshi Nakamoto which began Bitcoin, the first cryptocurrency (of which there are now over 20,000) is not a refereed publication. References in this paper include both online documents and sites.
Background on NFT is covered by [2, 13] and [14]: Non-Fungible Tokens.
## 2 Ownership
Suppose _Agent_ denotes the type containing all owners and potential owners, assumed to be individuals; group ownership is considered at the end of this section. Suppose _Asset_ denotes the type of all things regarded as assets, physical or digital, existing or future, and _Asset\({}_{\exists}\)_ denotes the subset of those assets currently in existence.
Time is assumed to have type \(\mathbb{T}\) isomorphic to the natural numbers: there is an initial time, and the difference between consecutive times is constant.
Definition 1: (Ownership) The Boolean-valued function
\[\mathit{Owns}:\mathit{Agent}\times\mathit{Asset}\times\mathbb{T}\ \rightarrow\ \mathbb{B}\]
is interpreted: \(\mathit{Owns}(a,\alpha,t)\) holds iff agent a owns asset \(\alpha\) at time \(t\).
At any time not every agent need own an asset and not every asset \(\alpha:\mathit{Asset}\) need be owned; but for each agent-asset pair, ownership is well defined.
The set \(\mathit{Asset}_{\exists}\) of existing assets is temporal, so \(\mathit{Asset}_{\exists}\) has an implicit time variable. Furthermore often the time variable of _Owns_ will be left implicit, and temporal operators used to express its temporal invariants, with \(\Box\) for 'now and in future', and \(\bigcirc\) for 'next time' (well defined by our assumption on \(\mathbb{T}\)). The usual laws of linear temporal logic hold [12].
Each asset is _minted_ at some time, the point at which its default original owner identifies it by an ownership transaction on the blockchain. It may subsequently be put up for sale on one of the various sites, depending on the nature of the asset.
Subsequently the asset may change ownership many times, involving standard transactions on the blockchain. Ownership at any time is established by searching back through the blockchain, resulting in the list of owners (see Display (10)), just as for validation of a financial transaction.
Elimination of an asset is not considered here, though it is straightforward.
Group ownership may be covered by replacing _Agent_ by the set of all nonempty sets of agents. Then ownership by a set \(S\) of agents may be thought of as ownership by the parallel composition of the members of \(S\). The laws of ownership hold with individual owners replaced by nonempty sets of owners, as do: Definition 3 of NFT; the extensions of the operations _Mint_ and _TxO_; and the results. (In fact the case of empty \(S\) may be used as owner of a non-existing asset, to obviate the need for \(\mathit{Asset}_{\exists}\).)
### Laws
Properties of _Owns_ are as follows. Each is stated by abstracting the time variable, and quantifying it instead with \(\Box\) in order to focus on the property which remains invariant with time. The first three laws are fundamental; an informal executable reading of them is sketched after the list.
1. Each existing asset has an owner at any time: \[\Box\,(\forall\,\alpha:\mathit{Asset}_{\exists}\,\cdot\,\exists\,a:\mathit{Agent}\,\cdot\,\mathit{Owns}(a,\alpha))\,.\] (1)
2. At any time, each existing asset has at most one (hence exactly one) owner: \[\Box\,(\forall\,\alpha:\mathit{Asset}_{\exists}\,\cdot\,\forall\,a,a^{\prime}:\mathit{Agent}\,\cdot\,\left(\begin{array}{c}\mathit{Owns}(a,\alpha)\,\wedge\\ \mathit{Owns}(a^{\prime},\alpha)\end{array}\right)\Rightarrow\ a=a^{\prime})\,.\] (2) An agent may own many assets, or none.
3. A non-existing asset does not have an owner, since until it comes into existence it is not assumed to have an identity: \[\Box\,(\forall\,\alpha:\mathit{Asset}\setminus\mathit{Asset}_{\exists}\,\cdot\,\neg\,\exists\,a:\mathit{Agent}\,\cdot\,\mathit{Owns}(a,\alpha))\,.\] (3)
4. \(\mathit{Asset}_{\exists}\) increases with time, since elimination of assets is not considered: \[\Box\,(\mathit{Asset}_{\exists}\,\subseteq\,\bigcirc\,\mathit{Asset}_{\exists})\,.\] (4)
5. Consequently, the size of \(\mathit{Owns}\) increases with time (even though that need not be true of the assets owned by any individual): \[\Box\,(\#\mathit{Owns}\ \leq\ \bigcirc\,\#\mathit{Owns})\,.\] (5)
6. For any asset, the list of its past owners grows (as a result, the commission earnt for its originator grows). In terms of one list being a prefix of another: \[\Box\,(\,[a:\mathit{Agent}\mid\mathit{Owns}(a,\alpha)]\ \mbox{prefixes}\ \bigcirc\,[a:\mathit{Agent}\mid\mathit{Owns}(a,\alpha)]\,)\,.\] (6)
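As an informal illustration only, the fundamental laws (1)-(3) can be read as executable invariants over a snapshot of the _Owns_ relation. The Python sketch below is ours and plays no role in the formal development; the data representation is an assumption.

```python
def check_ownership_laws(owns, existing_assets, all_assets):
    """Check laws (1)-(3) for a single point in time.

    owns:            set of (agent, asset) pairs holding now
    existing_assets: set of assets minted so far (Asset_exists)
    all_assets:      set of all assets, existing or future
    """
    owners = {}
    for agent, asset in owns:
        owners.setdefault(asset, set()).add(agent)

    law1 = all(asset in owners for asset in existing_assets)      # every existing asset is owned
    law2 = all(len(agents) == 1 for agents in owners.values())    # at most one owner per asset
    law3 = all(asset in existing_assets for asset in owners)      # only existing assets are owned
    return law1 and law2 and law3

# toy usage
owns = {("alice", "sculpture"), ("bob", "tweet#1")}
print(check_ownership_laws(owns, {"sculpture", "tweet#1"},
                           {"sculpture", "tweet#1", "future-art"}))
```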
## 3 NFT specified
An NFT is a token on a blockchain. What properties should it have?
Firstly the token represents ownership of some asset so at any time it must satisfy properties (1) to (6), and in particular the fundamental (1) to (3). But furthermore, that ownership must be 'publicly certified.'
_Certified_ is interpreted to mean that all are aware of it. Without that, differences of awareness may occur; certifiability imposes uniformity of awareness. _Publicly_ certified means that all are aware that others are aware of what is certified. Without that, ignorance may be feigned; being public ensures a kind of authenticity. (Common knowledge, which public certifiability approximates to depth 2, is not freshly achievable in any realistic distributed system.)
For example a decision by a governing body is abided by not because the body is fortified by might or the law, but because of prior agreement and the fact that the board and its decision are publicly certified.
Not every certification is publicly so. I have various friends some of whom know each other and some of whom know no others. If I email my website to my friends, one-by-one, then each friend knows it. But none knows that the others know it. So amongst my friends my website is certified but not publicly certified.
To express public certifiability, recall that the epistemic temporal formula \(K_{x}\,\phi\) means that agent \(x\) knows predicate \(\phi\) (see [12]). For variable \(v:\mathbb{V},\ K_{x}\,v\) is shorthand for \(\exists\,w:\mathbb{V}\,\cdot\,\,K_{x}\,(v=w)\).
In epistemic logic only truths can be known: if \(\vdash\,K_{x}\,\phi\,\) then \(\vdash\,\phi\).
In terms of epistemic logic,
Definition 2: (Public certifiability) Fact \(\phi\) is _publicly certified_ amongst a set \(A\) of agents:
\[\mathit{PC}(A,\phi)\ :=\ \forall\,x,y:A\,\cdot\,K_{x}K_{y}\,\phi\,.\]
In particular, by the laws of epistemic logic, both \((\forall\ y:A\,\cdot\,K_{y}\,\phi)\) and \(\phi\) hold.
The final ingredient required is that of a token function to represent the relation \(\mathit{Owns}(a,\alpha,t)\). Suppose \(\tau\) is an injection
\[\tau:\{(a,\alpha,t):\mathit{Agent}\times\mathit{Asset}\times\mathbb{T}\ |\ \mathit{Owns}(a,\alpha,t)\}\rightarrow\mathbb{B}^{*}\]
to bitstrings. The token function and its inverse are common knowledge and are both quick to compute. The function \(\tau\) incorporates the identities of agent \(a\) and asset \(\alpha\), and assumes some global convention for time \(t\), thus saving those from having to be considered in greater detail here.
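For concreteness, one possible (non-normative) realisation of the injection \(\tau\) simply serialises the triple and encodes it as a bitstring; the JSON-based encoding below is our own assumption, and any injective, quickly invertible encoding would do.

```python
import json

def tau(agent: str, asset: str, t: int) -> str:
    """Injective encoding of an ownership fact Owns(agent, asset, t) as a bitstring."""
    payload = json.dumps({"agent": agent, "asset": asset, "time": t}, sort_keys=True)
    return "".join(format(b, "08b") for b in payload.encode("utf-8"))

def tau_inverse(bits: str):
    """Recover (agent, asset, t) from the bitstring produced by tau."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    d = json.loads(data.decode("utf-8"))
    return d["agent"], d["asset"], d["time"]

token = tau("alice", "sculpture", 42)
print(tau_inverse(token))   # ('alice', 'sculpture', 42)
```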
With that preparation:
Definition 3: (Specification \(\mathcal{N}\)) A non-fungible token, or NFT, is a publicly certified statement that agent \(a\) owns asset \(\alpha\) at time \(t\):
\[\mathit{PC}(\mathit{Agent},\tau(\mathit{Owns}(a,\alpha,t)))\,,\]
where the Owns relation satisfies (1) to (3). That is referred to as property \(\mathcal{N}\).
The success of using a token function is due to:
\[\mathit{K}_{x}\tau(\mathit{Owns}(a,\alpha,t))\ \Leftrightarrow\ K_{x}\mathit{Owns }(a,\alpha,t)\,.\]
Furthermore by the laws of epistemic logic, Property \(\mathcal{N}\) implies that each agent knows the value \(\mathit{Owns}(a,\alpha,t)\).
Although ownership is tokenised, ownership of a token for ownership is not considered, as that would entail an infinite regress.
It is worth repeating: ownership by NFT does not confer copyright, so is an unusual kind of ownership. (What would be gained by buying the NFT for the pdf of this paper?)
## 4 Blockchain implementation
NFT has been specified in Definition 3 by property \(\mathcal{N}\), independent of implementation.
An implementation which is centralised, and hence of little practical interest, would be provided by a publicly accessible bulletin board maintaining an up-to-date list of which agent owns what asset. Being centralised and openly accessible
its contents are publicly certified. But it would suffer the usual disadvantages of centralisation (including inefficiency of access due to bottlenecks, fragility, and susceptibility to corruption).
A distributed solution is of course provided by blockchain. Different NFT platforms offer different functionalities [7]. Assume a blockchain, _bc_, which supports the standard financial transactions and is sustained by a network _Net_ of nodes each running the blockchain software and holding a copy of _bc_ itself.
Agents interact with _Net_ using e-wallets in the standard manner. For simplicity it is assumed that all _Net_ nodes (and wallets) are honest and execute the same _bc_ software; each knows that and moreover knows that the others know it. In other words:
\[\mathit{PC}(\mathit{Net},\text{`all nodes are honest and run {bc} software'})\,. \tag{7}\]
So _Net_ nodes know they have the same version of _bc_ at any time.
Public certifiability is closed under subset of agents: if \(\mathit{PC}(A,\phi)\) and \(\mathit{B}\subseteq A\) then \(\mathit{PC}(\mathit{B},\phi)\). However it is not closed under union. So in extending \(\mathit{PC}(\mathit{Net},\phi)\) to \(\mathit{PC}(\mathit{Agent},\phi)\) in Theorem 4.1 below the following result will be helpful.
Lemma 1: (Extending PC) _Let A be a set of agents and agent \(x\not\in A\). If \(\mathit{PC}(A,\phi)\) and all agents are honest (tell only the truth), then \(\mathit{PC}(A\cup\{x\},\phi)\) provided for some \(\mathit{v}_{x}\in A\),_
\[\mathit{v}_{x} \to x:\mathit{PC}(A,\phi)\] \[\mathit{v}_{x} \to A:\mathit{K}_{x}\phi\,.\]
Proof.: Establishing \(\mathit{PC}(A\cup\{x\},\phi)\) requires, by Definition 2,
\[\forall\,v,w:A\cup\{x\}\,\cdot\,\mathit{K}_{v}\mathit{K}_{w}\,\phi\,.\]
The cases \(v\neq x\) and \(\mathit{w}\neq x\) are covered by assumption.
If \(v=x\) then from the first assumed communication, \(\forall\,w:A\,\cdot\,\mathit{K}_{x}\mathit{K}_{w}\,\phi\).
If \(\mathit{w}=x\) then from the second assumed communication and liveness, \(\forall\,v:A\,\cdot\,\mathit{K}_{v}\mathit{K}_{x}\,\phi\).
For simplicity it is also assumed of _bc_ that each block consists of a single transaction, either standard or an ownership token with its associated standard transactions, and that appending a block takes one time unit. In practice transaction fees are also included to ensure quick inclusion in a block. Those are neglected here, and relegated to the protocol between an e-wallet and the memory pool of its local _Net_ node.
However _Net_ communication is publicly certified and can be thought of as a distributed clock which is self-regulated to tick every ten minutes on average.
An NFT is implemented by a token on _bc_, stating ownership of an asset by an agent at the time of the token's inclusion in _bc_. To manage NFT, what properties must _bc_ maintain?
### Minting
Currently there seems to be no restriction on the asset or minter of an NFT.
Operation \(\mathit{Mint}(\mathit{orig},\alpha,t+1)\), by which agent \(\mathit{orig}\) establishes original ownership of asset \(\alpha\) at time \(t+1\), inputs \(\mathit{orig}:\mathit{Agent}\), \(\alpha:\mathit{Asset}\) and \(t:\mathbb{T}\). If \(\alpha\) is not already owned at \(t\), it appends to \(\mathit{bc}\) a block with token \(\tau(\mathit{Owns}(\mathit{orig},\alpha,t+1))\) representing original ownership of \(\alpha\) by \(\mathit{orig}\). But if \(\alpha\) is owned at \(t\) then an error message is returned and \(\mathit{bc}\) is left unchanged.
The precondition for \(\mathit{Mint}\) to update \(\mathit{bc}\) is:
\[\neg\,\exists\,a:\mathit{Agent}\,\cdot\,\exists\,u:\mathbb{T}\,\cdot\,u\leq t \ \land\ \mathit{Owns}(a,\alpha,u)\]
in which case \(\mathit{bc}\) is updated:
\[\mathit{Append}\,(\,\tau(\mathit{Owns}(\mathit{orig},\alpha,t+1)),\,\mathit{ bc}\,)\,,\]
where the operation \(\mathit{Append}\,(\mathit{b},\mathit{bc})\) appends block \(\mathit{b}\) to chain \(\mathit{bc}\). If the precondition fails then \(\mathit{bc}\) is unchanged.
### Standard transaction, \(\mathit{Tx}\)
A standard transaction \(\mathit{Tx}\,(\mathit{b},s,c,t)\) on \(\mathit{bc}\) involves buyer \(\mathit{b}:\mathit{Agent}\), seller \(s:\mathit{Agent}\), cost \(c:\mathbb{R}\) and time \(t:\mathbb{T}\). The asset changing hands in a standard transaction is of no concern to the \(\mathit{bc}\) ledger, which is occupied entirely with the financial validity of the transaction. That requires a user's sales to exceed its purchases by at least the cost \(c\) of the current transaction, a fact which is determined by a scan through \(\mathit{bc}\):
\[\sum\{\mathit{d}\mid\exists\,b\cdot\mathit{Tx}\,(\mathit{b},\mathit{u}, \mathit{d})\ \textbf{in}\ \mathit{bc}\}\,-\,\sum\{\mathit{d}\mid\exists\,s\cdot\mathit{Tx}\,(\mathit{u}, s,\mathit{d})\ \textbf{in}\ \mathit{bc}\}\ \geq\,c\,. \tag{8}\]
In that case (only), the transaction is appended to \(\mathit{bc}\).
Deposit, and reward for adding a block to \(\mathit{bc}\), are considered to be sales; and withdrawal is considered to be a purchase.
### Change of ownership transaction, \(\mathit{TxO}\)
An ownership transaction \(\mathit{TxO}\) extends a standard transaction \(\mathit{Tx}\) by making the asset, \(\alpha\), explicit. It inputs the current owner \(\mathit{old}\) and buyer \(\mathit{new}\), the asset \(\alpha:\mathit{Asset}\), cost \(c:\mathbb{R}\) and time \(t:\mathbb{T}\). Its precondition is that \(\mathit{new}\) can afford the cost \(c\) (in standard manner) and \(\mathit{old}\) does indeed own \(\alpha\) at that time:
\[\mathit{Owns}(\mathit{old},\alpha,t)\ \land\ \mathit{balance}(\mathit{new})\geq c\]
where \(\mathit{balance}(\mathit{new})\) equals the left-hand side of Inequality (8).
In that case a standard transaction is invoked for the sale. It is concatenated to the ownership token and so included in the block which is appended to \(\mathit{bc}\):
\[\mathit{Append}\,(\,\tau(\mathit{Owns}(\mathit{new},\alpha,t+1))\,\textbf{ concat}\,\mathit{Tx}(\mathit{new},\mathit{old},\,c,t+1),\,\mathit{bc}\,)\,.\]
Again, \(\mathit{TxO}\) takes one time unit to append the block containing that catenation.
A block in \(\mathit{bc}\) contains either a standard transaction or a \(\mathit{Mint}\) or \(\mathit{TxO}\) transaction (which includes a standard transaction). To check the precondition of a standard transaction all of \(\mathit{bc}\) must be searched. But for ownership only those blocks involving tokens need be checked, information which may be included in the block header.
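To make the operational definitions concrete, the following sketch models _bc_ as a list of blocks and implements the _Mint_ and _TxO_ preconditions and updates described above. It is an informal illustration under simplifying assumptions (tokens stored as plain tuples, no networking, consensus, or royalties), not the paper's formal model.

```python
from typing import List, Tuple

Block = Tuple  # ("TOKEN", agent, asset, t) or ("TX", buyer, seller, cost, t)

def owner_list(bc: List[Block], asset) -> list:
    """as(alpha): chronological list of owners recorded in bc (Display (10))."""
    return [blk[1] for blk in bc if blk[0] == "TOKEN" and blk[2] == asset]

def balance(bc: List[Block], agent) -> float:
    """Left-hand side of Inequality (8): sales minus purchases."""
    txs = [blk for blk in bc if blk[0] == "TX"]
    sales = sum(cost for _, buyer, seller, cost, t in txs if seller == agent)
    buys = sum(cost for _, buyer, seller, cost, t in txs if buyer == agent)
    return sales - buys

def mint(bc: List[Block], orig, asset, t) -> bool:
    """Append an original-ownership token if the asset is not yet owned."""
    if owner_list(bc, asset):
        return False                      # precondition fails: already minted
    bc.append(("TOKEN", orig, asset, t + 1))
    return True

def tx_o(bc: List[Block], new, old, asset, cost, t) -> bool:
    """Change of ownership: token plus the associated standard transaction."""
    owners = owner_list(bc, asset)
    if not owners or owners[-1] != old or balance(bc, new) < cost:
        return False                      # old must own the asset and new must afford it
    bc.append(("TOKEN", new, asset, t + 1))
    bc.append(("TX", new, old, cost, t + 1))
    return True

# toy usage
bc: List[Block] = []
bc.append(("TX", "exchange", "alice", 10.0, 1))   # deposit: treated as a sale by alice
mint(bc, "orig", "artwork", 2)
tx_o(bc, "alice", "orig", "artwork", 5.0, 3)
print(owner_list(bc, "artwork"))                  # ['orig', 'alice']
```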
Royalties are considered in Section 4.5.
### Correctness
\(\mathit{Asset}_{\exists}\) is evidently represented in the blockchain implementation as follows. At any time \(t:\mathbb{T}\):
\[\mathit{Asset}_{\exists}=\left\{\alpha:\mathit{Asset}\ |\ \exists\,u: \mathbb{T}\,\cdot\,u\leq t\ \ \wedge\ \exists\,a:\mathit{Agent}\,\cdot\,\mathit{Mint}(a,\alpha,u)\right\}. \tag{9}\]
As strong as the epistemic condition of public certifiability might seem, it holds for the distributed implementation using blockchain:
Theorem 4.1: (Correctness) _The blockchain implementation of NFT, assuming Net nodes are honest and incorporating \(\mathit{Mint}\), \(\mathit{TxO}\) and the communications of Lemma 1, satisfies property \(\mathcal{N}\) of Definition 3._
Proof. For any asset \(\alpha:\mathit{Asset}\), a search of \(\mathit{bc}\) by any node in \(\mathit{Net}\) returns the same list of owners, ordered chronologically in the same way as \(\mathit{bc}\),
\[\mathit{as}(\alpha):=\left[\,a:\mathit{Agent}\ |\ \tau[\mathit{Owns}(a, \alpha)]\ \mathbf{in}\ \mathit{bc}\,\right]. \tag{10}\]
Thus at any height of \(\mathit{bc}\), all nodes have the same value for \(\mathit{Owns}(a,\alpha)\).
The proof establishes \(\mathit{bc}\models\mathcal{N}\) by induction on \(\mathit{bc}\). First consider just properties (1) to (3) of \(\mathit{Owns}\).
For the base case, \(\mathit{bc}=\left[\ \right]\), \(\mathit{Owns}=\varnothing\) which vacuously satisfies the three properties.
For the step case suppose that \(\mathit{bc}=\mathit{Append}(\mathit{b},\mathit{bc}_{0})\) with block \(\mathit{b}\) containing a token transaction. Either \(\#\mathit{bc}_{0}=0\) or \(\#\mathit{bc}_{0}>0\). In the first case, \(\mathit{Mint}\) must have occurred since with \(\mathit{bc}=\left[\ \right]\) it is the only token action whose precondition holds. Thus \(\mathit{b}\) contains \(\tau[\mathit{Owns}(\mathit{orig},\alpha)]\), and \(\mathit{Owns}\) satisfies (1) to (3). In the second case, \(\mathit{bc}=\mathit{Append}(\mathit{b},\mathit{bc}_{0})\) where \(\#\mathit{bc}_{0}>0\) satisfies the three properties. The last token action to occur in \(\mathit{bc}_{0}\) was \(\mathit{Mint}\) if \(\#\mathit{bc}_{0}=1\) and otherwise \(\mathit{TxO}\). In either case the three properties hold by the invariant maintained by definition of both operators.
The proof of public certifiability is the same for both the base and step cases of that induction, so is given once. From (7), Equations (9) and (10) imply that the value of \(\mathit{Owns}(a,\alpha)\) is publicly certified throughout \(\mathit{Net}\): \(\mathit{PC}(\mathit{Net},\mathit{Owns})\). That is extended from \(\mathit{Net}\) to \(\mathit{Agent}\) as follows. For each \(x:\mathit{Agent}\) its e-wallet connects to some \(\mathit{v}_{x}\in\mathit{Net}\). Since the conditions of Lemma 1 hold, \(\mathit{PC}(\mathit{Net}\cup\{x\},\mathit{Owns})\). Iterating, \(\mathit{PC}(\mathit{Agent},\mathit{Owns})\), so \(\mathit{bc}\models\mathcal{N}\) as required.
### Royalties transaction, _TxOr_
Some NFT platforms offer the originator a royalty with each change in ownership. If \(c\) is the total cost of an ownership change then the royalty, written \(\%c\), is paid by _new_ to _orig_ with a standard transaction, just like its payment of the deficiency (\(c-\%c\)) to _old_.
The transaction for change of ownership, but incorporating royalties, is written \(\mathit{TxOr}\). The originator of \(\alpha:\mathit{Asset}\) is not an input since it is identified by any node as the first element of \(\mathit{as}(\alpha)\) in (10).
_TxOr_ has the same precondition as \(\mathit{TxO}\) (Section 4.3) and the same update of \(\mathit{bc}\) with an ownership token. However its successful occurrence invokes an additional standard transaction for the royalty, concatenated with the standard transaction of \(\mathit{TxO}\):
\[\mathit{Tx}\left(\mathit{new},\mathit{old},c-\%c,t+1\right)\mathbf{concat} \mathit{Tx}\left(\mathit{new},\mathit{orig},\%c,t+1\right).\]
In fact those standard transactions may occur in either order. The block then contains the concatenation with the token.
The correctness of the modified system follows as for the Correctness Theorem.
Theorem 4.2: (Corollary) _The inclusion of royalties in ownership transactions, replacing TxO by TxOr, conforms to \(\mathcal{N}\)._
## 5 Case study: serving legal notice
The serving of a legal notice is a formal action requiring stringent conditions about which there seems to be approximate international agreement (see [3] and [14]: Service of a Process). There are several possible standard methods, including service by:
* \((\alpha)\) an officer of the court directly to the recipient;
* \((\beta)\) registered post to the recipient;
* \((\gamma)\) email to the recipient's email address;
* \((\delta)\) publication in a newspaper's public notices;
Typically, \(\alpha\) is used if possible. When that is infeasible, \(\beta\) is acceptable but \(\gamma\) is not. If the court does not have an address for the recipient, \(\delta\) may be used.
Recently [11], courts in both the US and UK have allowed service by:
* \((\varepsilon)\) NFT to an e-wallet.
Method \(\varepsilon\) provides a quite different use of NFT. It also diverges from the acceptable methods \(\alpha\), \(\beta\) and \(\delta\) of serving notice and resembles closely the unacceptable method \(\gamma\). So it is included here as a case study; see Theorem 4.2.
What role do functional and epistemic properties play in the serving of notice? The standard methods are included and considered first, for comparison with \(\varepsilon\).
Consider a population \(\mathit{Pop}\) with court \(\mathit{cc}\), and a notice \(\mathit{nn}\) to be served to recipient \(\mathit{rr}:\mathit{Pop}\). Let
\[\phi_{0}\ :=\ cc\mbox{ has authority}\]
\[\phi_{1}\ :=\ (\mbox{notice}=\mathit{nn})\]
\[\phi_{2}\ :=\ \mbox{the serving officer has the authority of }cc\]
\[\phi_{3}\ :=\ rr\mbox{ has been served with }\mathit{nn}.\]
A condition required by a successful court system can be expressed as \(\phi_{0}\) being publicly certified amongst _Pop_, namely \(\mbox{\it PC}(\mbox{\it Pop},\phi_{0})\).
The following four derived properties are used to answer the previous question.
The recipient \(rr\) knows the authority of the method of being served notice:
\[K_{rr}\,\phi_{2}\,. \tag{11}\]
The court knows that \(rr\) has been served with _nn_:
\[K_{cc}\,\phi_{3} \tag{12}\]
and recipient \(rr\) knows that fact:
\[K_{rr}\,K_{cc}\,\phi_{3}\,. \tag{13}\]
Furthermore the court knows that _nn_ has been served privately. The only way for others to know about _nn_ is by \(rr\) telling them. So ruling out \(rr\)'s private communications after receiving _nn_,
\[K_{cc}((K_{x}\,\phi_{3})\ \Rightarrow\ x=rr)\,. \tag{14}\]
The standard methods satisfy these properties:
Lemma 2: (Standard methods) _Service of notice by method \(\alpha\) or by method \(\beta\) achieve Properties (11) to (14). Method \(\gamma\), by email, satisfies just Properties (11) and (14). Method \(\delta\), by newspaper publication, achieves none of (11) to (14)._
\begin{tabular}{|c||c|c|c|c|} \hline Method & (11) & (12) & (13) & (14) \\ \hline \hline \(\alpha\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \(\beta\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ \(\gamma\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\checkmark\) \\ \(\delta\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \(\varepsilon\) & \(\checkmark\) & (\(\checkmark\)) & (\(\checkmark\)) & \(\checkmark\) \\ \hline \end{tabular}
Proof & discussion. Consider the four methods individually.
(\(\alpha\)) In the method of delivery by officer of the court directly to \(rr\), the serving officer shows publicly certified documentation indicating \(\phi_{2}\). In the distributed setting such documentation consists of the public key of the officer (his or her ID) signed by the private key of \(cc\). Either way, (11) is achieved. The serving of _nn_ directly to \(rr\) means that \(cc\) knows the notice has been served, (12), and
\(rr\) knows that _cc_ knows that fact (ruling out any pretence by \(rr\) at ignorance), (13); finally it maintains privacy in the sense of (14).
(\(\beta\)) The method of registered post satisfies (11) because the system of registered post is itself publicly certified and the sender, _cc_, is revealed on delivery. (Formalisation of the system of registered post and verification that it is publicly certified requires no new concepts and is omitted.) Properties (12) and (13) hold because \(rr\) has to sign for receipt, a fact which is on record. (14) holds because the delivery is made to only \(rr\).
(\(\gamma\)) For the method of service by email, presumably \(nn\) contains reference to \(rr\) and is encrypted with both _cc_'s private key and \(rr\)'s public key. So \(rr\) knows the message originated with _cc_ and cannot have been read and relayed (as in a man-in-the-middle attack). So Properties (11) and (14) hold. Property (12) fails because of communication delays and failures. Since all users know that, the second-order (13) also fails.
(\(\delta\)) The method of newspaper publication meets none of (11) to (14). It may be thought of as a way of being seen publicly to be attempting to serve notice in the absence of any better method. \(\square\)
The properties of \(\varepsilon\) follow using the reasoning established in Lemma 2.
**Theorem 3**: (Service by NFT) _Method \(\varepsilon\), by NFT to an e-wallet, achieves Properties (11) to (14), with qualification against (12) and (13)._
Proof & discussion. Consider service by NFT--representing ownership by _cc_ of asset _nn_--to an e-wallet \(rr\). Now (11) holds due to the public certifiability of _cc_, \(\mathit{PC}(\mathit{Pop},\phi_{0})\), and properties \(\mathcal{N}\).
Condition (12), that _cc_ knows that \(rr\) has been served with _nn_, might be thought to fail for the same reasons it does with \(\gamma\): network communication is asynchronous and subject to uncertified failure. However in the case of \(\varepsilon\), communication is from e-wallet to e-wallet _via_ the blockchain _Net_. The reliability of such communications is publicly certified, as observed in Section 4.
A further qualification is necessary. An e-wallet, identified by a public key, is unique. But the identity of the owner(s) is in general unknown. In such cases it is the e-wallet itself which is being served (for instance in _D'Aloia v (1) Persons Unknown and (2) Binance Holdings Limited and others_, [11]), since there is no alternative when it is the anonymous owners of the e-wallet who are actually being served. With those qualifications, (12) holds. (Aside: supposing the e-wallet 'knows' of the service of notice, does any human?)
Condition (13), that \(rr\) knows that _cc_ knows, holds as above because communication on _Net_ is publicly certified. Condition (14), that privacy is preserved, holds as in \(\gamma\) assuming all those with access to the e-wallet \(rr\) are being served. \(\square\)
Ownership documented by an NFT and its public certifiability, provides a convenient way to ensure authority. In that case study, it is the authority of the court or its representative.
## 6 Conclusion
**In summary** this work has specified the idea of NFT by combining functional and epistemic properties: unique ownership of an asset and public certifiability _PC_ (common knowledge approximated to depth 2) of that ownership, \(\mathcal{N}\). It has formalised the blockchain implementation and shown it to conform to specification \(\mathcal{N}\), and shown that service of a legal notice can be reasoned about using those ideas.
**In conclusion** the combination of functional and epistemic properties seems appropriate, and has the advantage of being supported by algebraic reasoning without the need for semantic arguments.
**Further work** includes the wider application of _PC_ as a specification technique in community-based distributed systems. And investigation of what might be called SNFT, or _smart non-fungible tokens_, representing ownership which changes under certain conditions (a smart contract). A simple example would be an asset whose ownership requires an annual payment (like a personalised car number plate).
|
2309.01408 | Leveraging Self-Supervised Vision Transformers for Segmentation-based
Transfer Function Design | In volume rendering, transfer functions are used to classify structures of
interest, and to assign optical properties such as color and opacity. They are
commonly defined as 1D or 2D functions that map simple features to these
optical properties. As the process of designing a transfer function is
typically tedious and unintuitive, several approaches have been proposed for
their interactive specification. In this paper, we present a novel method to
define transfer functions for volume rendering by leveraging the feature
extraction capabilities of self-supervised pre-trained vision transformers. To
design a transfer function, users simply select the structures of interest in a
slice viewer, and our method automatically selects similar structures based on
the high-level features extracted by the neural network. Contrary to previous
learning-based transfer function approaches, our method does not require
training of models and allows for quick inference, enabling an interactive
exploration of the volume data. Our approach reduces the amount of necessary
annotations by interactively informing the user about the current
classification, so they can focus on annotating the structures of interest that
still require annotation. In practice, this allows users to design transfer
functions within seconds, instead of minutes. We compare our method to existing
learning-based approaches in terms of annotation and compute time, as well as
with respect to segmentation accuracy. Our accompanying video showcases the
interactivity and effectiveness of our method. | Dominik Engel, Leon Sick, Timo Ropinski | 2023-09-04T07:29:51Z | http://arxiv.org/abs/2309.01408v2 | # Leveraging Self-Supervised Vision Transformers for Neural Transfer Function Design
###### Abstract
In volume rendering, transfer functions are used to classify structures of interest, and to assign optical properties such as color and opacity. They are commonly defined as 1D or 2D functions that map simple features to these optical properties. As the process of designing a transfer function is typically tedious and unintuitive, several approaches have been proposed for their interactive specification. In this paper, we present a novel method to define transfer functions for volume rendering by leveraging the feature extraction capabilities of self-supervised pre-trained vision transformers. To design a transfer function, users simply select the structures of interest in a slice viewer, and our method automatically selects similar structures based on the high-level features extracted by the neural network. Contrary to previous learning-based transfer function approaches, our method does not require training of models and allows for quick inference, enabling an interactive exploration of the volume data. Our approach reduces the amount of necessary annotations by interactively informing the user about the current classification, so they can focus on annotating the structures of interest that still require annotation. In practice, this allows users to design transfer functions within seconds, instead of minutes. We compare our method to existing learning-based approaches in terms of annotation and compute time, as well as with respect to segmentation accuracy. Our accompanying video showcases the interactivity and effectiveness of our method.
transfer functions, volume rendering, deep learning
## I Introduction
Visualizing volumetric scientific data relies on a mapping of the underlying data to optical properties. In volume rendering, we call this mapping a _transfer function_ (TF) [1]. On scalar data, the simplest way to define a TF is by directly mapping the intensity of the input modality to optical properties, such as color and opacity. While such 1D TFs are simple to define and modify, they are inherently local and fail to extract semantically coherent regions that do not share a specific voxel value. Similarly, such simple TFs fail to separate different structures that share a value range.
A plethora of work improves on this by extending the input space of the TF to 2D, including gradient magnitude [2] or other possibly more complex local features [3, 4, 5], usually at the cost of increasing the complexity of the TF definition and the user interface. Another line of work proposes the collection of _annotations_ within slices, before training classifiers on the collected examples to predict which structures the remaining voxels belong to [6, 7, 8]. Such an approach keeps the TF definition and user interface simple, but typically comes at the cost of losing interactivity, as these approaches require fitting of the annotated data points and inference for the remaining volume, which is prohibitively slow for existing approaches [6, 7, 8]. As a result, these approaches feel more like a three-step process with an annotation phase, fitting & inference phase, and a viewing phase.
In this work, we adopt the annotation-driven TF design paradigm, but enable an interactive process that gives immediate feedback upon user annotations. To achieve this, we leverage the features of a self-supervised Vision Transformer (ViT) to identify structures matching the user's annotations. Such networks are trained on millions of images with the goal of learning meaningful representations for all kinds of different structures seen in those images. The sheer scale of the data and compute used in these pre-trainings leads to networks that produce meaningful features for all kinds of inputs [9], including scientific data like CT or MRI. As a result, these ViTs have been shown to perform very well in object discovery [10, 11] and generally learn representations that are easily discriminated [9]. Using the semantically relevant features from the ViT, we identify the remaining voxels of a structure using feature similarity to compute a similarity map \(\mathcal{S}\). This approach is fast and can even run on CPU while maintaining interactivity.
Utilizing these self-supervised pre-trained ViTs in the 3D domain brings several challenges that we address in our paper. First, these networks are trained on 2D data, so we need a strategy to extract meaningful features from 3D volumetric data. Second, as a result of the input patching in ViTs, the features we extract are of comparatively low resolution, which prohibits high visual quality when rendered directly. We address those issues by extracting features slice-wise along multiple axes, before merging the resulting 2D features into a 3D feature volume. To combat the low resolution, we propose a refinement step that increases the resolution of our similarity maps and adapts to the underlying intensity volume. To achieve this, we propose a 3D extension to the Fast Bilateral Solver [12].
In summary our method enables the following workflow: We start with a short pre-processing stage (\(\approx 2-3\) minutes) to extract the feature maps. After feature extraction our method is interactive and allows users to explore the volume structures through annotation. Once a structure of interest is fully discovered, users can enable the refinement step (\(\approx 2-3\) seconds) to increase the resolution and visual quality in the 3D rendering.
To achieve this, we make the following contributions:
* We propose a simple and fast, yet effective solution to leverage only neural network features to select and visualize volume structures from very few annotations.
* We enable an interactive annotation-guided transfer function design process with instant feedback after each annotation.
* To extract robust and discriminative features from volume data that serve as a basis for our annotation process, we leverage a frozen self-supervised Vision Transformer. We further propose a merging scheme to combine the extracted 2D feature maps into a 3D feature volume.
* We introduce a 3D extension to the Fast Bilateral Solver [12] for refinement of our annotated similarity volumes.
We make the source code to our approach publicly available.1
Footnote 1: URL will be added upon acceptance
## II Related Work
### _Transfer function design._
There has been a lot of work on designing transfer functions using different features, ranging from simple 1D transfer functions based on intensity [13] to 2D TFs based on gradients [2] or segmentation maps [14, 15]. For example, Hladuvka _et al_. [3] propose the use of curvature-based TFs, which is later built upon by Kindlmann _et al_. [16] and Hadwiger _et al_. [17]. Other works incorporate statistics about a voxel's local neighborhood [4] or local frequency distribution [5, 18, 19]. Another line of work uses dimensionality reduction to utilize high-dimensional features in common 1D or 2D widgets [20, 21, 22]. An extensive overview of these methods can be found in the survey by Ljung _et al_. [1].
### _Learning-assisted transfer functions._
The line of work on transfer functions most related to our approach deals with approaches that employ machine learning methods during the design process. Tzeng _et al_. [6] pioneered the idea of collecting annotations from the users to offload the classification to a machine learning model. In their work they propose to first let users annotate slices of raw data, before training simple models like small neural networks and support vector machines (SVM) to classify the acquired data. In a similar fashion, Soundararajan and Schultz [7] provide a comparison of different classifiers for such a framework. Specifically they compared Gaussian Naive Bayes, k Nearest Neighbor, SVMs, neural nets and Random Forests (RF), where they found Random Forests to perform best. As features to their model they combine voxel intensity, intensity of neighboring voxels, gradient magnitude and voxel position to a feature vector of length \(11\), for each voxel.
Zhou and Hansen [23] propose probing of volume data using slice annotations to automatically generate 2D transfer functions using kernel density estimation. They use dimensionality reductions to project multivariate data and let users control the transfer function through a 2D Gaussian widget and a parallel coordinates plot. In a later work [24], they further introduce selection using a lasso tool to probe the slice views.
De moura Pinto and Freitas [25] propose the first unsupervised method, Kohonen Maps, to reduce the dimensionality of the high-dimensional TF space to enable TF design through common widgets.
Later, Cheng _et al_. [8] proposed to train convolutional neural networks (CNN) to extract high-level features. The CNN is trained for voxel-wise classification, and its predictions are used as input to marching cubes to generate a geometry. The extracted features are further ordered, so that users could define TFs based on characteristic features in a 1D TF widget. Their approach, however, requires labeled volumes to train the CNN, which drastically increases the computational cost.
Hong _et al_. [26] train a generative adversarial network [27] to predict rendered views from a view point, a rendering from this viewpoint that uses a trivial density to opacity mapping, and a goal image that conveys the style of the rendering (i.e. the mapping aspect of the TF). This approach however needs to be trained very costly for each volume and can barely be considered interactive even when deployed on their 8-GPU multiprocessing node.
Compared to this prior work, our approach brings several advantages. In contrast to the proposed supervised approaches that require large amounts of labeled training data, we leverage the generalized feature extraction capabilities of self-supervised pre-trained models and require no further training. This saves both the time needed for extensive annotation and training time, while enabling off-the-shelf application on a wide range of domains. The annotation requirements in our approach are lightweight in comparison, since the only annotations we need are collected during the interactive transfer function design process, where the user brushes on the structures they would like to see in the rendering. Contrary to the annotation process of the other methods, our annotations are instantly followed up with feedback showing which structures were selected, eliminating the guess work for the amount of necessary annotations and the waiting time to evaluate the resulting selection.
### _Self-supervised pre-training._
Recently, several methods have made progress towards enabling the pre-training of vision models with unlabeled data [28, 29, 30, 31, 32, 33, 9]. Chen _et al_. [31] introduce an effective augmentation strategy to create multiple alternating versions of an image that are consequently fed through an encoder network and a projection head. Using this output, they compute a contrastive loss that learns to map images containing the same object closer together in the latent space. To tackle the problem of batch-size dependency for approaches of this kind, Caron _et al_. [29] propose an intermediate clustering of the latent representations by computing image codes and assigning them to cluster prototypes using the Sinkhorn-Knopp [39] algorithm. Following the proposal of Vision Transformers [40], Caron _et al_. [9] have introduced DINO, a self-supervised model trained with a student-teacher knowledge distillation process. In their publication, they discover that ViTs can learn
semantically-relevant structures in their intermediate features when pre-trained on unlabeled data with their method. In Section III, we detail how we exploit this property to propose our ViT-based transfer function. Contrary to contrastive approaches, Bao _et al_. [28] and He _et al_. [34] paved the way for self-supervised vision pre-training with masked-image-modeling approaches. In general, their approaches mask a portion of the input patches to the ViT and try to predict the masked patches and reconstruct the full input image, resulting in learned representations highly effective for model fine-tuning on several relevant tasks. Most recently, Assran _et al_. [41] have proposed an image-based joint-embedding predictive architecture (I-JEPA). Their approach provides the model with a context block, from which it is tasked to predict several target blocks in a single image. The learned representations have proven to be especially valuable for linear evaluations.
## III Method
An overview of our approach is illustrated in Figure 1. As a first step, our method extracts a feature volume \(\mathcal{F}\) using the pre-trained DINO ViT [9] during pre-processing. This takes around two to three minutes on a consumer GPU and only needs to be performed once for a given volume \(\mathcal{V}\). During transfer function design, this feature volume \(\mathcal{F}\) is sampled at the locations that the user annotates. The sampled feature vectors are then compared to the full feature volume using _cosine similarity_ to obtain a similarity volume \(\mathcal{S}_{\text{L}}\). When the user is satisfied with \(\mathcal{S}_{\text{L}}\), it can be further refined using our 3D bilateral solver to obtain a high resolution similarity volume \(\mathcal{S}_{\text{H}}\). The following subsections explain each of these steps, as well as the rendering procedure and user interface, in detail.
### _Feature Extraction_
Typically, transfer function design uses low-level and local features, like raw intensity, gradient magnitudes or local histograms. While these local features can be helpful in the separation of regions of interest, they lack semantic meaning and may fail to capture the entirety of a region, putting the burden on the user through difficult interaction. To combat this locality of the features, we propose the use of ViTs that by design relate different locations in the input to each other in their feature extraction. Specifically, we make use of self-supervised pre-trained ViTs.
In our method, we use the DINO [9] ViT to extract representations. This network is originally trained on the RGB image domain. In order to feed our volumetric data through this 2D network, we first slice the volume along its three principal axes, then we replicate the slices to RGB and input them separately to DINO to extract representations. The resulting 2D representations are then again merged to form the 3D feature volume \(\mathcal{F}\). In the following, we first detail exactly what features we retrieve from the network, before describing the 2D to 3D process.
Specifically, we make use of the attention mechanism in the DINO ViT. Within the self-attention layers of the ViT, the feature maps from the previous block are fed through three linear layers, producing the _key_ (\(K\)), _query_ (\(Q\)) and _value_ (\(V\)) maps. In the attention mechanism, the \(K\) and \(Q\) are used to compute the attention matrix \(A\) that determines the influence of the values \(V\) for a specific attention head, that is finally passed on to the next layer:
\[A=\text{softmax}(QK^{T}/\sqrt{d})\]
where \(d\) is the feature dimension of the \(Q,K,V\) maps divided by the number of heads in the attention layer. In our method
Fig. 1: **Method Overview. In the Feature Extraction Pre-Processing step, the volume data \(\mathcal{V}\) is _sliced_ along each axis and fed separately through the pre-trained DINO network. The resulting features are _merged_ into a feature volume \(\mathcal{F}\). Then, the user starts with Annotation in a slice viewer. Whenever the user annotates new voxels, we immediately Compute Similarity (blue highlights) of the annotated _samples_ (orange circles) with the feature volume \(\mathcal{F}\) (see Fig. 2 for a step-by-step visualization). With the immediate feedback, the user can focus on the few regions that are missing after the initial annotations. Once the user is satisfied with \(\mathcal{S}_{\text{L}}\), they can enable the _bilateral solver (BLS)_ as a Post-Process to obtain \(\mathcal{S}_{\text{H}}\) with increased resolution. The whole process typically takes less than one minute in practice and is repeated for each class. Please watch the supplemental video for a demonstration.**
we save the keys \(K\) of the last self-attention layer in the ViT as feature map, as they represent semantic features that are designed to be matched to queries, which is exactly what we intend to do.
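As an illustration, the key extraction for a single slice might be sketched as follows (PyTorch). The `torch.hub` entry point is the one published with DINO; the module path `blocks[-1].attn.qkv` refers to the authors' reference ViT implementation and is an assumption of this sketch, not a description of the exact code used here.

```python
import torch

# Load the published DINO ViT-S/8 (patch size 8, 384-dim features).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
model.eval()

captured = {}

def save_keys(module, inputs, output):
    # The qkv linear layer outputs (B, tokens, 3*dim); index 1 of the split is K.
    b, t, three_c = output.shape
    captured["k"] = output.reshape(b, t, 3, three_c // 3)[:, :, 1, :]

handle = model.blocks[-1].attn.qkv.register_forward_hook(save_keys)

with torch.no_grad():
    slice_rgb = torch.rand(1, 3, 640, 640)       # one volume slice replicated to RGB
    model(slice_rgb)
handle.remove()

keys = captured["k"][:, 1:, :]                   # drop the [CLS] token
side = 640 // 8                                  # 80 x 80 patch tokens
feature_map = keys.reshape(1, side, side, -1)    # (1, 80, 80, 384)
```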
In order to obtain the _feature volume_\(\mathcal{F}\), we slice the input volume \(\mathcal{V}\in\mathbb{R}^{W\times H\times D}\) along each principal axis and feed the slices separately through the ViT network. The resulting feature maps each have their un-sliced dimensions reduced by the patch size \(p\) of the ViT, while keeping the sliced dimension unchanged, resulting in:
\[\mathcal{F}_{X}\in\mathbb{R}^{W\times H/p\times D/p\times F},\] \[\mathcal{F}_{Y}\in\mathbb{R}^{W/p\times H\times D/p\times F},\] \[\mathcal{F}_{Z}\in\mathbb{R}^{W/p\times H/p\times D\times F}\]
In the following we call those reduced dimensions \(W/p=W^{\prime},H/p=H^{\prime}\) and \(D/p=D^{\prime}\). Having extracted the three stacks of feature maps, we need to merge them to one feature volume \(\mathcal{F}\). To obtain the merged \(\mathcal{F}\), these three features are first average pooled to the target dimensions and then averaged, resulting in a final resolution of \(\mathcal{F}\in\mathbb{R}^{W^{\prime}\times H^{\prime}\times D^{\prime}\times F}\) with \(F\) being the feature dimension, determined by the attention layers of the vision transformer.
Since the feature maps have their spatial resolutions reduced by the patch size of the ViT, the resulting feature resolution may be quite low, depending on the input size. To enable control over the final dimensions \(W^{\prime},H^{\prime},D^{\prime}\), we optionally up-sample the images before we feed them to the ViT. This lets us choose arbitrary feature dimensions, but is restricted by the available GPU memory, as larger inputs to the ViT result in higher memory usage. In practice, we resize input images to around \(640\times 640\), resulting in feature maps with a spatial dimension of \(80\), which has proven to be a sufficient granularity for many structures (compare Section IV-D).
In our approach, we use the DINO [9] ViT-S/8 network, which has a patch size of \(p=8\) and produces an \(F=384\)-dimensional feature vector for each voxel in the feature grid. We choose this network as it fits on a consumer GPU (RTX 2070, 8GB VRAM) and we can typically extract feature volumes of the size \(\mathcal{F}\ \in\ \mathbb{R}^{80\times 80\times 80\times 384}\). Larger transformer models like a ViT-B or ViT-L quickly require a prohibitive amount of GPU memory. They also typically come with an even larger patch size, thus decreasing the spatial resolution of the feature maps significantly. Similarly, newer models like DINOv2 [42] only come with larger patch sizes and are therefore not considered for practical reasons.
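A minimal sketch of the slice-and-merge step is given below. It assumes a helper `extract_slice_features(slice2d)` (for instance built on the hook shown above) that returns the reduced-resolution key map of one slice; the pooling and averaging follow the merge described in this subsection, and the target grid of \(80^{3}\) is only an example.

```python
import numpy as np
import torch
import torch.nn.functional as tnf

def extract_feature_volume(volume, extract_slice_features, target=(80, 80, 80)):
    """volume: (W, H, D) intensity array; extract_slice_features: slice -> (a', b', F)."""
    stacks = []
    for axis in range(3):                                        # slice along x, y, z
        slices = np.moveaxis(volume, axis, 0)
        feats = [torch.as_tensor(extract_slice_features(s)) for s in slices]
        stack = torch.stack(feats)                               # (n, a', b', F)
        stack = stack.permute(3, 0, 1, 2).unsqueeze(0)           # (1, F, n, a', b')
        stack = torch.movedim(stack, 2, 2 + axis)                # put sliced axis back in place
        stacks.append(tnf.adaptive_avg_pool3d(stack, target))    # (1, F, W', H', D')
    fused = torch.stack(stacks).mean(dim=0)                      # average the three stacks
    return fused.squeeze(0).permute(1, 2, 3, 0)                  # (W', H', D', F)
```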
### _Computing Similarity Maps_
After the feature volume \(\mathcal{F}\) is extracted and the user has made a first annotation (more details on the annotation interface in Section III-E), we compute how similar the annotated voxel is to each feature voxel in \(\mathcal{F}\). Intuitively, this can be thought of as querying the feature volume using singular features, closely matching the attention mechanism used during training of the network. Given a set of annotations \(\mathcal{A}^{\mathcal{C}}\ \in\ \mathbb{R}^{N\times 3}\) for class \(\mathcal{C}\), we compute the similarity as follows:
\[\mathcal{S}_{\text{L}}^{\mathcal{C}}(i)=\max\left(\frac{1}{|\mathcal{A}^{\mathcal{C}}|}\sum_{a\in\mathcal{A}^{\mathcal{C}}}\frac{\mathcal{F}_{a}\cdot\mathcal{F}_{i}}{\left\|\mathcal{F}_{a}\right\|_{2}\left\|\mathcal{F}_{i}\right\|_{2}},0\right)\quad\forall\,\mathcal{F}_{i}\in\mathcal{F} \tag{1}\]
where the resulting similarity \(\mathcal{S_{\text{L}}}^{\mathcal{C}}\in[0,1]^{W^{\prime}\times H^{\prime}\times D ^{\prime}}\) has the same spatial dimensions as \(\mathcal{F}\). This similarity computation is lightweight and only takes a few milliseconds on either CPU or GPU. This allows for immediate feedback to the user, thus we show an updated \(\mathcal{S}_{\text{L}}\) right after an annotation is placed, enabling an interactive annotation process, where the user can make informed decisions about where to place further annotations.
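In NumPy terms, Eq. (1) amounts to a normalized dot product between the annotated feature vectors and every cell of \(\mathcal{F}\), averaged over the annotations and clamped at zero; a minimal sketch:

```python
import numpy as np

def similarity_map(feature_volume, annotations):
    """feature_volume: (W', H', D', F); annotations: (N, 3) integer grid coordinates."""
    f = feature_volume / (np.linalg.norm(feature_volume, axis=-1, keepdims=True) + 1e-8)
    picked = f[annotations[:, 0], annotations[:, 1], annotations[:, 2]]   # (N, F)
    sim = np.einsum("whdf,nf->whdn", f, picked).mean(axis=-1)             # mean over annotations
    return np.maximum(sim, 0.0)                                           # clamp as in Eq. (1)
```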
Depending on the structure of interest, our similarity map may detect multiple occurrences of a structure within a volume, e.g. the two kidneys in a human CT, even when only one of them is annotated. This behavior follows directly from the global nature of the attention-based features. This aspect is especially useful to explore similar structures within a volume; however, it often prevents the selection of just a single occurrence. To combat this, the user can optionally enable a _connected components_ filter, which identifies the largest connected region in the similarity map using connected components [43], allowing the selection of more local structures if desired (see kidneys in Figure 1).
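A sketch of this optional filter, here using `scipy.ndimage` as an illustrative choice of implementation (the default connectivity of `ndimage.label` is used; a different structuring element could be passed for 26-connectivity):

```python
import numpy as np
from scipy import ndimage

def largest_component(similarity, iso_value):
    """Suppress all but the largest connected region above the class iso-value."""
    mask = similarity > iso_value
    labels, count = ndimage.label(mask)
    if count == 0:
        return similarity
    sizes = ndimage.sum(mask, labels, index=range(1, count + 1))
    keep = labels == (int(np.argmax(sizes)) + 1)
    return np.where(keep, similarity, 0.0)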
### _Post-Processing Similarity Maps_
As the initially computed low resolution similarity maps \(\mathcal{S}_{\text{L}}\) lack the voxel-precise details required for a high visual fidelity when rendering, we propose a post-processing refinement step to 1) up-sample the similarity map and 2) adapt it to the raw intensities in \(\mathcal{V}\). To achieve this, we implement a 3D version of the Fast Bilateral Solver (BLS) [12]. The BLS is an edge-aware smoothing technique, similar to a bilateral filter, that considers a separate reference image to determine the degree of smoothing. We extend the approach to 3D by adding a z-component to each vertex in the bilateral grid. We use the 3D BLS to smooth over our predicted similarity map, while respecting the edges of the underlying volume. Specifically, we first up-sample \(\mathcal{S}_{\text{L}}\) tri-linearly to match the resolution of \(\mathcal{V}\), then we crop the regions where \(\mathcal{S}>\tau\) to discard low-similarity regions, before solving for a smoothed \(\mathcal{S}_{\text{H}}\) using the according region from \(\mathcal{V}\) as reference for edge-awareness. As a threshold for cropping, we empirically choose \(\tau=0.25\).
Note that the spatial resolution of \(\mathcal{S}_{\text{H}}\) can be chosen anywhere between the resolution of \(\mathcal{F}\) and \(\mathcal{V}\), enabling a trade-off between resolution/quality and speed. We typically choose the resolution of \(\mathcal{S}_{\text{H}}\) at around \(512^{3}\), depending on the class and the actual size of the structure, as this determines the size of the crop and therefore the running time. Our current implementation of the solver runs on CPU and takes around \(2.5\) seconds to process a \(512^{3}\) volume on an Intel i7-8700K. Since this post-processing is only run once after all annotations are placed, we can maintain an interactive experience. The effect of this post-processing can be seen in the right two columns of Figure 2.
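The surrounding refinement pipeline can be sketched as follows; the 3D bilateral solver itself is kept as a placeholder argument (`bilateral_solver_3d`), since this sketch only illustrates the trilinear up-sampling, the cropping at \(\tau=0.25\) and the use of \(\mathcal{V}\) as edge-awareness reference, not our solver extension.

```python
import torch
import torch.nn.functional as tnf

def refine(similarity_low, volume, bilateral_solver_3d, tau=0.25, out_res=512):
    """Upsample S_L, crop where S > tau and run the (placeholder) 3D bilateral solver."""
    s = tnf.interpolate(similarity_low[None, None].float(), size=(out_res,) * 3,
                        mode="trilinear", align_corners=False)[0, 0]
    v = tnf.interpolate(volume[None, None].float(), size=(out_res,) * 3,
                        mode="trilinear", align_corners=False)[0, 0]
    idx = (s > tau).nonzero()                                   # bounding box of the crop
    lo, hi = idx.min(dim=0).values, idx.max(dim=0).values + 1
    crop = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
    s_high = torch.zeros_like(s)
    s_high[crop] = bilateral_solver_3d(target=s[crop], reference=v[crop])
    return s_high
```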
### _Rendering of Similarity Maps_
In order to visualize the volumetric data, we perform iso-surface raycasting on the similarity volumes \(\mathcal{S}\). During the interactive annotation, we only display \(\mathcal{S}_{\text{L}}\), which can then be switched to \(\mathcal{S}_{\text{H}}\) after post-processing when the annotation process is complete. The raycasting approach steps through the volume until the similarity is above the iso-value defined for the according class \(\mathcal{C}\). Once the similarity increases over the iso-value, we perform a binary search to find the exact intersection of the ray and the iso-surface. After the surface is found, we blend its color onto the output buffer using forward compositing, before continuing with the raycasting until an early ray termination threshold is reached. Each point on the surface is shaded using the Phong shading model, together with a shadow ray cast towards the light source.
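The per-ray logic of the iso-surface search can be illustrated with the following sketch; `sample(p)` stands for interpolation of the similarity volume at position `p` and is an assumed helper, and the step and refinement counts are illustrative values rather than those of the actual renderer.

```python
def find_isosurface(origin, direction, sample, iso, t_max, step=0.005, refinements=8):
    """March along a ray until `sample` exceeds `iso`, then binary-search the hit point."""
    t_prev, t = 0.0, step
    while t < t_max:
        if sample(origin + t * direction) >= iso:
            lo, hi = t_prev, t
            for _ in range(refinements):          # binary search between the last two samples
                mid = 0.5 * (lo + hi)
                if sample(origin + mid * direction) >= iso:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)                # ray parameter of the surface hit
        t_prev, t = t, t + step
    return None                                   # no intersection within t_max
```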
### _Annotation Interface_
The Annotation Interface is shown in Figure 2 and consists of a slice viewer for the three axes, as well as a canvas displaying the 3D rendering. The user can set annotations within the slice views, either by brushing lines or selecting individual points. After each annotation, all views are immediately updated, showing where previous annotations were set (orange points), as well as the current similarity map \(\mathcal{S}_{\text{L}}\) to indicate which regions are already well recognized. This allows the user to make an informed decision about where to put further annotations, enabling users to quickly mark all regions of interest with just a few annotations, typically less than 10 per class, resulting in a fast TF design process. Misplaced annotations can be removed using a delete brush.
In addition to the slice viewer and 3D rendering, the user has an interface that allows adding and removing classes. For each class, the user can select a color and opacity used for rendering, as well as a few parameters. These include a range slider to scale the range of the similarity maps overlaid in the slice viewers, as well as an iso-value used for the iso-surface rendering. Further, users have a checkbox to enable connected components [43] filtering to restrict the similarity map to a single connected region, as well as a checkbox to enable the 3D bilateral solver, i.e. the post-processing. With the bilateral solver come several parameters that are optionally configurable, namely \(\sigma_{\text{spatial}},\sigma_{\text{chroma}},\sigma_{\text{luma}}\) from the original
Fig. 2: **Annotation Interface. The user is presented with a slice viewer and a 3D rendering. Annotations can be either brushed using the mouse or set using individual points. After an annotation is set, the similarity map \(\mathcal{S}_{\text{L}}\) is computed and displayed (blue) together with the annotation positions (orange circles). The 3D view displays an iso-surface rendering of \(\mathcal{S}_{\text{L}}\). The similarity map informs the user where further annotations are required to fully segment the desired region. After just 3 annotations, the lung is mostly detected, and we can refine this result using the bilateral solver to obtain \(\mathcal{S}_{\text{H}}\).**
approach, which rarely need adjustment. Further, we show a contrast slider to increase the contrast of the underlying volume data \(\mathcal{V}\) before it is used in the bilateral solver. An increased contrast can improve results when dealing with structures that have very little contrast in the original data. The full interface can be seen in our accompanying video.
## IV Experiments
In the following subsections, we perform several experiments to evaluate our approach. First, we look at qualitative results, where we show results on different datasets and modalities, as well as a visual comparison with related work. Then we present a quantitative evaluation based on the CT-ORG [44] segmentation dataset, where we also compare our approach to related work. In those experiments, we show how our approach compares to other methods, even when using three orders of magnitude fewer annotations. Lastly we investigate the relevance of the resolution of the extracted feature volume \(\mathcal{F}\), as well as how much our refinement is dependent on a good initial similarity map \(\mathcal{S}_{\mathrm{L}}\).
For the comparisons, we re-implemented the best performing approaches by Soundararajan and Schultz [7], specifically their support vector machine (SVM) and random forests (RF). We chose this work for comparison, because it is reproducible due to their use of the classifiers by scikit-learn [45]. It is also the most related to our approach, as they actively collect annotations from slice views, similar to our approach. Note that since their approach relies on direct classification of voxels, it requires a background class. When using our interactively collected annotations in their approach, we additionally draw samples at random from the background, matching the number of annotations of our most annotated class.
We also apply our approach to animal scans, as shown in Figures 5, 6 and 8.
Results for the Bonsai and Tooth dataset are also reported by Soundararajan and Schultz [7]. Since they require thousands of annotations, we could not feasibly reproduce their exact results here for a direct comparison, however they can be viewed in their work. When using their approach with the few annotations we require, all their models fail to produce a meaningful result, as the surrounding air is falsely predicted to belong to one of the classes, occluding any structure of interest.
As can be seen in these figures our approach manages to define meaningful transfer functions from just very few annotations and works for a variety of structures and modalities.
### _Visual Comparison to Soundararajan et al. [7]_
We compare our approach to the aforementioned SVM and RF approaches on the CT-ORG [44] dataset. This dataset has high-resolution CT scans of human torsos, as well as ground truth segmentations for the liver, bladder, lung, kidney and bones. Figure 7 compares the ground truth segmentation to our approach using on average \(\bar{\mathcal{A}}=5.2\) annotations per class, as well as results from Soundararajan _et al._[7]. For their approach, we show the models trained with \(8192\) samples per class, as this large amount of annotations produced the best results for their approach. When using just the \(\bar{\mathcal{A}}=5.2\) annotations per class that we use for our approach, their methods fail to produce a meaningful result. In order to obtain this large number of annotations to train their approach, we randomly sample \(8192\) annotations per class from the ground truth labels. In Figure 7 their methods use around \(1500\times\) the amount of annotations compared to ours.
### _Quantitative Comparison to Soundararajan et al. [7]_
We further compare our method quantitatively to the SVM and RF approach by Soundararajan _et al._[7] on the CT-ORG [44] dataset. This experiment reports segmentation metrics that match the visual results in Figure 7. To compute such metrics, we need to convert our similarity maps \(\mathcal{S}_{\text{H}}\) to classification decisions for each voxel. For this, we threshold the similarity maps for each class using the iso-value used for rendering, and in the case that a voxel would be assigned multiple classes, we choose the one with the highest similarity value.
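A sketch of this conversion (threshold each class at its rendering iso-value, then resolve overlaps with an argmax over the remaining similarities):

```python
import numpy as np

def similarities_to_labels(similarity_maps, iso_values):
    """similarity_maps: list of C arrays (W, H, D); iso_values: C thresholds.
    Returns integer labels in {0..C}, where 0 is background."""
    sims = np.stack(similarity_maps)                       # (C, W, H, D)
    iso = np.asarray(iso_values)[:, None, None, None]
    masked = np.where(sims >= iso, sims, -np.inf)          # discard sub-threshold classes
    labels = np.argmax(masked, axis=0) + 1                 # most similar remaining class
    labels[np.all(sims < iso, axis=0)] = 0                 # nothing above threshold -> background
    return labels
```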
Table I shows results for the Precision, Recall, F1-Score and Intersection over Union (IoU) for the different classes using our set of interactively collected annotations.
Table II further shows results for an increasing amount of samples for the SVM and RF approach. **Ours** in this table still only uses the \(\bar{\mathcal{A}}=5.2\) annotations per class, and the table shows that our approach is superior to the classifier-based approach even when they receive an unreasonably large amount of annotations.
Figure 9 further shows how our approach performs in terms of mean IoU, compared to the increasing amount of annotations used to train the RF and SVM.
### _Impact of feature volume resolution_
As described in Section III-A, we can control the resolution of the feature volumes \(\mathcal{F}\) that we extract from the ViT. By
Fig. 8: **Qualitative Results for the Jarv (wolverine) dataset.**
Fig. 7: **Visual Comparison to the SVM and RF approach by Soundararajan _et al._[7] on CT-ORG. This visualization matches the predictions in Table II and shows the RF and SVM with 8192 training samples per class, while Ours only uses the interactively collected annotations (on average \(\bar{\mathcal{A}}=5.2\) annotations per class).**
resizing the slices fed into the network, the resulting feature resolution can be increased at the cost of increased computational demand and memory footprint. Generally, a higher resolution feature map allows for more granularity in the initial similarity maps \(\mathcal{S}_{\text{L}}\), and could allow for better detection of fine structures. In order to understand the importance of the resolution of \(\mathcal{S}_{\text{L}}\), we annotate the ribs in the CT-ORG dataset with 9 annotations and compute similarity maps from feature volumes of different resolutions. We then tune similarity thresholds individually, before applying the bilateral solver for refinement. Figure 10 shows renderings of the resulting similarity maps for features of resolution \(64^{3}\), \(80^{3}\) and \(96^{3}\) and the corresponding refined similarities.
### _Dependency of the Refinement on the initial similarity \(\mathcal{S}_{\text{L}}\)_
To better understand the importance of the refinement step and its dependency on its input, the initial low resolution similarity \(\mathcal{S}_{\text{L}}\), we produce an initial similarity map with insufficient annotation. This similarity map only captures the center region of the liver in the CT-ORG [44] dataset, as shown in Figure 11. We now apply the refinement step on this incomplete similarity map to find out if the bilateral solver can complete the structure, and therefore if it can compensate for a lack of detection in \(\mathcal{S}_{\text{L}}\).
## V Discussion
### _Visual Results_
As shown in Figures 3-8 our approach is able to design meaningful transfer functions using only a few annotations. Our method could separate different structures well and works on different kinds of data, like CT and MRI scans of very different objects. Some structures show small visual artifacts, caused by the iso-surface rendering of not fully completed structures, as described in Section V-D.
### _Segmentation Performance_
In order to get a quantitative measure of our method's performance, we applied it to the CT-ORG dataset, which has segmentation ground truth that we can use to compute segmentation metrics. Tables I and II show that our method is well able to detect the five different types of organs with only a few annotations. Organs like the bladder and kidney that have quite similar densities in a CT scan were the most difficult to segment. We also found that while the bone class looks very well segmented in the resulting renderings, its recall was comparatively low. This is due to the fact that our approach most strongly recognized the outer surface of the bones and misses some voxels inside of the bones, which has no visual impact. Furthermore, our method did not detect the intervertebral discs of the spine, which are included in the ground truth. We did not, however, annotate those when designing the transfer function.
Compared to the SVM and RF proposed by Soundararajan _et al._[7] we find our segmentation performance favorable, even when increasing the amount of annotations for the SVM and RF by three orders of magnitude. Figure 9 shows that the SVM and RF approaches improve with an increased amount of annotations, although they plateau well below our mean IoU of \(0.865\). The SVM and RF approach are also quite slow in comparison, as summarized in Table III.
### _Impact of feature volume resolution_
As shown in Figure 10, the resolution of \(\mathcal{F}\) has a visible impact on the un-refined similarity maps. We can see that higher feature resolutions produce fewer visual artifacts in the form of blockiness. However, all of the similarity maps managed to capture enough of the ribs that the refinement step is able to completely select them in all cases, leaving the final refined results very similar. This makes clear that very high resolution feature maps are not necessary to obtain voxel-precise predictions. We found that as long as a structure is detected in \(\mathcal{S}_{\text{L}}\), the refinement step can typically extract the structure of interest and is not very sensitive to the resolution
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \(\tilde{\mathcal{A}}\) & Method & Accuracy & Precision & Recall & F1 & mIoU \\ \hline \multirow{4}{*}{5.2} & **Ours** & **0.988** & **0.961** & 0.892 & **0.923** & **0.865** \\ & SVM & 0.669 & 0.181 & 0.405 & 0.180 & 0.139 \\ & RF & 0.722 & 0.329 & 0.509 & 0.296 & 0.218 \\ \hline \multirow{4}{*}{512} & SVM & 0.708 & 0.332 & 0.844 & 0.386 & 0.272 \\ & RF & 0.827 & 0.435 & 0.917 & 0.543 & 0.398 \\ & SVM & 0.724 & 0.340 & 0.859 & 0.399 & 0.283 \\ & RF & 0.855 & 0.472 & 0.931 & 0.587 & 0.440 \\ & SVM & 0.750 & 0.356 & 0.883 & 0.425 & 0.306 \\ & RF & 0.870 & 0.512 & 0.943 & 0.630 & 0.483 \\ & SVM & 0.774 & 0.372 & 0.895 & 0.448 & 0.326 \\ & RF & 0.888 & 0.541 & 0.952 & 0.660 & 0.515 \\ & SVM & 0.796 & 0.390 & 0.909 & 0.473 & 0.348 \\ & RF & 0.901 & 0.567 & **0.962** & 0.686 & 0.545 \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Segmentation Metrics by Annotation Amount.**\(\tilde{\mathcal{A}}\) denotes the number of annotations per class. We compare our method on CT-ORG with \(\tilde{\mathcal{A}}=5.2\) interactively collected annotations to the SVM and RF approach by Soundararajan _et al._[7] using varying amounts of annotations.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Metric & Method & Liver & Bladder & Lung & Kidney & Bone \\ \hline \multirow{3}{*}{Precision} & **Ours** & **0.946** & **0.914** & **0.989** & **0.942** & **0.985** \\ & SVM & 0.0 & 0.019 & 0.0 & 0.0 & 0.134 \\ & RF & 0.173 & 0.034 & 0.608 & 0.033 & 0.157 \\ \hline \multirow{3}{*}{Recall} & **Ours** & **0.927** & 0.887 & **0.938** & **0.745** & **0.856** \\ & SVM & 0.0 & **0.999** & 0.0 & 0.0 & 0.707 \\ & RF & 0.046 & 0.953 & 0.378 & 0.124 & 0.781 \\ \hline \multirow{3}{*}{F1 Score} & **Ours** & **0.936** & **0.900** & **0.963** & **0.832** & **0.916** \\ & SVM & 0.0 & 0.038 & 0.0 & 0.0 & 0.225 \\ & RF & 0.073 & 0.066 & 0.466 & 0.053 & 0.262 \\ \hline \multirow{3}{*}{IoU} & **Ours** & **0.881** & **0.819** & **0.929** & **0.712** & **0.845** \\ & SVM & 0.0 & 0.019 & 0.0 & 0.0 & 0.127 \\ & RF & 0.038 & 0.034 & 0.304 & 0.027 & 0.151 \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Segmentation Metrics by class on CT-ORG.** We compare to the SVM and RF method by Soundararajan _et al._[7] using the annotations gathered during interactive annotation. On average, each class has \(\tilde{\mathcal{A}}=5.2\) annotations. Both the SVM and RF predict most voxels to belong to the Bladder or Bone class.
of \(\mathcal{S}_{\text{L}}\). In practice that enables our method to be useful on consumer GPUs, as 8GB of VRAM suffice to extract features of resolution \(80^{3}\), whereas higher resolutions would quickly demand a prohibitive amount of VRAM to extract.
### _Dependency of the Refinement on the initial similarity \(\mathcal{S}_{\text{L}}\)_
Inspecting the results of Figure 10 raises the question how much our refinement step, the bilateral solver, is actually dependent on our initial similarity maps \(\mathcal{S}_{\text{L}}\). In our testing we found that the refinement step is typically good at aligning the low resolution feature maps to the raw input, especially at the borders of the structure, while not being able to complete structures far beyond what is detected in \(\mathcal{S}_{\text{L}}\). If a structure is not sufficiently detected in \(\mathcal{S}_{\text{L}}\), the refinement step is unable to complete the structure. This is illustrated in Figure 11, where we refine a similarity map that has insufficient annotations and misses the borders of the liver. The refined similarity falls off smoothly towards the liver surface, resulting in block artifacts when rendered as an iso-surface. The block artifacts arise from the \(\sigma\) parameters of the bilateral solver, that control the window used for blurring. We find that those block artifacts only occur, when the structure is not sufficiently detected in the low resolution similarity map. We conclude that the refinement step requires a sufficient detection of the full structure already in the low resolution similarity map, in order to produce surfaces without artifacts. This highlights the importance of both the initial similarities \(\mathcal{S}_{\text{L}}\) and the refinement step.
### _Limitations_
One limitation is that our pre-processing step, the feature extraction, can be quite memory intensive. Vision transformers require lots of memory, especially when we try to achieve high resolutions for \(\mathcal{F}\). To obtain a certain feature resolution, the input to the ViT must be scaled by the patch size. In practice this quickly exceeds the memory budget on consumer GPUs, as all the feature maps need to be saved for all three slicing directions and lastly be pooled to the desired feature size. While we have shown that our approach does not heavily rely on high resolutions of \(\mathcal{S}_{\text{L}}\), this high memory requirement also prevents us currently from using larger transformer models,
Fig. 11: **Refinement Artifacts with insufficient \(\mathcal{S}_{\text{L}}\). Our refinement step requires an input similarity map \(\mathcal{S}_{\text{L}}\) with sufficient detection of the relevant structures to be effective, and results in block artifacts otherwise.**
Fig. 10: **Comparison of Feature Resolutions. Top row shows un-refined similarity maps at the given resolution, bottom row shows the results after refinement.**
Fig. 9: **Intersection over Union on CT-ORG. We compare the IoU of our approach using the interactively collected annotations (\(\bar{\mathcal{A}}=5.2\)) with the SVM and RF approach by Soundararajan _et al._[7]. Our approach has superior IoU with just \(5.2\) annotations per class on average, even compared to thousands of annotations for SVMs and RFs.**
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Pre-Processing & Training & Inference \\ \hline
**Ours** & 180s & **0s** & **0.8s** \\
**Ours** +BLS & 180s & **0s** & 12.4s \\ SVM 5.2 & **0s** & 0.001s & 323s \\ SVM 512 & **0s** & 0.06s & 5646s \\ SVM 4096 & **0s** & 9s & 71500s \\ RF 5.2 & **0s** & 0.03s & 172s \\ RF 512 & **0s** & 0.14s & 192s \\ RF 4096 & **0s** & 2s & 432s \\ \hline \hline \end{tabular}
\end{table} TABLE III: **Time Measurements. Numbers reported on CT-ORG with \(5\) classes. Our approach requires feature extraction once in the beginning, but needs no training and is inferred in \(160\)ms per class during annotation, followed by a \(2.5\)s post-processing, whereas other approaches require inference times in the minutes when used with sufficient samples.**
like the ViT-B or ViT-L or transformer models with higher patch sizes.
We further found that when selecting a structure within a volume, our method may also recognize further structures of similar appearance that we do not want to select. An example of this is the bladder in the CT-ORG dataset. When annotated, other structures like the kidneys or surrounding tissue are often deemed similar, which is a common problem for many approaches, due to the similar intensities in a CT. While we can circumvent this to some extent by placing more annotations in the actual region of interest, this requires precise tuning of the thresholds for the similarity map. We also implemented the option to use a connected components filter to discard disconnected components that are falsely detected to combat this problem, which works well for separated structures, like the two kidneys (compare Figure 1), but fails when the structures to be separated are too close to each other.
Lastly we find that when structures cannot be perfectly detected at their surfaces, the resulting renderings may show the block artifacts described in Section V-D.
### _Future Work_
In the future, we see several additions and improvements to an approach like ours. Firstly, the use of larger pre-trained transformers, as well as the option to retrieve higher resolution feature maps, would probably improve the method's performance significantly.
Another interesting direction to look into is using neural nets that are pre-trained to learn joint image and text embeddings, like CLIP [46], BLIP [47] or OpenCLIP [48]. Those networks are trained to produce similar features for images and matching text, and could enable our approach to use natural language queries to select structures as part of the transfer function design process, in addition to spatial annotations.
## VI Conclusion
To conclude, we have presented a novel method for transfer function design, leveraging self-supervised pre-trained Vision Transformers. We show that the features of such a network can be used to design transfer functions by querying the feature map by singular feature vectors obtained through annotation. By giving the user immediate feedback on the obtained similarities for the current set of annotations, users can easily find regions that require further annotation to ultimately reduce the need for a large number of annotations. This enables users to create transfer functions for a structure of interest in seconds to minutes, and hence allows for quick visualization and exploration of volume datasets. In comparison to prior machine learning based transfer function approaches, our interface and annotation process is kept to a minimum, and we can avoid actually training a model, by just utilizing the features of the pre-trained network. Further, our method is quick enough to design transfer functions interactively, without requiring a separate annotation phase. To increase the visual quality of rendering our similarity maps, we propose a 3D extension to the fast bilateral solver [12] that lets us up-sample similarity maps to a high resolution. Our approach can be easily extended in the future through the use of newer and larger networks, or even networks that produce features that can be queried by natural language.
## Acknowledgments
The annotation interface is implemented in the Inviwo [49] visualization framework, and renderings were produced using Inviwo.
|
2308.11532 | A free from local minima algorithm for training regressive MLP neural
networks | In this article an innovative method for training regressive MLP networks is
presented, which is not subject to local minima. The Error-Back-Propagation
algorithm, proposed by William-Hinton-Rummelhart, has had the merit of
favouring the development of machine learning techniques, which has permeated
every branch of research and technology since the mid-1980s. This extraordinary
success is largely due to the black-box approach, but this same factor was also
seen as a limitation, as soon more challenging problems were approached. One of
the most critical aspects of the training algorithms was that of local minima
of the loss function, typically the mean squared error of the output on the
training set. In fact, as the most popular training algorithms are driven by
the derivatives of the loss function, there is no possibility to evaluate if a
reached minimum is local or global. The algorithm presented in this paper
avoids the problem of local minima, as the training is based on the properties
of the distribution of the training set, or better on its image internal to the
neural network. The performance of the algorithm is shown for a well-known
benchmark. | Augusto Montisci | 2023-08-22T15:59:25Z | http://arxiv.org/abs/2308.11532v1 | # A free from local minima algorithm for training regressive MLP neural networks
###### Abstract
In this article an innovative method for training regressive MLP networks is presented, which is not subject to local minima. The Error-Back-Propagation algorithm, proposed by William-Hinton-Rummelhart, has had the merit of favouring the development of machine learning techniques, which has permeated every branch of research and technology since the mid-1980s. This extraordinary success is largely due to the black-box approach, but this same factor was also seen as a limitation, as soon more challenging problems were approached. One of the most critical aspects of the training algorithms was that of local minima of the loss function, typically the mean squared error of the output on the training set. In fact, as the most popular training algorithms are driven by the derivatives of the loss function, there is no possibility to evaluate if a reached minimum is local or global. The algorithm presented in this paper avoids the problem of local minima, as the training is based on the properties of the distribution of the training set, or better on its image internal to the neural network. The performance of the algorithm is shown for a well-known benchmark.
Multi Layer Perceptrons, Training algorithm, Local minima, Internal representation, Non-aggregate loss function
## 1 Introduction
Even though Machine Learning (ML) includes a multiplicity of paradigms, many of them quite different from one another, most of them can be considered an evolution of the Error Backpropagation (EBP) algorithm for the Multi Layer Perceptron (Rumelhart et al. (1986)). The merit of this algorithm is that, for the first time, it made it possible to train networks with an intermediate layer, and therefore to reproduce non-linear input-output relationships. This was as true for classification problems as it was for regression problems. Concerning the latter, Cybenko (1989) had demonstrated that a Perceptron with a single hidden layer is a universal approximator, however leaving open the problem of determining both the number of neurons needed to solve a specific problem and how to determine the connection weights. The EBP algorithm offered a tool to determine the values of the parameters, while the determination of the optimal number of neurons is still an open problem. Subsequently, methods have been presented to address both questions (Delogu et al. (2008), Ploj (2014), Ploj et al. (2014), Fernandez-Delgado et al. (2014), Fernandez-Delgado et al. (2011), Ploj et al. (2011), Curteanu and Cartwright (2011), Carcangiu et al. (2009a)), but the EBP paradigm, with an important set of variations, still represents the standard of machine learning. This paradigm consists in finding the minimum of a loss function, which is typically given by the output
mean squared error with respect to the target value. Many minimization techniques, including some developed in contexts other than ML, have been proposed to solve this problem, but the standard is represented by first and second order descent methods (Marquardt (1963)), a category to which the EBP itself belongs. First order algorithms, such as EBP, have made a comeback with the advent of Deep Learning (LeCun et al. (2015)), as the huge number of parameters makes second order methods impractical, even in cases in which approximate expressions of the Hessian are used. The methods based on the derivatives of the loss function have the advantage of being simple to implement, but lack a criterion that allows one to establish whether a stationary point of the loss function represents a global minimum or not. However, there is another limitation that derives from this type of black-box approach: considering the loss function as the only indicator of network performance makes it impossible to distinguish the different roles of the individual parts of the network, as well as the relevance of individual examples of the training set.
As a premise to the description of the algorithm presented in the next sections, it is worth rethinking the structure of the MLP network, going beyond the black-box interpretation. First of all, the need for the hidden layer derives from the fact that the input-output relationship is nonlinear. If this were not the case, a linear hidden layer would be completely useless, because the cascade of the two connection layers would be entirely equivalent to their product. Furthermore, since the algebraic relationship between the output of the hidden layer, called the _Feature Space_ (Cortes and Vapnik (1995)), and the output of the neural network is linear, the function of the hidden layer can be seen as making the relationship with the output linear. In other words, for each output neuron, the points of the training set in the product space \(\{\text{feature space}\}\times\{\text{output space}\}\) must be coplanar. In fact, any residual nonlinearity downstream of the feature space could not be corrected by the last layer of connections. Hence there is also no reason to include in the loss function the weights of the connections preceding the output: given the points in the feature space, the optimal set of weights is the one that corresponds to the linear regression plane. Another important consideration concerns the different roles of the hidden layer and the output layer. While the former represents the degrees of freedom of the network, because as the size of the feature space increases it becomes easier to make the points of the training set coplanar in the product space \(\{\text{feature space}\}\times\{\text{output space}\}\), the latter constitutes a constraint, because the same distribution in the feature space must satisfy the coplanarity condition with respect to different outputs.
Figure 1: Multilayer Perceptron scheme
This paper presents a training algorithm based on this interpretation of the algebraic structure of the MLP. For the sake of simplicity, and without prejudice to the general validity of the results, this article treats the case of regressive MLP networks with only one output (MISO). The paper is structured as follows: in the first section, the analytical basis of the procedure is presented. The second section briefly describes the choices adopted for implementation in the Matlab environment. The third shows the results obtained with a well-known benchmark. At the end, conclusions are provided.
## 2 Description of the algorithm
\[\begin{cases}\mathbf{W}\cdot\mathbf{x}+\mathbf{d}=\mathbf{y}\\ \mathbf{h}=\sigma(\mathbf{y})\\ \mathbf{V}\cdot\mathbf{h}+\mathbf{b}=\mathbf{u}\end{cases} \tag{1}\]
In this section the training algorithm will be described, referring to the scheme shown in Fig. 1. The choice of having only one output serves to simplify the discussion, but does not affect the generality of the problem. The MLP network implements an algebraic structure represented by the system of equations (1), where \(\boldsymbol{x}\) is the input to the network, \(\boldsymbol{W}\) is the weight matrix of the first connection layer, \(\boldsymbol{d}\) is the bias of the first layer, \(\boldsymbol{y}\) is the input vector to the hidden layer, \(\sigma\) is the activation function of the hidden layer, \(\boldsymbol{h}\) is the image of the input in the feature space, \(\boldsymbol{V}\) is the vector of the weights of the connections with the output, and \(\boldsymbol{b}\) is the bias of the second layer. The first layer of connections linearly transforms the points from the input space of the network into the input space of the hidden layer. The hidden layer activation function is the only nonlinearity of the network. Through the first two transformations of (1), the distribution of the training set points in the input space is transformed into a distribution in the feature space. The first layer of connections must ensure that the points of the training set in the product space \(\{\text{feature space}\}\times\{\text{output space}\}\) are all coplanar. If this does not happen, the second layer of connections will not be able to compensate for these errors. The least-squares solution is the regression plane. For this reason, even if a conventional training algorithm is applied, it is not convenient to include the weights of the second layer of connections within the loss function. In fact, it is better to define the loss as a function of the weights of the first layer and, with this frozen, to define the second layer by calculating the regression plane.
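As an illustration of system (1), and of the fact that the output layer is fully determined once the first one is frozen, a minimal NumPy sketch is given below (the implementation described later in the paper is in Matlab; tanh is used here only as an example activation):

```python
import numpy as np

def forward(W, d, V, b, X, act=np.tanh):
    """System (1) for a batch X of inputs (rows): returns the network outputs u."""
    h = act(X @ W.T + d)          # image of the inputs in the feature space
    return h @ V + b

def output_layer_by_regression(W, d, X, u, act=np.tanh):
    """With the first layer frozen, V and b are given by the regression plane in the feature space."""
    h = act(X @ W.T + d)
    A = np.hstack([h, np.ones((len(X), 1))])              # augment with a bias column
    sol, *_ = np.linalg.lstsq(A, u, rcond=None)           # least-squares fit
    return sol[:-1], sol[-1]                              # V, b
```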
The algorithm presented in this paper, instead of directing the search towards the minimization of a loss function, aims to construct the feature space in such a way as to ensure the coplanarity of the points, or equivalently that the level curves of the function in the input space are transformed into straight lines in the feature space (Fig. 2).
The condition for linearization can be expressed analytically as follows:
\[\mathbf{s^{T}}\cdot\sigma(\mathbf{W}\cdot\mathbf{x_{i}}+\mathbf{d})=\mathit{a }\cdot\mathbf{u_{i}}\quad\forall\mathit{i}=1\ldots\mathit{N} \tag{2}\]
where \(\boldsymbol{s}\) is a vector of linear combination coefficients of the points in the feature space, while \(\mathit{a}\) is a scalar of arbitrary value, which determines the spacing of the level lines, and consequently the slope of the regression plane. Finally, \(\mathit{N}\) is the number of points in the training set. Dividing equation (2) by \(\mathit{a}\) eliminates the uncertainty of the solution:
\[\mathbf{\hat{s}^{T}}\cdot\sigma(\mathbf{W}\cdot\mathbf{x_{i}}+\mathbf{d})=\mathbf{u_ {i}}\quad\forall i=1\ldots N \tag{3}\]
where \(\mathbf{\hat{s}}\) is the normalized variable. Globally, equation (3) is a system of \(N\) nonlinear equations in the variables \(\boldsymbol{W}\), \(\boldsymbol{d}\), \(\mathbf{\hat{s}}\). Fixing the values of \(\boldsymbol{W}\) and \(\boldsymbol{d}\), the same system becomes an overdetermined system of linear equations in the single variable \(\mathbf{\hat{s}}\).
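For fixed \(\boldsymbol{W}\) and \(\boldsymbol{d}\), the least-squares solution of (3) is a single call to a linear solver; a NumPy sketch (tanh used as an example activation, since the paper does not fix a specific one):

```python
import numpy as np

def solve_s_hat(W, d, X, u, act=np.tanh):
    """Least-squares solution of system (3) with W and d frozen."""
    H = act(X @ W.T + d)                               # N x n_hidden feature matrix
    s_hat, *_ = np.linalg.lstsq(H, u, rcond=None)      # minimises ||H s_hat - u||
    return s_hat
```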
The proposed algorithm starts from a first order approximation of the left-hand side of equation (3). Let \(\boldsymbol{W}_{0}\), \(\boldsymbol{d}_{0}\), \(\mathbf{\hat{s}}_{0}\) be an initial solution of the system (3), and \(\delta\boldsymbol{W}\), \(\delta\boldsymbol{d}\), \(\delta\mathbf{\hat{s}}\) the respective increments of the three variables that provide the solution:
\[(\mathbf{\hat{s}_{0}}+\delta\mathbf{s})^{T}\cdot\sigma[(\mathbf{W_{0}}+ \delta\mathbf{W})\cdot\mathbf{x_{i}}+(\mathbf{d_{0}}+\delta\mathbf{d})]= \mathbf{u_{i}}\quad\forall i=1\ldots N \tag{4}\]
Now replace the left-hand side of the equation with a first-order approximation of the incremented function:
\[(\mathbf{\hat{s}_{0}}+\delta\mathbf{s})^{T}\cdot[\sigma_{\mathbf{0}}+\nabla \sigma(\mathbf{W},\mathbf{d})\odot\delta_{\mathbf{W},\mathbf{d}}\cdot\mathbf{ 1}]\approx\mathbf{u_{i}}\quad\forall i=1\ldots N \tag{5}\]
where \(\boldsymbol{\sigma_{0}}\) is the vector in the feature space corresponding to a zero increment, \(\nabla\sigma(\boldsymbol{W,d})\) is the gradient calculated with respect to the weights of the first layer connections, \(\delta_{\boldsymbol{W,d}}\) is the increment of the variables, \(\odot\) is the elementwise multiplication, \(\mathbf{1}\) is a column vector of only \(1\)s. Reordering the equation and neglecting the infinitesimals of higher order we get:
\[\sigma_{\mathbf{0}}^{T}\cdot\delta\mathbf{s}+\mathbf{\hat{s}_{0}^{T}}\cdot \nabla\sigma(\mathbf{W},\mathbf{d})\odot\delta_{\mathbf{W},\mathbf{d}}\cdot \mathbf{1}\approx\mathbf{u_{i}}-\mathbf{\hat{s}_{0}^{T}}\cdot\sigma_{ \mathbf{0}}\quad\forall i=1\ldots N \tag{6}\]
The equation (6) defines a linear, overdetermined system of equations whose unknowns are the increments of the parameters. By combining (3) with (6) it is possible to implement an iterative procedure which provides the weights of the first layer of connections. Taking into account the fact, as previously indicated, that the second layer of connections is uniquely determined once the weights of the first layer have been assigned, this iterative procedure actually completes the training procedure.

Figure 2: Linearization performed by the hidden layer
The procedure takes place as follows. Given the initial values of the weights of the first layer of connections, the system (3) is solved for the single variable \(\boldsymbol{\hat{s}}\). Since the system is overdetermined, it can only be solved by minimizing the root mean square error (Jodar et al. (1991)). Using the obtained solution, together with the same first-layer weights used previously, the coefficients and the known term of the system (6) are computed. This system is also overdetermined, and therefore it too must be solved in the least-squares sense. Of the solution of system (6), only the increments of the first-layer weights \(\delta_{\boldsymbol{W,d}}\) are used, while \(\boldsymbol{\hat{s}}\) is updated by solving the system (3) again after its coefficients have been recomputed from the new values of \(\mathbf{W}\) and \(\mathbf{d}\). The iterative process ends when the error in the system (3) falls below a pre-set threshold. Alternatively, since the second layer of weights is uniquely determined by the first, the error of the current solution can be computed directly on the neural network output. Since a first-order approximation is being used, the question remains open as to the appropriate step length to take at each iteration. For this reason, the Line Search method is adopted (Bazaraa et al. (2013)): the segment from the current solution to the incremented one is sampled, and the output error is evaluated at each point. At each iteration, the assigned increment is the one that corresponds to the minimum error.
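A minimal NumPy sketch of a single iteration of this procedure is given below. The shapes, the tanh activation, the coarse 20-point step grid, and all helper names are illustrative assumptions; in particular, the paper's 1000-point logarithmic line search and its bisection refinement are reduced here to their essentials.

```python
import numpy as np

def iterate_once(X, u, W, d, sigma=np.tanh, dsigma=lambda z: 1.0 - np.tanh(z) ** 2):
    N, n_in = X.shape
    n_h = W.shape[0]
    Z = X @ W.T + d
    H = sigma(Z)                                          # coefficients of system (3)
    s_hat, *_ = np.linalg.lstsq(H, u, rcond=None)         # solve (3) for s_hat (least squares)

    # Linearized system (6); unknowns ordered as [delta_s, delta_W (row-major), delta_d].
    G = dsigma(Z) * s_hat                                 # (N, n_h): s_hat_j * sigma'_j(x_i)
    A = np.hstack([H,
                   (G[:, :, None] * X[:, None, :]).reshape(N, n_h * n_in),
                   G])
    rhs = u - H @ s_hat
    inc, *_ = np.linalg.lstsq(A, rhs, rcond=None)         # least-squares increments
    dW = inc[n_h:n_h + n_h * n_in].reshape(n_h, n_in)     # only delta_W and delta_d are kept
    dd = inc[n_h + n_h * n_in:]

    # Line search on the step length: pick the step that minimizes the output error,
    # re-solving (3) for s_hat at each candidate first layer.
    best_W, best_d, best_err = W, d, np.inf
    for t in np.logspace(-3.0, 0.0, 20):
        Wt, dt = W + t * dW, d + t * dd
        Ht = sigma(X @ Wt.T + dt)
        st, *_ = np.linalg.lstsq(Ht, u, rcond=None)
        err = np.mean((Ht @ st - u) ** 2)
        if err < best_err:
            best_W, best_d, best_err = Wt, dt, err
    return best_W, best_d, best_err
```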
As can be deduced, the number of operations to be performed per iteration is much higher than for a conventional method, but the proposed method has the advantage of not being subject to local-minima problems: the objective is no longer to minimize a loss function but to solve a system of equations, and therefore the criterion that drives the search for the solution coincides with the goal of the training, instead of being, as usually happens, an aggregate target function.
It is worth noting that the proposed method, precisely because it is not based on the minimization of an aggregate loss function, allows different criteria to be used in the line search, such as minimizing the maximum error over the training set. A complete analysis of the potential of the method goes beyond the objectives of this paper and will be the subject of future developments.
## 3 Results
A well-known benchmark for testing optimization methods (Schwefel (1981)) is used as an application example. In cases where the search for the optimal solution is carried out with the aid of a neural network (Carcangiu et al. (2009b)), the accuracy of the network and its generalization capability assume fundamental importance. The general expression of the function is:
\[f(\mathbf{x})=418.9829\cdot d-\sum_{i=1}^{d}x_{i}\sin(\sqrt{|x_{i}|}) \tag{7}\]
where \(d\) is the size of the space where the function is defined. In this case a dimension \(d=3\) was used, with the variables ranging over [-500, 500]. Fig. 3 shows the Schwefel function used as a test. The value of the function is represented by the color gradient. It is apparent that the function has a large number of maxima and minima, which implies that a large number of hidden neurons will be required to fit it adequately.
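For reference, the test function of Eq. (7) and one possible way of generating the training set are sketched below; the uniform sampling of the domain and the min-max normalization of the targets are assumptions, since the paper does not specify how the 5152 examples were drawn or normalized.

```python
import numpy as np

def schwefel(x):
    # Eq. (7): f(x) = 418.9829 * d - sum_i x_i * sin(sqrt(|x_i|)).
    x = np.atleast_2d(x)
    d = x.shape[1]
    return 418.9829 * d - np.sum(x * np.sin(np.sqrt(np.abs(x))), axis=1)

rng = np.random.default_rng(0)
X_train = rng.uniform(-500.0, 500.0, size=(5152, 3))     # d = 3, variables in [-500, 500]
u_train = schwefel(X_train)
u_norm = 2.0 * (u_train - u_train.min()) / (u_train.max() - u_train.min()) - 1.0
```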
An MLP network with 200 hidden nodes was trained for 4000 epochs. Fig. 4 shows the trend of the mean squared error during training. Note that the error in the diagram refers to the output data previously normalized between -1 and 1. The total number of training examples was 5152. As can be seen, in the first iterations there is a considerable reduction of the error, while subsequently the descent speed decreases considerably, both in absolute and in relative terms. In some segments the average error appears constant, and in some cases it even increases slightly, but this does not prevent the decreasing trend from resuming after a suitable number of iterations. This means that although the mean (squared) error is momentarily constant, the iterative process is still modifying the configuration of the weights, which allows the network to identify regions of the parameter space where performance improvements can be obtained. The error value after 4000 epochs had not yet stabilized, although the decrease had become much slower.
As mentioned earlier, it is possible to redefine the search criterion based on the maximum error in the training set rather than the mean value, but this, like other issues affecting convergence acceleration, is beyond the scope of this paper and will be the subject of future works.
Figure 3: Schwefel's function defined in _d_=3
## 4 Some implementation notes
The algorithm was developed in the Matlab environment. As far as possible, matrix computation was exploited rather than resorting to _for_ loops. This applies in particular to the construction, at each iteration, of the matrix of the system coefficients, which made it possible to greatly reduce the time required for a single iteration.
It is also worth making some considerations on the application of the Line Search, with which the length of the step to be taken at each iteration is established. The number of samples explored at each iteration is a macro-parameter that must be set by the designer, and it significantly affects the computation time of a single iteration. In the case under examination, the increment obtained as the solution of the linear system is divided into 1000 points distributed logarithmically, so as to resolve the behaviour of the function more finely for small increments. It may happen that the minimum of the error corresponds to the first sample, which implies that the true minimum could lie at an increment smaller than the smallest one sampled. In this case the search for the minimum is further refined using the bisection method applied to the interval between 0 and the smallest increment. If the minimum falls at a zero increment, the training procedure stops.
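The following sketch illustrates the step-length selection just described: a 1000-point logarithmic grid over the computed increment, followed by a refinement on the interval between zero and the smallest sampled step when the minimum falls on the first sample (implemented here as a simple ternary search rather than the paper's bisection). The helper `err_of_step` is assumed to return the network output error for a given fraction of the increment.

```python
import numpy as np

def select_step(err_of_step, n_samples=1000, t_min=1e-6, t_max=1.0, refine_iters=20):
    grid = np.logspace(np.log10(t_min), np.log10(t_max), n_samples)   # finer for small steps
    errs = np.array([err_of_step(t) for t in grid])
    k = int(np.argmin(errs))
    if k > 0:
        return grid[k]
    # Minimum at the first sample: refine between 0 and the smallest sampled increment.
    lo, hi = 0.0, grid[0]
    for _ in range(refine_iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if err_of_step(m1) <= err_of_step(m2):
            hi = m2
        else:
            lo = m1
    t = 0.5 * (lo + hi)
    # A minimum at a (numerically) zero increment signals that training should stop.
    return t if err_of_step(t) < errs[0] else 0.0
```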
The procedure requires solving a large number of linear systems. Using the Matlab library functions when the coefficient matrix is ill-conditioned causes difficulties, both in terms of computation time and of instability of the iterative process. For this reason, and also taking into account that the system is approximate in its definition, a solution method based on successive projections (Cannas et al. (2012)) is used, rather than one based on the inversion of the coefficient matrix. The solution obtained is still affected by the ill-conditioning of the coefficient matrix, and this generally results in a less precise solution, but the algorithm has the merit of quickly providing a reasonable one.
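The exact projection scheme of Cannas et al. (2012) is not reproduced here; the sketch below is a generic successive-projection (Kaczmarz-type) iteration for an overdetermined system \(A\,x\approx b\), shown only to illustrate how a reasonable solution can be obtained without inverting the coefficient matrix.

```python
import numpy as np

def successive_projections(A, b, n_sweeps=50, x0=None):
    # Project the current estimate onto the hyperplane of one equation at a time,
    # cycling over all rows of the (possibly ill-conditioned) system.
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_sq = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_sq[i] > 0.0:
                x += (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```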
Again regarding the problem of the ill-conditioning of the coefficient matrices, it has been observed that a large number of training-set examples helps to mitigate the problem. Increasing the training set does not significantly affect the computation time, except indirectly through memory occupation. It has also been observed that the computation time of a single iteration does not depend significantly on the number of hidden neurons; consequently, as the number of hidden neurons increases, the total training time decreases, because fewer epochs are necessary to reach the target error.

Figure 4: Performance diagram of the training
As stopping criterion, a double check was adopted on the number of epochs and on the minimum variation of the error between two successive epochs. In all the tested cases, irrespective of the number of examples, the algorithm stopped when the maximum number of epochs was reached. In future developments it is planned to use a stopping criterion based on the stabilization of the weights of the connections, rather than of the outputs. This will prevent the procedure from stopping because of a momentary stabilization of the error trend, like the one found in the example of the previous section: if the weights of the connections keep changing, even when this does not translate into a reduction of the average error of the network, training should still proceed. Analyzing the weight trends during training also offers the possibility of accelerating learning: if the trend is regular, it is generally possible to extrapolate the values and predict the value towards which each weight is tending. This possibility will be the subject of future developments.
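A stopping test of the kind anticipated above could look as follows; the window length and tolerance are illustrative choices.

```python
import numpy as np

def weights_stabilized(W_history, d_history, window=5, tol=1e-4):
    # Stop only when the first-layer weights themselves stop moving, i.e. when their
    # relative change over the last `window` epochs stays below `tol`.
    if len(W_history) <= window:
        return False
    new = np.concatenate([W_history[-1].ravel(), d_history[-1].ravel()])
    old = np.concatenate([W_history[-1 - window].ravel(), d_history[-1 - window].ravel()])
    return np.linalg.norm(new - old) / (np.linalg.norm(old) + 1e-12) < tol
```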
## 5 Conclusion
This paper describes a training algorithm for Multi Layer Perceptron neural networks whose main feature is that it is not subject to the problem of local minima, typical of conventional algorithms in which the search for the minimum is driven by the first- and second-order derivatives of the loss function. Even in a preliminary implementation, the algorithm shows good performance and limited computational complexity as both the training set and the network size increase, which makes it a possible candidate for the treatment of big-data problems using neural networks. The performance of the algorithm has been tested on a benchmark that is difficult to treat with the most common training algorithms, obtaining very encouraging results. Finally, the paper introduces lines of development that will be the subject of future publications.
|
2310.04413 | Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced
Datasets | Offline policy learning is aimed at learning decision-making policies using
existing datasets of trajectories without collecting additional data. The
primary motivation for using reinforcement learning (RL) instead of supervised
learning techniques such as behavior cloning is to find a policy that achieves
a higher average return than the trajectories constituting the dataset.
However, we empirically find that when a dataset is dominated by suboptimal
trajectories, state-of-the-art offline RL algorithms do not substantially
improve over the average return of trajectories in the dataset. We argue this
is due to an assumption made by current offline RL algorithms of staying close
to the trajectories in the dataset. If the dataset primarily consists of
sub-optimal trajectories, this assumption forces the policy to mimic the
suboptimal actions. We overcome this issue by proposing a sampling strategy
that enables the policy to only be constrained to ``good data" rather than all
actions in the dataset (i.e., uniform sampling). We present a realization of
the sampling strategy and an algorithm that can be used as a plug-and-play
module in standard offline RL algorithms. Our evaluation demonstrates
significant performance gains in 72 imbalanced datasets, D4RL dataset, and
across three different offline RL algorithms. Code is available at
https://github.com/Improbable-AI/dw-offline-rl. | Zhang-Wei Hong, Aviral Kumar, Sathwik Karnik, Abhishek Bhandwaldar, Akash Srivastava, Joni Pajarinen, Romain Laroche, Abhishek Gupta, Pulkit Agrawal | 2023-10-06T17:58:14Z | http://arxiv.org/abs/2310.04413v2 | # Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets
###### Abstract
Offline policy learning is aimed at learning decision-making policies using existing datasets of trajectories without collecting additional data. The primary motivation for using reinforcement learning (RL) instead of supervised learning techniques such as behavior cloning is to find a policy that achieves a higher average return than the trajectories constituting the dataset. However, we empirically find that when a dataset is dominated by suboptimal trajectories, state-of-the-art offline RL algorithms do not substantially improve over the average return of trajectories in the dataset. We argue this is due to an assumption made by current offline RL algorithms of staying close to the trajectories in the dataset. If the dataset primarily consists of sub-optimal trajectories, this assumption forces the policy to mimic the suboptimal actions. We overcome this issue by proposing a sampling strategy that enables the policy to only be constrained to "good data" rather than all actions in the dataset (i.e., uniform sampling). We present a realization of the sampling strategy and an algorithm that can be used as a plug-and-play module in standard offline RL algorithms. Our evaluation demonstrates significant performance gains in 72 imbalanced datasets, D4RL dataset, and across three different offline RL algorithms. Code is available at [https://github.com/Improbable-AI/dw-offline-rl](https://github.com/Improbable-AI/dw-offline-rl).
## 1 Introduction
Offline reinforcement learning (RL) [23; 27] aims to learn a decision-making policy that maximizes the expected return (i.e., the sum of rewards over time) using a pre-collected dataset of trajectories, making it appealing for applications where data collection is infeasible or expensive (e.g., recommendation systems [28]). Without loss of generality, it can be assumed that the dataset is generated from an _unknown_ policy \(\pi_{\mathcal{D}}(a|s)\), also known as the _behavior_ policy [24]. The goal in offline RL is to learn a policy, \(\pi_{\theta}(a|s)\) with parameters \(\theta\), that exceeds the performance of the behavior policy. In offline RL, a widely recognized issue is the overestimation of \(Q\)-values for out-of-distribution state-action pairs, leading to suboptimal policies [8; 20; 22]. This stems from incomplete coverage of the state-action space in the dataset, causing the learning algorithm to consider absent states and actions during optimization.
Most state-of-the-art offline RL algorithms [7; 9; 20; 22; 25] mitigate the issue of OOD Q-values by constraining the distribution of actions of the learned policy \(\pi_{\theta}(a|s)\) to be close to the distribution of actions in the dataset. This results in a generic objective with the following form:
\[\max_{\pi_{\theta}}J(\pi_{\theta})-\alpha\mathbb{E}_{(s,a)\sim\mathcal{D}} \left[\mathcal{C}(s,a)\right],\]
where \(J(\pi_{\theta})\) denotes the expected return of the policy \(\pi_{\theta}\), \(\mathcal{D}\) denotes the dataset, \(\mathcal{C}\) is a regularization term that penalizes the policy \(\pi_{\theta}\) for deviating from the state-action pairs in the dataset, and \(\alpha\) is the hyper-parameter balancing the conflicting objectives of maximizing returns while also staying close to the data distribution. This prevents offline RL algorithms from learning behaviors that produce action distributions that diverge significantly from the behavior policy.
An easy-to-understand example of choice for \(\mathcal{C}\) is the squared distance between the policy and the data [7], \(\mathcal{C}(s,a):=\|\pi_{\theta}(s)-a\|_{2}^{2}\), where \(\pi_{\theta}(s)\) denotes the mean of the action distribution \(\pi_{\theta}(.|s)\) and \(a\) is an action sampled from the dataset. When the collected dataset is good, i.e., mostly comprising high-return trajectories, staying close to the data distribution is aligned with the objective of maximizing return, and existing offline RL algorithms work well. However, in scenarios where the dataset is skewed or imbalanced, i.e., contains only a few high-return trajectories and many low-return trajectories, staying close to the data distribution amounts to primarily imitating low-performing actions and is, therefore, detrimental. Offline RL algorithms struggle to learn high-return policies in such scenarios [12]. We present a method that overcomes this fundamental limitation of offline RL. Our method is _plug-and-play_ in the sense that it is agnostic to the choice of the offline RL algorithm.
Our key insight stems from the observation that current methods are _unnecessarily conservative_ by forcing the policy \(\pi_{\theta}\) to stay close to _all_ the data. Instead, we would ideally want the policy \(\pi_{\theta}\) to be close to the _best_ parts of the offline dataset. This suggests that we should constrain \(\pi_{\theta}\) to only be close to state-action pairs that would be generated from a policy that achieves high returns, for instance, the (nearly) optimal policy \(\pi^{*}\). In offline scenarios, where collecting additional data is prohibited, to mirror the data distribution of \(\pi^{*}\) as much as possible, we can re-weight existing data (i.e., importance sampling [17]). We instantiate this insight in the following way: Represent the distribution induced by a better policy \(\pi_{\mathcal{D}_{w}}\) (initially unknown) by re-weighting data points in the dataset with importance weights \(w(s,a)\) and denote this distribution as \(\mathcal{D}_{w}(s,a)\). Under this weighting, the offline RL algorithm's training objective can be written as:
\[\max_{\pi_{\theta}}J(\pi_{\theta})-\alpha\mathbb{E}_{(s,a)\sim\mathcal{D}} \left[w(s,a)\mathcal{C}(s,a)\right],\]
Solving for \(\pi_{\theta}\) using the re-weighted objective constrains the policy to be close to the better policy \(\pi_{\mathcal{D}_{w}}\), and therefore allows learning of performant policies. The key challenge is determining the weights \(w\), since the state-action distribution of the better policy \(\pi_{\mathcal{D}_{w}}\) is initially unknown. To address this, we employ off-policy evaluation techniques [32, 26, 44] to connect the data distribution \(\mathcal{D}_{w}\) with the expected return of the policy \(\pi_{\mathcal{D}_{w}}\) that would generate it. This allows one to optimize the importance weights with respect to this expected return as follows:
\[\max_{w}J(\pi_{\mathcal{D}_{w}})=\mathbb{E}_{(s,a)\sim\mathcal{D}_{w}}\left[r( s,a)\right]\approx\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[w(s,a)r(s,a) \right].\]
Here \(r(s,a)\) denotes the reward of state-action pair \((s,a)\). By exploiting this connection, we optimize the importance weights \(w\) to maximize the expected return \(J(\pi_{\mathcal{D}_{w}})\) of the corresponding policy \(\pi_{\mathcal{D}_{w}}\) subject to the necessary constraints (i.e., the Bellman flow conservation constraint [35, 32]). This enables us to obtain better importance weights \(w\).
We evaluate our method with state-of-the-art offline RL algorithms [22, 18] and demonstrate performance gains on \(72\) imbalanced datasets [12, 6]. Our method significantly outperforms prior work [12] on challenging datasets with more diverse initial states and fewer trajectories (\(20\times\) smaller than existing datasets). These datasets pose greater challenges, yet they are crucial for practical applications since real-world datasets are often small and exhibit diverse initial states (e.g., robots with different starting positions, where data comes from human teleoperation).

Figure 1: The dots represent actions in the dataset, where imbalanced datasets have more low-return actions. **(a)** Regularized offline RL algorithms [22, 7, 18] equally regularize the policy \(\pi_{\theta}\) on each action, leading to imitation of low-return actions and a low-performing \(\pi_{\theta}\). The color under the curves shows the policy's performance \(J(\pi_{\theta})\), with red indicating higher performance and blue indicating lower performance. **(b)** Re-weighting the dataset based on actions' returns allows the algorithm to only regularize on actions with high returns, enabling the policy \(\pi_{\theta}\) to imitate high-return actions while ignoring low-return actions.
## 2 Preliminaries
**Typical (online) RL.** Reinforcement learning is a formalism that enables us to optimize an agent's policy in a Markov decision process (MDP [35]). The agent (i.e., decision-maker) starts from an initial state \(s_{0}\) sampled from an initial state distribution \(\rho_{0}(.)\). At each timestep \(t\), the agent perceives the state \(s_{t}\), takes an action \(a_{t}\sim\pi(.|s_{t})\) with its policy \(\pi\), receives a reward \(r_{t}=r(s_{t},a_{t})\) from the environment, and transitions to a next state \(s_{t+1}\) sampled from the environment dynamics \(\mathcal{T}(.|s_{t},a_{t})\) until reaching terminal states. The goal in RL is to learn a policy \(\pi\) to maximize the \(\gamma\)_-discounted_ expected, infinite-horizon return \(J^{\gamma}(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0},a_{t}\sim\pi(.|s_{t}),s_{t+1}\sim \mathcal{T}(.|s_{t},a_{t})}\Big{[}\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}) \Big{]}\). Typical (on-line) RL algorithms estimate the policy \(\pi\)'s expected return (policy evaluation) from trajectories \(\tau=(s_{0},a_{0},r_{0},s_{1},a_{1},r_{1}\cdots)\) generated by rolling out \(\pi\), and update the policy \(\pi\) toward increasing \(J^{\gamma}(\pi)\) (policy improvement), and repeat the processing by performing rollouts with the updated policy.
**Offline RL.** With no interaction with the environment allowed during the course of learning, offline RL algorithms aim to learn a policy \(\pi\) that maximizes return, entirely using a fixed dataset \(\mathcal{D}\) that was collected by an arbitrary and unknown "behavior policy" \(\pi_{\mathcal{D}}\) (e.g., humans or pre-programmed controllers). These methods typically aim to estimate the return of a policy \(\pi\) via techniques such as Q-learning or actor-critic, only using batches of state-action pairs \((s_{t},a_{t})\) uniformly drawn from \(\mathcal{D}\). We will denote this estimated value of the return as \(\widehat{J}^{\gamma}_{\mathcal{D}}(\pi)\). The dataset \(\mathcal{D}\) consists of \(N\) trajectories rolled out by \(\pi_{\mathcal{D}}\):
\[\mathcal{D}:=\{\tau_{i}=(s_{0},a_{0},r_{0},s_{1},a_{1},r_{1}\cdots s _{T_{i}})_{i}\}_{i=1}^{N}, \tag{1}\]
where \(T_{i}\) denotes the length of \(\tau_{i}\). In practice, a limit on trajectory length is required since we can only collect finite-length trajectories [34]. When the states that the policy \(\pi\) would encounter and the actions that \(\pi\) would take are not representative in the dataset \(\mathcal{D}\), the estimated return \(\widehat{J}^{\gamma}_{\mathcal{D}}(\pi)\) is typically inaccurate [8; 20; 22]. Thus, most offline RL algorithms learn the policy \(\pi\) with pessimistic or conservative regularization that penalizes shift of \(\pi\) from the behavior policy \(\pi_{\mathcal{D}}\) that collected the dataset. Typically, implicitly or explicitly, the policy \(\pi\) learned by most offline RL algorithms can be thought of as optimizing the following regularized objective:
\[\max_{\pi}\widehat{J}^{\gamma}_{\mathcal{D}}(\pi)-\alpha\mathbb{E }_{(s_{t},a_{t})\sim\mathcal{D}}\left[\mathcal{C}(s_{t},a_{t})\right], \tag{2}\]
where \(\mathcal{C}\) measures some kind of divergence (e.g., Kullback-Leibler divergence [7]) between \(\pi\) and \(\pi_{\mathcal{D}}\), and \(\alpha\in\mathbb{R}^{+}\) denotes the strength of regularization.
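As a concrete illustration of the objective in Equation 2, the PyTorch snippet below sketches an actor update in which \(\mathcal{C}(s,a)\) is the squared distance between the policy action and the dataset action (the TD3BC-style regularizer mentioned in the introduction). Network definitions, the critic update, and all names are illustrative assumptions rather than the exact implementation of any cited algorithm.

```python
import torch
import torch.nn.functional as F

def regularized_actor_loss(actor, critic, states, actions, alpha=2.5):
    # Equation 2 with C(s, a) = ||pi_theta(s) - a||^2: maximize the estimated return
    # while penalizing deviation from the dataset actions with a constant weight alpha.
    pi_actions = actor(states)                    # deterministic policy output
    q_values = critic(states, pi_actions)         # estimated return term J_D(pi)
    bc_term = F.mse_loss(pi_actions, actions)     # stay close to the behavior policy
    return -q_values.mean() + alpha * bc_term     # minimized by the actor optimizer
```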
## 3 Problem Statement: Unnecessary Conservativeness in Imbalanced Datasets
In this section, we describe the issue faced by offline RL on imbalanced datasets. While algorithms derived from the regularized offline RL objective (Equation 2) attain good performance on several standard benchmarks [6], recent work [12] showed that this objective leads to _"unnecessary conservativeness"_ on imbalanced datasets [12] due to the use of a constant regularization weight on each state-action pair \((s,a)\) in Equation 2. To illustrate why, we start by defining the _imbalance_ of a dataset \(\mathcal{D}\) using the positive-sided variance of the returns of the dataset (RPSV [12], defined in Definition 3.1). In essence, RPSV measures the dispersion of trajectory returns in the dataset and indicates the room for improvement over the dataset. Figure 2 illustrates the distribution of trajectory returns in imbalanced datasets with high and low RPSV. Datasets with a low RPSV exhibit a pronounced concentration of returns around the mean value, whereas datasets with a high RPSV display a return distribution that extends away from the mean, towards higher returns. Intuitively, a dataset with high RPSV has trajectories with far higher returns than the average return of the dataset, indicating high chances of finding a better data distribution through reweighting. Throughout this paper, we will use the term _imbalanced datasets_ to denote datasets with high RPSV.
**Definition 3.1** (Dataset imbalance).: RPSV of a dataset, \(\mathbb{V}_{+}[G(\tau_{i})]\), corresponds to the second-order moment of the positive component of the difference between trajectory return: \(G(\tau_{i}):=\sum_{t=0}^{T_{i}-1}\gamma^{t}r(s^{i}_{t},a^{i}_{t})\) and its expectation, where \(\tau_{i}\) denote trajectory in the dataset:
\[\mathbb{V}_{+}[G(\tau_{i})]\doteq\mathbb{E}_{\tau_{i}\sim\mathcal{D}}\left[ \left(G(\tau_{i})-\mathbb{E}_{\tau_{i}\sim\mathcal{D}}[G(\tau_{i})]\right)_{+} ^{2}\right]\quad\text{with}\quad x_{+}=\max\{x,0\}, \tag{3}\]
Imbalanced datasets are common in real-world scenarios, as collecting high-return trajectories is often more costly than collecting low-return ones. An example of an imbalanced offline dataset is autonomous driving, where most trajectories are from average drivers, with limited data from very good drivers. Due to the dominance of low-return trajectories, state-action pairs \((s,a)\) from these trajectories are oversampled in Equation 2. Consequently, optimizing the regularized objective (Equation 2) results in a policy that closely imitates the actions from the low-performing trajectories that constitute the majority of dataset \(\mathcal{D}\), whereas ideally we want the policy to imitate actions only on state-action pairs from high-performing trajectories. However, current offline RL algorithms [21; 18; 7] use a constant regularization weight \(\alpha\) (Equation 2). As a result, each state-action pair is weighted equally, which leads the algorithm to be unnecessarily conservative on all data (i.e., to imitate all actions of state-action pairs in the dataset). Further analysis of imbalanced datasets can be found in Appendix A.1.
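RPSV (Definition 3.1) can be computed directly from per-trajectory returns; the snippet below is a straightforward NumPy version with illustrative inputs.

```python
import numpy as np

def rpsv(trajectory_returns):
    # Positive-sided variance (Definition 3.1): second moment of the positive part of
    # the deviation of each trajectory return from the mean return.
    g = np.asarray(trajectory_returns, dtype=float)
    return np.mean(np.maximum(g - g.mean(), 0.0) ** 2)

print(rpsv([1, 1, 1, 1, 10]))   # imbalanced: a few high-return trajectories, large RPSV
print(rpsv([3, 3, 3, 3, 3]))    # returns concentrated at the mean, RPSV = 0
```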
## 4 Mitigating Unnecessary Conservativeness By Weighting Samples
In this section, we seek to develop an approach to address the unnecessary conservativeness issue (Section 3) of regularized offline RL algorithms in imbalanced datasets. Adding more experiences from high-performing policies to the dataset would regularize the policy to keep close to high-performing policies and hence easily mitigate the unnecessary conservativeness issue. Though collecting additional experiences (i.e., state-action pairs \((s,a)\)) from the environment is prohibited in offline RL, importance sampling [32] can emulate sampling from another dataset \(\mathcal{D}_{w}\) since the weighting can be regarded as the density ratio shown below:
\[w(s,a)=\frac{\mathcal{D}_{w}(s,a)}{\mathcal{D}(s,a)}, \tag{4}\]
where \(\mathcal{D}_{w}(s,a)\) and \(\mathcal{D}(s,a)\) denote the probability density of state-action pairs \((s,a)\) in datasets \(\mathcal{D}_{w}\) and \(\mathcal{D}\), respectively. Note that \(\mathcal{D}_{w}\) is unknown but implicitly defined through a given weighting function \(w\). This allows us to adjust the sampling distribution on which the policy \(\pi\) is trained, as suggested in the following equivalence:
\[\max_{\pi}\widehat{J}_{\mathcal{D}}^{\gamma}(\pi)-\alpha\mathbb{E}_{(s_{t},a _{t})\sim\mathcal{D}}\left[w(s,a)\mathcal{C}(s_{t},a_{t})\right]\Longleftrightarrow \max_{\pi}\widehat{J}_{\mathcal{D}}^{\gamma}(\pi)-\alpha\mathbb{E}_{(s_{t},a_ {t})\sim\mathcal{D}_{w}}\left[\mathcal{C}(s_{t},a_{t})\right]. \tag{5}\]
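Continuing the earlier TD3BC-style sketch, Equation 5 only changes how the regularizer is aggregated: each transition's penalty is scaled by \(w(s,a)\). The per-sample `weights` are assumed to be provided by the method developed in Section 4.1; the snippet is illustrative, not a reference implementation.

```python
def reweighted_actor_loss(actor, critic, states, actions, weights, alpha=2.5):
    # Equation 5: the regularization term is weighted per sample by w(s, a), so the
    # policy is pulled only towards the parts of the dataset with large weights.
    pi_actions = actor(states)
    q_values = critic(states, pi_actions)
    bc_per_sample = ((pi_actions - actions) ** 2).sum(dim=-1)
    return -q_values.mean() + alpha * (weights * bc_per_sample).mean()
```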
The remaining question is: _how can we determine the weighting \(w(s,a)\) so that we emulate sampling from a better dataset \(\mathcal{D}_{w}\) collected by a policy that achieves higher return than the behavior policy \(\pi_{\mathcal{D}}\) that collected the original dataset \(\mathcal{D}\)?._
### Optimizing the Weightings: Emulating Sampling from High-Performing Policies
Figure 2: Return distribution of datasets with high and low RPSV. Low RPSV datasets have returns centered at the mean, while high RPSV datasets have a wider distribution extending towards higher returns. See Appendix A.4 for details.

Our goal is to discover a weighting function \(w\) that can emulate drawing state-action samples from a better dataset \(\mathcal{D}_{w}\) that is collected by an _alternative behavior policy_ \(\pi_{\mathcal{D}_{w}}\) with higher return than the behavior policy \(\pi_{\mathcal{D}}\) that collected the original dataset \(\mathcal{D}\) (i.e., \(J^{\gamma}(\pi_{\mathcal{D}_{w}})\geq J^{\gamma}(\pi_{\mathcal{D}})\)). We make use of density-ratio-based off-policy evaluation methods [32; 29; 44] to determine if a weighting function corresponds to a high-return policy. Note that we do not propose a new off-policy evaluation approach but rather apply the existing off-policy evaluation technique in our problem. By using these techniques, we can relate the weighting \(w\) to the expected return of the alternative behavior policy \(J(\pi_{\mathcal{D}_{w}})\) via importance sampling formulation as follows:
\[J^{\gamma}(\pi_{\mathcal{D}_{w}})\approx\mathbb{E}_{(s,a)\sim \mathcal{D}_{w}}\left[r(s,a)\right]=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[w(s,a)r(s,a)\right]. \tag{6}\]
In Equation 6, \(J^{\gamma}(\pi_{\mathcal{D}_{w}})\) evaluates the quality of a given weighting function \(w\). It also provides a feasible objective to optimize \(w\), since it only requires obtaining samples from the original dataset \(\mathcal{D}\). However, it is important to note that Equation 6 measures the \(\gamma\)-discounted return only when the dataset \(\mathcal{D}_{w}\) represents a stationary state-action distribution that satisfies the Bellman flow conservation constraint [35, 32, 44] in the MDP, as shown in the following equation:
\[\mathcal{D}_{w}(s^{\prime})=(1-\gamma)\rho_{0}(s^{\prime})+\gamma \sum_{s,a}\mathcal{T}(s^{\prime}|s,a)\mathcal{D}_{w}(s,a)\;\forall s^{\prime} \in\mathcal{S},\quad\mathcal{D}_{w}(s^{\prime}):=\sum_{a^{\prime}\in\mathcal{ A}}\mathcal{D}_{w}(s^{\prime},a^{\prime}) \tag{7}\]
where \(\mathcal{S}\) and \(\mathcal{A}\) denote the state and action spaces, respectively; the discount factor \(\gamma\) in this constraint determines which discounted return \(J^{\gamma}(\pi_{\mathcal{D}_{w}})\) in Equation 6 is being measured. We slightly abuse the notation, denoting the state marginal as \(\mathcal{D}_{w}(s^{\prime})\).
To estimate \(J^{\gamma}(\pi_{\mathcal{D}_{w}})\) from the weighting function \(w\), it is required to impose the Bellman flow conservation constraint (Equation 7) on \(w\). However, it is difficult to impose this constraint due to its dependence on the initial state distribution \(\rho_{0}\) in Equation 7. Estimating \(\rho_{0}\) from the first state of each trajectory in the dataset is an option, but it is infeasible when the trajectories do not consistently start from initial states sampled from the distribution \(\rho_{0}\). While we could make the assumption that all trajectories begin from initial states sampled from \(\rho_{0}\), it would limit the applicability of our method to datasets where trajectories start from arbitrary states. We thus choose not to make this assumption since current offline RL algorithms do not require it.
Instead, since the Bellman flow conservation constraint (Equation 7) only depends on the initial state distribution \(\rho_{0}\) when \(\gamma\neq 1\), it is possible to bypass this dependence, if we maximize the undiscounted return \(J(\pi_{\mathcal{D}_{w}})=J^{\gamma=1}(\pi_{\mathcal{D}_{w}})\) (i.e., setting \(\gamma=1\) in Equation 6) of the alternative behavior policy \(\pi_{\mathcal{D}_{w}}\). While it deviates from the RL objective presented in Equation 2, undiscounted return is often more aligned with the true objective in various RL applications, as suggested in [13]. Many RL algorithms resort to employing discounted return as an approximation of undiscounted return instead due to the risk of divergence when estimating the undiscounted return using Q-learning [39]. Thus, we constrain the weighting function \(w\) to satisfy the Bellman flow conservation constraint with \(\gamma=1\) as shown below:
\[\mathcal{D}_{w}(s^{\prime})=\sum_{s,a}\mathcal{T}(s^{\prime}|s,a )\mathcal{D}_{w}(s,a)\;\forall s^{\prime}\in\mathcal{S}. \tag{8}\]
To connect the constraint in Equation 8 to the objective in Equation 6, we rewrite Equation 8 in terms of weightings \(w\) according to [29, 32]2, as shown below:
Footnote 2: See Equation 24 in [29]
\[w(s^{\prime})=\sum_{s,a}\mathcal{T}(s^{\prime}|s,a)w(s,a)\; \forall s^{\prime}\in\mathcal{S}, w(s):=\sum_{a\in\mathcal{A}}\frac{\mathcal{D}_{w}(s,a)}{\mathcal{D}(s,a)} \tag{9}\]
where \(w(s)\) denotes state marginal weighting. Putting the objective (Equation 6) and the constraint (Equation 9) together, we optimize \(w\) to maximize the undiscounted expected return of the corresponding alternative behavior policy \(\pi_{\mathcal{D}_{w}}\), as shown in the following:
\[\max_{w}J(\pi_{\mathcal{D}_{w}})=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[w(s,a)r(s,a)\right]\quad\text{subject to}\quad w(s^{\prime})=\sum_{s,a}\mathcal{T}(s^{\prime}|s,a)w(s,a)\;\forall s^{\prime}\in\mathcal{S}. \tag{10}\]
As the weightings \(w\) can be viewed as the density ratio (Equation 4), we call our method **Density-ratio Weighting (DW)**. We then re-weight the offline RL algorithm's objective, as shown in Equation 5. Note that while these weights correspond to \((s,a)\) in the dataset, this is sufficient to reweight policy optimization for offline RL.
### Practical Implementation
**Optimizing weightings.** We begin by addressing the parameterization of the weighting function \(w(s,a)\) and its state marginal \(w(s^{\prime})\) in Equation 10. Though the state marginal \(w(s^{\prime})\) can be derived by summing \(w(s,a)\) over the action space \(\mathcal{A}\), as defined in Equation 9, it can be difficult to take the summation over a continuous or infinite action space. Thus we opt to parameterize the weightings \(w(s,a)\) and the state marginal \(w(s^{\prime})\) separately. By using the identities [32] \(\mathcal{D}_{w}(s,a)=\mathcal{D}_{w}(s)\pi_{\mathcal{D}_{w}}(a|s)\) and \(\mathcal{D}(s,a)=\mathcal{D}(s)\pi_{\mathcal{D}}(a|s)\), we can represent \(w(s,a)\) as the product of two ratios:
\[w(s,a)\doteq\frac{\mathcal{D}_{w}(s,a)}{\mathcal{D}(s,a)}=\frac{\mathcal{D}_{ w}(s)\pi_{\mathcal{D}_{w}}(a|s)}{\mathcal{D}(s)\pi_{\mathcal{D}}(a|s)}=\frac{ \mathcal{D}_{w}(s)}{\mathcal{D}(s)}\times\frac{\pi_{\mathcal{D}_{w}}(a|s)}{ \pi_{\mathcal{D}}(a|s)}. \tag{11}\]
Michel et al. [31] showed that ratios can be parameterized by neural networks with exponential output. Thus, we represent state-action weighting \(w(s,a)\) as \(w_{\phi,\psi}(s,a)\) and its state marginal as \(w_{\phi}(s)\), as shown below:
\[w_{\phi,\psi}(s,a)=\exp\phi(s)\exp\psi(s,a), w_{\phi}(s)=\exp\phi(s) \tag{12}\]
where \(\phi\) and \(\psi\) are neural networks. Next, we present how to train both neural network models. As the dataset often has limited coverage of the state-action space, it is preferable to add a KL-divergence regularization \(D_{KL}(\mathcal{D}_{w}|\mathcal{D})\) to the objective in Equation 10, as proposed in Zhan et al. [43]. This regularization keeps the state-action distribution \(\mathcal{D}_{w}\) induced by the learned weighting \(w\) close to the original dataset \(\mathcal{D}\), preventing \(w_{\phi,\psi}(s,a)\) from overfitting to a few rare state-action pairs in \(\mathcal{D}\). Note that this does not prevent the learned weightings from providing a better data distribution for regularized offline RL algorithms. See Appendix A.2 for a detailed discussion. Another technical difficulty in training \(w\) is that it is hard to impose the Bellman flow conservation constraint in Equation 10 at every state in the state space, since the dataset covers only a limited set of states. Thus, we instead use the penalty method [3] to penalize violations of this constraint by \(w_{\phi,\psi}\) in expectation. As a result, we optimize \(w_{\phi,\psi}\) for Equation 10 using stochastic gradient ascent on the following objective (details can be found in Appendix A.3):
\[\max_{\phi,\psi}\mathbb{E}_{(s,a,s^{\prime})\sim\mathcal{D}}\left[\underbrace {w_{\phi,\psi}(s,a)r(s,a)}_{\text{Return}}-\lambda_{F}\underbrace{(w_{\phi}(s^{ \prime})-w_{\phi,\psi}(s,a))^{2}}_{\text{Bellman flow conservation penalty}} \right]-\lambda_{K}\underbrace{D_{KL}(\mathcal{D}_{w}||\mathcal{D})}_{\text{ KL regularization}}, \tag{13}\]
where \(s^{\prime}\) denotes the next state observed after taking action \(a\) at state \(s\), and \(\lambda_{F},\lambda_{K}\in\mathbb{R}^{+}\) denote the strengths of the two penalty terms. Note that the goal of our work is not to propose a new off-policy evaluation method, but to use existing techniques within our specific objective of optimizing the importance weighting for training offline RL algorithms. Importantly, our approach differs from previous off-policy evaluation methods [32; 26; 44], as further discussed in the related works (Section 6).
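A possible PyTorch realization of the parameterization in Equations 11-12 and of the penalized objective in Equation 13 is sketched below. Network sizes are illustrative, and the sample-based form used for the KL term, \(\mathbb{E}_{\mathcal{D}}[w\log w]\), is an assumption about the estimator rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class DensityRatioWeights(nn.Module):
    # Equation 12: w_{phi,psi}(s, a) = exp(phi(s)) * exp(psi(s, a)), with w_phi(s) = exp(phi(s)).
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.psi = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def marginal(self, s):
        return torch.exp(self.phi(s)).squeeze(-1)

    def forward(self, s, a):
        return self.marginal(s) * torch.exp(self.psi(torch.cat([s, a], dim=-1))).squeeze(-1)

def dw_objective(w_model, s, a, r, s_next, lam_flow=1.0, lam_kl=0.1):
    # Equation 13 (to be maximized): weighted return, minus the Bellman-flow penalty,
    # minus a KL regularizer that keeps D_w close to the original dataset D.
    w_sa = w_model(s, a)
    w_s_next = w_model.marginal(s_next)
    return_term = (w_sa * r).mean()
    flow_penalty = ((w_s_next - w_sa) ** 2).mean()
    kl_penalty = (w_sa * torch.log(w_sa + 1e-8)).mean()
    return return_term - lam_flow * flow_penalty - lam_kl * kl_penalty
```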
**Applying the weighting to offline RL.** The weighting function \(w_{\phi,\psi}\) could be pre-trained before training the policy, but this would introduce another hyperparameter: the number of pretraining iterations. As a consequence, we opt to train \(w_{\phi,\psi}\) in parallel with the offline RL algorithm (i.e., the value functions and the policy). In our experiments, we pair one iteration of the offline RL update with one iteration of the weighting function update. We also found that weighting both \(\hat{J}_{\mathcal{D}}^{\gamma}(\pi)\) and \(\mathcal{C}(s,a)\) at each state-action pair sampled from the dataset \(\mathcal{D}\) with \(w_{\phi,\psi}(s,a)\) performs better than solely weighting the regularization term \(\mathcal{C}(s,a)\). For example, when weighting the training objective of implicit Q-learning (IQL) [18] (an offline RL method), the weighted objective \(\mathcal{J}_{\mathcal{D}_{w}}(\pi)\) is: \(\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[w_{\phi,\psi}(s,a)A(s,a)\log\pi(a|s)\right]\), where \(A(s,a)\) denotes advantage values. Please see Appendix A.3 for implementation details. We hypothesize that weighting both the policy optimization objective \(\hat{J}_{\mathcal{D}}^{\gamma}(\pi)\) and the regularization \(\mathcal{C}(s,a)\) under the same distribution (i.e., the same importance weights) is needed to prevent the policy \(\pi\) from increasing \(\hat{J}_{\mathcal{D}}^{\gamma}(\pi)\) by exploiting out-of-distribution actions at states with lower weights \(w_{\psi,\phi}(s,a)\), which could lead to poor performance [8]. Appendix A.5.5 compares weighting both objectives and weighting only one. The training procedure is outlined in Algorithm 1.
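Since Algorithm 1 itself is not reproduced here, the following is only a schematic sketch of the interleaved loop just described: one gradient step on the weighting function per offline RL update, with both terms of the offline RL objective weighted by \(w(s,a)\). The `dataset` and `offline_rl` interfaces are hypothetical placeholders, and `dw_objective` refers to the earlier sketch.

```python
import torch

def train(dataset, w_model, w_optimizer, offline_rl, num_steps, batch_size=256):
    for _ in range(num_steps):
        batch = dataset.sample(batch_size)              # transitions (s, a, r, s')
        # One update of the weighting function: maximize the objective of Equation 13.
        loss_w = -dw_objective(w_model, batch.s, batch.a, batch.r, batch.s_next)
        w_optimizer.zero_grad()
        loss_w.backward()
        w_optimizer.step()
        # One update of the offline RL algorithm, weighting both its return term and its
        # regularizer per sample (e.g., weighted IQL: E[w(s,a) * A(s,a) * log pi(a|s)]).
        with torch.no_grad():
            weights = w_model(batch.s, batch.a)
        offline_rl.update(batch, per_sample_weights=weights)
```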
## 5 Experiments

In this section, we evaluate our method on imbalanced datasets of the kind characterized earlier (Section 3). Prior work on imbalanced datasets [12] focused exclusively on imbalanced datasets with trajectories originating from a similar initial state. However, in real-world scenarios, trajectories can be collected from diverse initial states. For instance, when collecting datasets for self-driving cars, it is likely that drivers initiate the recording of trajectories from drastically different initial locations. We found that imbalanced datasets with diverse initial states exhibit a long-tailed distribution of trajectory returns, while those with similar initial states show a bimodal distribution (see Appendix A.4 for details). As diversity of initial states affects the type of imbalance, we focus our experimentation on the two types of datasets: _(i) Trajectories with similar initial states_ and _(ii) Trajectories with diverse initial states_.
Following the protocol of prior offline RL benchmarking [6], we develop representative datasets of each type using the locomotion tasks from the D4RL Gym suite. Our datasets are generated by combining \(1-\sigma\%\) of trajectories from the random-v2 dataset (low-performing) and \(\sigma\%\) of trajectories from the medium-v2 or expert-v2 dataset (high-performing) for each locomotion environment in the D4RL benchmark. For instance, a dataset that combines \(1-\sigma\%\) of random and \(\sigma\%\) of medium trajectories is denoted as random-medium-\(\sigma\%\). We evaluate our method and the baselines on these imbalanced datasets across four values of \(\sigma\in\{1,5,10,50\}\) and four environments. Both types of datasets are briefly illustrated below and detailed in Appendix A.4. Additionally, we present the results on the rest of the original D4RL datasets in Appendix A.5.
**(i) Trajectories with similar initial states.** This type of dataset was proposed in [12], mixing trajectories gathered by high- and low-performing policies, as described in Section 3. Each trajectory is collected by rolling out a policy starting from similar initial states until reaching the time limit or a terminal state. We also consider smaller versions of these datasets with a small number of trajectories, where each dataset contains \(50,000\) state-action pairs, which is \(20\) times smaller. These smaller datasets test whether a method overfits to small amounts of data from high-performing policies.
**(ii) Trajectories with diverse initial states.** Trajectories in this type of dataset start from a wider range of initial states and have varying lengths. One real-world example of this type of dataset is a collection of driving behaviors obtained from a fleet of self-driving cars. The dataset might encompass partial trajectories capturing diverse driving behaviors, although not every trajectory accomplishes the desired driving task of going from one specific location to the other. As not all kinds of driving behaviors occur with equal frequency, such a dataset is likely to be imbalanced, with certain driving behaviors being underrepresented.
### 5.1 Evaluation Setup
**Baselines and prior methods.** We consider uniform sampling (denoted as Uniform) as the primary baseline for comparison. In addition, we compare our method with two existing approaches for improving offline RL performance on imbalanced datasets: advantage-weighting (AW), proposed in the recent work of [12], and percentage-filtering (PF) [5]. Both AW and PF sample state-action pairs with probabilities determined by the return of the trajectory to which they belong. The sampling probabilities for AW and PF are given as follows:
\[\mathcal{P}_{\text{AW}}(s^{i}_{t},a^{i}_{t})\propto\exp((G(\tau_{i})-V_{0}(s^{i}_{0}))/\eta)\quad\text{(Advantage-weighting)} \tag{14}\]
\[\mathcal{P}_{\text{PF}}(s^{i}_{t},a^{i}_{t})\propto\mathbb{1}\left[G(\tau_{i})\geq G_{K\%}\right]\quad\text{(Percentage-filtering)} \tag{15}\]
where \((s^{i}_{t},a^{i}_{t})\) denotes the state-action pair at timestep \(t\) of trajectory \(\tau_{i}\). \(G_{K\%}\) represents a threshold for selecting the top-\(K\%\) of trajectories, with \(K\) chosen from \(\{10,20,50\}\) as practiced in [12]. \(V(s^{i}_{0})\) denotes the value of the initial state \(s^{i}_{0}\) in trajectory \(\tau_{i}\), and the coefficient \(\eta\) represents the temperature coefficient of a Boltzmann distribution. We consider three levels of \(\eta\): low (L), medium (M), and high (H) in our experiments. Further details of the hyperparameter setup can be found in Appendix A.4. For the following experiments, we implement our DW and the above baselines on top of state-of-the-art offline RL algorithms: Conservative Q-Learning (CQL) [22], Implicit Q-Learning (IQL) [19], and TD3BC [7]. Note that as AW can provide a better initial sampling distribution for training the weighting function in DW, we initialize training DW with AW sampling (denoted as _DW-AW_) and initialize training DW with uniform sampling (denoted as _DW-Uniform_) in the following experiments. We refer the readers to Appendix A.3 for the implementation details.
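For comparison, the per-trajectory sampling weights of Equations 14-15 can be sketched as below. Every state-action pair inherits the weight of its trajectory; for brevity the advantage baseline \(V(s^{i}_{0})\) is replaced here by the mean dataset return, which is only an illustrative simplification.

```python
import numpy as np

def aw_pf_probabilities(traj_returns, traj_lengths, eta=1.0, top_k_percent=10):
    g = np.asarray(traj_returns, dtype=float)
    aw = np.exp((g - g.mean()) / eta)                         # Advantage-weighting, Eq. (14)
    pf = (g >= np.percentile(g, 100 - top_k_percent)) * 1.0   # Percentage-filtering, Eq. (15)
    # Expand trajectory weights to per-transition sampling probabilities and normalize.
    aw_p = np.repeat(aw, traj_lengths)
    pf_p = np.repeat(pf, traj_lengths)
    return aw_p / aw_p.sum(), pf_p / pf_p.sum()
```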
**Evaluation metrics.** Following the settings of [6], we train all algorithms for one million gradient steps with three random seeds on each dataset. We evaluate the performance of the policies acquired by each method in the environment corresponding to the dataset by conducting \(20\) episodes every \(1000\) gradient steps. To determine the policy's performance at a given random seed, we compute the average return over \(20\) episodes during the final \(10\) evaluation rounds, each round separated by \(1000\) gradient steps. We chose to average over the last \(10\) rounds rather than solely the last evaluation round because we observed that the performance of offline RL algorithms oscillates across gradient steps. The main performance metric reported is the interquartile mean (IQM) [1] of the normalized performance across multiple datasets, along with its \(95\)% confidence interval calculated using the bootstrapping method. As suggested in [1], IQM is a robust measure of central tendency: by discarding the top and bottom \(25\)% of samples, it is less sensitive to outliers.
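The reported metric can be reproduced with a few lines of NumPy; the bootstrap sample count here is an arbitrary choice.

```python
import numpy as np

def interquartile_mean(scores):
    # IQM: mean of the middle 50% of the scores (top and bottom 25% discarded).
    x = np.sort(np.asarray(scores, dtype=float))
    lo, hi = int(np.floor(0.25 * len(x))), int(np.ceil(0.75 * len(x)))
    return x[lo:hi].mean()

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    # Percentile-bootstrap confidence interval for the IQM.
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    stats = [interquartile_mean(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```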
### 5.2 Scenario (i): Trajectories with Similar Initial States
Figure 3(a) shows the IQM of the normalized return for thirty-two different datasets where trajectories start from similar initial states (Section 5). For all the datasets, we use the best hyperparameters for AW and PF found in [12]; the hyperparameters for DW-AW and DW-Uniform are presented in Appendix A.4. The results demonstrate that both DW-AW and DW-Uniform outperform the uniform sampling approach, confirming the effectiveness of our method. Moreover, combining DW with AW enhances the performance of DW, indicating that DW can benefit from the advantages of AW. This is likely because AW can provide a good initial sampling distribution to start training the weighting function in DW. While DW-Uniform did not exceed the performance of AW in this experiment, it should be noted that our method can be applied when datasets are not curated into trajectories, such as reset-free or play-style datasets where data is not collected in an episodic manner. This is useful for continuing tasks, where an agent (data curator) performs a task indefinitely without termination (e.g., locomotion).
**Limited size datasets.** Figure 3(b) presents the results on smaller versions of \(8\) of the datasets used in Figure 3(a). Note that since we observe that a higher temperature \(\eta\) enables AW with CQL to perform better on this type of dataset, we additionally consider AW-XH (extra-high temperature) for comparison, to provide AW with as fair a comparison point as possible. Further details on the hyperparameter settings can be found in Appendix A.4. Our methods consistently achieve significantly higher returns than AW and PF when combined with CQL. This suggests that our methods exploit scarce data in smaller datasets more effectively than weighted sampling approaches (AW and PF), which rely on episodic, trajectory-based returns rather than on purely transition-level optimization as DW does. In the case of IQL and TD3BC, we see a clear performance improvement of our methods over uniform sampling and PF, while our methods perform on par with AW. For IQL, we hypothesize that this is because IQL is less prone to overfitting on small amounts of data due to its weighted behavior cloning objective [18], which always uses in-distribution actions. However, it is worth noting that IQL falls short in performance compared to CQL with DW-Uniform. This suggests that IQL may primarily focus on replicating behaviors from high-return trajectories instead of surpassing them, as it lacks the explicit dynamic programming used in CQL.
**Takeaway.** Since CQL with DW-AW outperforms the other two offline RL algorithms in both dataset types, our suggestion is to opt for CQL with DW-AW, especially when dealing with datasets that might exhibit an imbalance and include trajectories originating from comparable initial states.
### 5.3 Scenario (ii): Trajectories with Diverse Initial States
Figure 4 presents the results on thirty-two datasets of trajectories with diverse initial states (Section 5). We observe that uniform sampling's performance drops significantly in these datasets compared to trajectories with similar initial states, indicating that the presence of diverse initial states exacerbates the impact of imbalance. Both of our methods consistently outperform all other approaches considered in Section 5.2, including AW and PF methods. Notably, even the best-performing variant of AW (AW-M) falls short of matching the performance of our DW-AW, demonstrating the effectiveness of DW in leveraging the initial sampling distribution provided by AW and furthering its performance. The performance degradation of AW can be attributed to the presence of diverse initial states and varying trajectory lengths in these datasets. In such cases, state-action pairs in trajectories with high returns are not necessarily generated by high-performing policies. For instance, a sub-optimal policy can also easily reach the goal and achieve a high return if it starts close to the goal (i.e., lucky initial states). Consequently, over-sampling state-action pairs from high-return trajectories can introduce bias towards data in trajectories starting from lucky initial states. Although AW attempts to address this issue by subtracting the expected return of initial states (see Section 5.1), our results show that AW has limited success in addressing this issue. This is because the estimated expected returns of initial states can be inaccurate since AW uses the trajectories' returns in the dataset to estimate initial states' returns (i.e., Monte Carlo estimates). The trajectories in the dataset are finite in length, which makes the Monte Carlo estimates of expected returns inaccurate. To conclude, when an imbalanced dataset consists of trajectories starting from diverse initial states, we recommend using DW-AW to re-weight the training objectives in offline RL algorithms.
Figure 3: **(a)** Our methods, DW-AW and DW-Uniform, achieve higher return than Uniform, indicating that DW can enhance the performance of offline RL algorithms on imbalanced datasets. Note that although our methods with IQL do not surpass AW and PF-10% in performance, they can be applied to offline RL datasets that are not curated into trajectories. **(b)** Our methods outperform Uniform in CQL, IQL, and TD3BC, indicating no significant overfitting on smaller datasets. DW-AW demonstrates superior returns compared to AW and PF, particularly with CQL, indicating that our method effectively leverages limited data. IQL shows limited gains, likely due to its difficulty in utilizing data from the remaining low-return trajectories in the dataset (see Section 5.2).

## 6 Related Work

Our approach builds upon recent advances in off-policy evaluation techniques, specifically density-ratio importance correction estimation (DiCE) [32]. DiCE has been primarily used for policy evaluation [29; 32; 10], while our method makes use of DiCE (i.e., the learned importance weights) to re-weight samples for offline RL algorithms. Recent works [43; 37; 33; 26] optimize the policy using DiCE via re-weighting behavior cloning with DiCE, while we found that this fails to match offline RL algorithms' performance even on datasets with plenty of expert demonstrations (Appendix A.5).
Offline imitation learning approaches [15; 30; 41] also consider imbalanced datasets similar to ours. However, these methods assume prior knowledge of which data points are generated by experts, while our approach does not rely on such information. Furthermore, our method can effectively handle datasets that include a mixture of medium-level policies and low-performing policies, whereas existing approaches often rely on expert-labeled data.
Multi-task offline RL algorithms [42; 14] filter data relevant to the current task of interest from datasets collected across multiple tasks. For example, Yu et al. [42] employ task-relevance estimation based on Q-value differences between tasks. While our motivation aligns with data filtering, our problem setting differs as we do not assume knowledge of task identifiers associated with the data points. Additionally, our dataset comprises varying levels of performance within the same task, while existing works mix data from different tasks.
Support constraints [20; 38; 2; 40] have been proposed as an alternative approach to prevent offline RL algorithms from exploiting out-of-distribution actions, distinct from the distributional constraints used in state-of-the-art methods [22; 7; 18]. While support constraints theoretically suit imbalanced data, prior work [38] found that they have not shown significant improvements over distributional-constraint-based algorithms. Note that our method is independent of the constraint used in offline RL algorithms; support constraints are therefore orthogonal to our approach.
## 7 Conclusion, Future Directions, and Limitations
Our method, density-ratio weighting (DW), improves the performance of state-of-the-art offline RL algorithms [22; 18] over \(72\) imbalanced datasets of varying difficulty. In particular, our method exhibits substantial improvements on the more challenging and practical datasets where the trajectories start from diverse initial states and only a limited amount of data is available. Future works can explore other optimization techniques to better address the Bellman flow conservation constraint in importance-weight optimization (e.g., the Augmented Lagrangian method [37]). Additionally, it would be valuable to study the impact of violating this constraint on the effectiveness of importance-weighted offline RL algorithms.
**Limitations.** Although our method improves performance by optimizing sample weights, we lack theoretical guarantees due to the absence of a unified theoretical analysis on the dependence of state-of-the-art offline RL algorithms on imbalanced data distribution. While some theoretical works [4; 36] have analyzed the interplay between data distribution and offline RL algorithm performance, they primarily focus on specific algorithms that differ significantly from the practical state-of-the-art offline RL algorithms.
Figure 4: Results on imbalanced datasets with trajectories starting from diverse initial states (Section 5.3). Compared to Figure 3(a), the performance of uniform sampling and AW decreases, showing that diverse initial states exacerbate the issue of imbalance. Our methods, DW-AW and DW-Uniform, achieve higher return than all the baselines, which suggests DW is advantageous in broader types of imbalanced datasets.
## Author Contributions
* **Zhang-Wei Hong:** Led the project and the writing of the paper, implemented the method, and conducted the experiments.
* **Aviral Kumar:** Advised the project in terms of theory, algorithm development, and experiment design. Revised the paper and positioned the paper in the field.
* **Sathwik Karnik:** Prepared the datasets and proofread the paper.
* **Abhishek Bhandwaldar:** Helped scale up experiments on the cluster.
* **Akash Srivastava:** Advised the project in the details of the practical and theoretical algorithm design and coordinated the compute.
* **Joni Pajarinen:** Advised the project in the details of the practical and theoretical algorithm design and experiment designs.
* **Romain Laroche:** Advised the project in the theory of the algorithms and dataset designs.
* **Abhishek Gupta:** Advised the project in terms of theory, algorithm development, and experiment design. Revised the paper and positioned the paper in the field.
* **Pulkit Agrawal:** Coordinated the project, revised the paper, and positioned the paper in the field.
## Acknowledgements
We thank members of the Improbable AI Lab for helpful discussions and feedback. We are grateful to MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing HPC resources. This research was supported in part by the MIT-IBM Watson AI Lab, an AWS MLRA research grant, Google cloud credits provided as part of Google-MIT support, DARPA Machine Common Sense Program, ARO MURI under Grant Number W911NF-21-1-0328, ONR MURI under Grant Number N00014-22-1-2740, and by the United States Air Force Artificial Intelligence Accelerator under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
|
2301.12530 | G-Rank: Unsupervised Continuous Learn-to-Rank for Edge Devices in a P2P
Network | Ranking algorithms in traditional search engines are powered by enormous
training data sets that are meticulously engineered and curated by a
centralized entity. Decentralized peer-to-peer (p2p) networks such as
torrenting applications and Web3 protocols deliberately eschew centralized
databases and computational architectures when designing services and features.
As such, robust search-and-rank algorithms designed for such domains must be
engineered specifically for decentralized networks, and must be lightweight
enough to operate on consumer-grade personal devices such as a smartphone or
laptop computer. We introduce G-Rank, an unsupervised ranking algorithm
designed exclusively for decentralized networks. We demonstrate that accurate,
relevant ranking results can be achieved in fully decentralized networks
without any centralized data aggregation, feature engineering, or model
training. Furthermore, we show that such results are obtainable with minimal
data preprocessing and computational overhead, and can still return highly
relevant results even when a user's device is disconnected from the network.
G-Rank is highly modular in design, is not limited to categorical data, and can
be implemented in a variety of domains with minimal modification. The results
herein show that unsupervised ranking models designed for decentralized p2p
networks are not only viable, but worthy of further research. | Andrew Gold, Johan Pouwelse | 2023-01-29T20:15:49Z | http://arxiv.org/abs/2301.12530v1 | # G-Rank: Unsupervised Continuous Learn-to-Rank for Edge Devices in a P2P Network
###### Abstract
Ranking algorithms in traditional search engines are powered by enormous training data sets that are meticulously engineered and curated by a centralized entity. Decentralized peer-to-peer (p2p) networks such as torrenting applications and Web3 protocols deliberately eschew centralized databases and computational architectures when designing services and features. As such, robust search-and-rank algorithms designed for such domains must be engineered specifically for decentralized networks, and must be lightweight enough to operate on consumer-grade personal devices such as a smartphone or laptop computer. We introduce G-Rank, an unsupervised ranking algorithm designed exclusively for decentralized networks. We demonstrate that accurate, relevant ranking results can be achieved in fully decentralized networks without any centralized data aggregation, feature engineering, or model training. Furthermore, we show that such results are obtainable with minimal data preprocessing and computational overhead, and can still return highly relevant results even when a user's device is disconnected from the network. G-Rank is highly modular in design, is not limited to categorical data, and can be implemented in a variety of domains with minimal modification. The results herein show that unsupervised ranking models designed for decentralized p2p networks are not only viable, but worthy of further research.
_Author's note: the experiments performed herein are open-source and can be found on GitHub1._
Footnote 1: [https://www.github.com/awrgold/G-Rank](https://www.github.com/awrgold/G-Rank)
## I Introduction
The problem of relevance ranking in information retrieval problems has been well-studied for decades, solutions for which have enabled users to query vast swathes of information on the World Wide Web and retrieve highly relevant results within milliseconds. Nascent search-and-rank techniques for web search culminated with PageRank in 1998 [1], directly leading to Google's ascendant dominance in the web search domain. All such algorithms, however, depend upon ever-growing databases of mapped relations between various information sources and topics, requiring enormous computational power to deliver lightning-fast results directly to a user's device. Therefore, these algorithms all depend upon highly centralized information architectures with thousands of skilled attendants dedicated to maintaining and improving system capabilities. In such a paradigm, the risk of impropriety such as misdirection and fraud is high due to the enormous financial incentives for being ranked higher in search results.
As such, typical ranking algorithms are wholly unsuited for deployment in decentralized information architectures such as peer-to-peer (p2p) file sharing networks (e.g. BitTorrent) and various Web3 applications. These networks are largely comprised of individual users where the maximum computational and storage capacity available to any search-and-rank algorithm is that of an individual's desktop computer or mobile device. The success of many nascent applications built atop decentralized networks therefore depends upon the efficacy of novel search-and-rank schemes designed specifically for edge devices in these domains. These algorithms must have a zero-server architecture, be lightweight enough to run on a cheap smartphone, and yet be robust enough to return highly relevant results to each individual user.
Furthermore, these algorithms must adhere to the ethos of these decentralized networks, which often emphasize user privacy and information security foremost among its tenets. Any ranking algorithm built in such a domain must therefore be able to function effectively utilizing data immediately available to a user of a p2p application, the majority of which is often the user's own data. That is not to say that a ranking algorithm cannot be improved via the sharing of information between participants in such networks, but rather that the algorithm must be entirely self-sufficient and self-contained without any meaningful expectation of obtaining new information outside of the local device. As first proposed in 2013 by Ormandi et al. [27], the concept of utilizing message-passing as a means to build a cohesive machine learning model in a distributed setting became a novel instrument in respecting user privacy by emphasizing local-first computational paradigms.
The concept of local-first software is not new [26], and privacy-preserving machine learning schemes such as encrypted machine learning [38][39] and federated machine learning [34][35][36][37] already exist, yet the problems of security, storage, and overhead persist. Unfortunately, most of these machine learning models are supervised which handicaps developers by requiring large amounts of high-quality training data to achieve meaningful results. Furthermore, many unsupervised ranking models that show promising results [19][20][21][22] are designed exclusively for centralized systems. As such, any decentralized algorithm
or model that can quickly and sufficiently retrieve and rank search results without the need for model training or human supervision would allow for p2p networks of any size to deliver meaningful search capabilities in a more trustless fashion. Therefore, truly decentralized unsupervised ranking system sits at the forefront of p2p and Web3 communications development.
The rapid growth of p2p file-sharing networks around the turn of the new millennium led to a boom in research for search algorithms designed explicitly for such networks [2][3][4][7][8][9][13][16]. Many such algorithms attempted to recreate the efficacy of well-known existing search and rank algorithms such as PageRank, yet the number of publications plateaued and began to decline around 2012. The explosive growth of blockchain and Web3 technologies has influenced a new generation of developers designing for a more decentralized web experience. Decentralized search and rank algorithms that do not depend upon any centralized entity to function properly, are domain-independent, and can sufficiently replicate the performance of more centralized solutions are still nascent. We demonstrate that a simple, lightweight, and effective ranking algorithm can be deployed to p2p applications while achieving respectable results.
We introduce the unsupervised ranking algorithm G-Rank designed explicitly for ranking search results in an internet-deployed p2p torrent-based music streaming platform. The goal of this first validation experiment is to demonstrate the "correctness" of an unsupervised learn-to-rank (LTR) model in the context of a distributed p2p file sharing network. This model requires no training data to function, is capable of returning relevant results to users within the first few queries, and is not constrained by any dependence upon large datasets. G-Rank is demonstrably capable of ranking results in line with their global popularity, even though the model itself is unaware of the best possible ranking for any given query term. G-Rank will quickly approach the optimal global ranking for all peers in the network, even if a user does not perform any queries themselves; as a network utilizing G-Rank grows in usership, new users will see highly relevant results even with their first query.
The rest of this paper is as follows. Section 2 expounds upon the problem of relevance ranking, namely supervised versus unsupervised methods. Section 3 details the implementation of the G-Rank algorithm, describing the click-log structure and gossip-based information dissemination mechanism necessary for its functioning, as well as the experimentation and evaluation of the model. Section 4 describes a number of experimental simulations of p2p network participants under a variety of scenarios, including the results of each experiment. Section 5 concludes that our algorithm is capable of deployment, and provides suggestions for future work.
## II Problem Description
Security within the domain of decentralized machine learning remains an unsolved problem. There exist numerous additional constraints in decentralized networks that traditional machine learning models need not be concerned with. Trustless, anonymous networks are rife with malevolent usership, and the task of identity verification in such networks also remains unsolved. Adding an additional layer of complexity, many p2p networks are built upon open-source software, affording any would-be adversary direct insight into potential attack vectors. As such, machine learning models engineered for public p2p networks require exceptional attention to detail across all facets of their design. These constraints disqualify any supervised models from the outset as they violate the trustless nature of p2p networks. Either the engineers of such supervised models must be trusted to train and validate the model, or the network participants must provide training data themselves, thereby introducing a critical vulnerability. Creating an LTR search engine for a p2p domain that requires no training yet can converge towards an optimal ranking as if an error rate is being minimized in a supervised model would constitute a major development in p2p applications.
Learn-to-rank is a well-known and thoroughly-studied problem with myriad solutions achieving excellent results, yet many of the most well-known ranking algorithms are designed around centralized data aggregators and supervised training methods. Past research into ranking search results within p2p networks are almost exclusively supervised methods [6][17][33][40], which besides the traditional pitfalls mentioned above also constrain the ranking problem into an optimization problem. Furthermore, such supervised methods lack inherent "memory" such that they cannot retain new information as they observe it; as such, they require large training sets and trusted providers of training data. Compiling relevant datasets and appropriate labels requires considerable effort, which historically has been performed manually by humans and is infeasible for exceptionally large datasets. Automated labeling methods such as semi-supervised learning can speed up this process, but these methods have the drawback of imparting their own inherent bias into the constructed dataset [42][43]. Therefore, the difficulty of labeling data in a manual or semi-supervised manner grows faster relative to the increase in size of data.
Other solutions treat ranking as a recommendation prediction problem, where results are sorted by the predicted score [30][31][32][33]. Framing the ranking problem as a recommendation prediction problem also depends heavily on the manner in which users "score" items that they are recommended. Depending on the application, the manner in which scores are calculated heavily influences the behavior of the recommender. In the domain of e-commerce, an item purchased by a user may be assigned a higher score than an item said user has viewed multiple times but not purchased, even if the user feels that the viewed item is more relevant to them. Meanwhile, a music recommender may assign a higher score to a song that appears in multiple playlists of a specific user yet has fewer overall streaming plays than a song that does not appear in any playlist yet contains a significant number of streaming plays for that same user. As such, any scoring system must be thoughtfully designed for
the specific recommendation algorithm and its domain.
With regards to distributed machine learning, federated machine learning has several drawbacks in this domain as well. Federated models are often less accurate due to their relative inability to capture the variance in the overall data throughout the network, as each model is iteratively fitted to a small subset of data. Federated learning techniques, as presented in [34][35][36][37], utilize message passing to disseminate model parameters during training. This parameter-passing mechanism is often considered sufficient to obfuscate local data - affording some degree of user privacy - though such methods are insufficient to prevent determined adversaries from recreating input data [44]. That being said, any such supervised methods still face the issue of requiring training datasets, which limits the scope of potential research due to inadequate training data availability and the infeasibility of synthesizing such datasets oneself. As such, unsupervised ranking algorithms that can approach the performance of supervised ranking methods may be better suited towards p2p domains, where a significant portion of software is open-source and user privacy is often given higher priority than for traditional web services. Significantly reduced overhead in algorithm implementation and maintenance, therefore, is of major benefit to p2p applications.
Machine learning models deployed in distributed or decentralized settings are vulnerable to several specific attack vectors, namely sybil and spam attacks, which can undermine model accuracy and efficacy, e.g. via "model poisoning attacks" [10][11]. Such attacks are inherently difficult to thwart in any decentralized network setting. As shown in [5], even PageRank is not immune to sybil attacks and therefore also requires considerable adaptation to trustless p2p environments. Sybil attacks on federated machine learning models present critical vulnerabilities, and solutions such as those mentioned in [41] depend upon assumptions that are unobtainable in live p2p networks. Meanwhile, spam attacks are often broader in scope yet still pose significant risk to machine learning models whose efficacy depend upon the veracity of the data they are fed.
These threats are well-understood and a variety of methods to thwart such attacks exist [41][45][46], however many of these solutions are based on supervised learning and therefore suffer from the same issues mentioned previously, or require the aggregation of network traffic through centralized "coordinators," eroding the trustlessness of p2p networks. As such, unsupervised machine learning models that are robust enough to function in the midst of spam or sybil attacks are critical to the expansion of search, ranking, and recommendation models for the decentralized web.
## III Architecture of G-Rank
Our G-Rank algorithm is a first humble step towards a first decentralised search engine. We focus on the domain of music and video search specifically. Our p2p architecture assumes each user operates their own node and searches for BitTorrent-based Creative Commons licensed music. This music application allows users (A.K.A. "nodes" when referring to network architecture, or "peers" when referring to other users in the network) to query other peers for the contents of their library and download files to their device. The clicklog is the central data structure within our architecture. It contains the user query and supporting info. Whenever a user issues a query, the user device appends the query and its associated results to a clicklog that is stored locally on the device. At any point, each peer can request an update from another peer containing its local clicklog, disseminating clicklog data with other peers in the network via a gossip protocol (See Section 3B). When a user receives a gossip message containing updated clicklog information, the device appends the new information to the local clicklog to be used by the ranking model in future queries.
The unsupervised method detailed herein focuses on ranking query search results relative to one another, i.e. pairwise comparison across all potential results. Because each node in the network contains only a small subset of total possible search results, it is highly unlikely that any one node can attain perfect ranking results without the dissemination of local clicklog information to other nodes in the network. Such a mechanism - be it via gossip, broadcasting search history, or a centralized information aggregation scheme - directly and heavily influences the behavior of the unsupervised ranking model. The continuous updating of data accessible to G-Rank is an example of continuous learning [12], where the model requires no re-training as each new data point becomes available. Instead, as each gossip message is received, G-Rank considers this new information in real time, affording it the ability to continuously adapt to an ever-changing environment with zero human intervention. Therefore, the ranking model's dependence upon the clicklog dissemination scheme is closely investigated alongside the actual performance of the ranking model, where two distinct gossip schemes are considered alongside ranking model parameters and functionality.

Fig. 1: Decentralized p2p networks are zero-server architectures where often the only mechanism of information dissemination is via message-passing, infusing an additional constraint into the machine learning architecture.
### _Unsupervised Ranking Model_
When a user searches for a query term, the ultimate goal is to provide the most accurate list of results ranked by relevance to the query term as well as to the user. First, the model checks the local clicklog for previous instances of a query term, and if this term has never been queried before it then searches for matches of this term in the metadata of local files, including the title, artist, and genre tags. The model does not consider misspellings/typos, although methods such as those mentioned in [2] are highly effective at correcting for typos in information retrieval (IR) schemes and could potentially be integrated with G-Rank. If the query term has been seen before, it returns the most popular results for this query weighted by the similarity of search and click behavior of other users who have also issued similar or identical queries (as described in Section 3D). In order to avoid plateauing performance, G-Rank incorporates a degree of statistical noise by swapping two randomly-selected items in the list of results for 50% of the queries.
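As an illustration of this query flow, the following Python sketch (not taken from the G-Rank repository; function and field names are our own, and the data representation is assumed) shows how a node might assemble candidate results from its local clicklog and metadata, and then inject ranking noise by swapping two randomly chosen results for roughly half of the queries.

```python
import random

def candidate_results(query, clicklog, library):
    """Collect candidate items for a query: metadata matches first, then clicklog matches."""
    # Items whose title/artist/tag metadata contains the query term.
    meta = [item["id"] for item in library
            if query in item["tags"] or query in item["title"] or query in item["artist"]]
    # Items previously clicked for this exact query term (own and gossiped clicklog rows).
    seen = [row["clicked_item"] for row in clicklog if row["query"] == query]
    # Remove duplicates while preserving order; metadata matches rank above clicklog-only matches.
    return list(dict.fromkeys(meta + seen))

def add_ranking_noise(ranked, swap_probability=0.5):
    """Swap two randomly selected positions in about half of the queries to avoid plateauing."""
    ranked = list(ranked)
    if len(ranked) >= 2 and random.random() < swap_probability:
        i, j = random.sample(range(len(ranked)), 2)
        ranked[i], ranked[j] = ranked[j], ranked[i]
    return ranked
```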
Due to the fact that the search mechanism considers only the clicklog and item metadata, it is extremely unlikely that an item should erroneously become popularly associated with a query term that has no direct match with any of the item's metadata. The only situation in which this could arise is if a query term has never been seen before nor is contained in any accessible metadata. Should this happen, the search engine returns a list of popular items that have appeared recently in the user's local clicklog. However, because peers have the option to share their local clicklogs with other peers upon request, it is entirely plausible that a node or subset of the network could be unaware of newly added items with matching metadata at the time of the query. If this were to occur, a user could click on a recommended item that contains no matching metadata to the search query and then gossip their clicklog history to nearby nodes, who then also perform a search for the same term and click on the same result. Such an occurrence would then erroneously lead to a term-item pairing for which the associated item actually contains no matching metadata, which could then propagate throughout the network.
In order to avoid this situation, search results that contain matching metadata are always ranked above items that have term-item matches in the clicklog yet contain no matching metadata. The justification for such is that should users wish to find a specific item, they are ostensibly aware of the title, artist, album, or some other trait that would be found within the item's metadata such that they need not rely entirely upon the search history of a specific term in order to find said item. A positive side-effect of this restriction is that it also diminishes the effect of adversarial users "query-bombing" the network to negatively influence the performance of the ranking model.
### _Clicklog Structure_
Each node in the network contains a clicklog that stores the following primary attributes as a row entry: the node's unique ID, the query term, the query results in descending order, and the item the user clicked on. Additional clicklog attributes include the title of the item clicked upon, the tag metadata associated with that item, and a unique key associated with the query term consisting of the concatenation of the node's unique ID and the local query number. These additional attributes are used primarily during the evaluation of the simulations, though G-Rank does consider tag metadata during ranking if the querying node's clicklog does not reflect any direct query term matches. When a query is performed, the results are stored in local memory until a user clicks upon a result, after which the clicklog entry is created and appended locally. Over time, each node becomes increasingly aware of the click behavior of other nodes in the network without necessarily gleaning insight into the local libraries of said nodes. As such, the dissemination of clicklog data enables the unsupervised model to learn from the behavior of other users without revealing personally identifiable information.
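A minimal sketch of such a clicklog row is given below, using a Python dataclass with field names chosen purely for illustration (the actual implementation may name and store these attributes differently).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClicklogEntry:
    node_id: str                # unique ID of the node that issued the query
    query: str                  # the query term (a single tag in these experiments)
    results: List[str]          # ranked result item IDs, in descending order
    clicked_item: str           # the item the user actually clicked
    clicked_title: str = ""     # title of the clicked item (used during evaluation)
    tags: List[str] = field(default_factory=list)  # tag metadata of the clicked item
    entry_key: str = ""         # node ID concatenated with the local query number
```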
### _Gossiping Clicklogs_
Gossip-based protocols allow for dissemination of information throughout a p2p network with varying degrees of efficiency. Regarding G-Rank, it is understood that traditional unsolicited gossip propagation schemes present a clear and present attack vector for adversaries to undermine the model's performance. Therefore, G-Rank depends upon _solicited gossip_ for clicklog dissemination. At any time, any node can send a _request_ message to one or more nodes it is aware of, also known as a _pull_ gossip scheme. The recipient of a _request_ message may reply with a _response_ message containing some or all of its local clicklog, which may contain clicklog entries from other nodes that the recipient has received via issuing its own _request_ messages. In our experiments, nodes cannot refuse update requests and only send _request_ messages to a single node.
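A simplified pull-gossip exchange might look as follows; networking, message serialization, and deduplication details are omitted, and the class and field names are illustrative rather than taken from the implementation.

```python
import random

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.clicklog = []          # list of clicklog rows (dicts with at least "node_id")
        self.known_peers = set()    # peer IDs discovered via gossiped clicklog rows
        self.shared_up_to = {}      # gossip progress: rows already sent to each requester

    def request_update(self, peers_by_id):
        """Pull gossip: ask one known peer for the part of its clicklog we have not seen."""
        if not self.known_peers:
            return
        target = peers_by_id[random.choice(sorted(self.known_peers))]
        self.ingest(target.respond(self.node_id))

    def respond(self, requester_id):
        """Reply with clicklog rows not previously shared with this requester."""
        start = self.shared_up_to.get(requester_id, 0)
        self.shared_up_to[requester_id] = len(self.clicklog)
        return self.clicklog[start:]

    def ingest(self, rows):
        """Append gossiped rows and discover any previously unseen node IDs."""
        for row in rows:
            self.clicklog.append(row)
            if row["node_id"] != self.node_id:
                self.known_peers.add(row["node_id"])
```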
The gossip protocol that propagates clicklog information directly affects the performance of the ranking model, and therefore needs to be deliberately designed such that clicklog information is adequately disseminated without congesting the network. In order to determine exactly how the gossip parameters affect the model, specific evaluation metrics need to be defined. For example, should a node receive \(|K|=10\) results for a specific query, it is important to determine how many of these results are in the "optimal" ranking, i.e. for each result \(k_{i}\in K\) the distance between the local rank \(L(k_{i})\) in the above query versus the global average rank \(G(k_{i})\) across all participants in the network for that query term. In this situation, an item with an "optimal" ranking has a distance of \(G(k_{i})-L(k_{i})=0\) for any specific query.

Fig. 2: The primary attributes of the clicklog data structure. Each entry contains a unique identifier of the node performing the query, the query term, the ranked results (in descending order) for the query, and the item clicked upon.
In distributed and decentralized networks it is well-understood that obtaining a global "snapshot" of the current network state becomes intractable as the network grows large. Well-known algorithms such as Chandy-Lamport [25] are still imperfect as they fail to capture incipient changes to the network state deriving from messages that are currently underway during the time of the snapshot, such that by the time the algorithm terminates the state of the network may have already changed. As such, determining a global truth for a p2p network can only be easily performed in a contained simulation environment in which a global observer aggregates all changes to the network's state. Therefore, it must be understood that any comparison against a "global" optimum in our experiments comes with the caveat that in a live network the global optimum may not be feasibly observable.
### _Node Discovery and Similarity Clustering_
The primary unsupervised method in G-Rank is based on a fuzzy non-parametric semantic self-clustering of nodes based upon a pairwise similarity score as described below. When a gossip response is received by node \(n_{i}\) it appends the new data to its existing clicklog and updates its local list of known nodes in the network. Such is the mechanism of node discovery in the network: via the receipt of clicklog data from other nodes in the network.
After receiving a clicklog update, \(n_{i}\) searches the incoming data for previously unseen unique node IDs. These unseen node IDs are added to a local list of known nodes, which are then sorted in descending order by a modified Jaccard similarity score between their queries and the results they each click upon. The similarity score \(S\) between a pair of nodes \(n_{i}\) and \(n_{j}\) is calculated as follows:
* Find the cardinality of the intersection of the top \(K\) query terms \(T^{K}(Q)\) between \(n_{i}\) and \(n_{j}\), denoted as \(\kappa_{t}\).
* Find the cardinality of the intersection of clicked results for all query terms \(C_{i}(Q)\) and \(C_{j}(Q)\) between \(n_{i}\) and \(n_{j}\), denoted as \(\kappa_{m}\).
* The sum \(\kappa_{t}+(\kappa_{m})^{2}\) is divided by the cardinality of the union of clicked results for all query terms \(C_{i}(Q)\) and \(C_{j}(Q)\) between \(n_{i}\) and \(n_{j}\), denoted as \(\kappa_{u}\).
That is,
\[S_{i}(n_{j})=\frac{\kappa_{t}+(\kappa_{m})^{2}}{\kappa_{u}}\]
The list of scores \(S_{i}(N)\) is normalized by dividing by \(\texttt{max}(S_{i})\) resulting in a similarity score between \(0.0\) and \(1.0\), where \(1.0\) indicates that two nodes have clicked on the exact same item for every single matching query. Therefore, the similarity score is a weighted ratio of identical query-click tuples to the overall number of queries shared between two nodes. As such, every node maintains a list of nodes it has become aware of via the clicklog, and determines its similarity to other nodes based on past click behavior. This similarity is then used to weight the results of future queries based on the click behavior of other users, such that users are more likely to see results other similar users have clicked on for similar query terms. By including \((\kappa_{m})^{2}\) in the similarity score, we account for divergent click behavior such that node similarity scores follow an exponential gradient. If the click behavior of node \(n_{i}\) diverges from that of \(n_{j}\) over time, \(S_{i}(n_{j})\) will more rapidly decrease than otherwise, allowing for more expedient "re-clustering."
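A direct transcription of this score into Python might look as follows. We assume each node's history is available as an ordered list of query terms and a set of (query, clicked item) pairs; this representation and the function names are our own, not the paper's code.

```python
def similarity(node_i, node_j, k=10):
    """Modified Jaccard similarity between two nodes' query/click histories."""
    top_k_i = set(node_i.queries[:k])            # node_i.queries: query terms, most frequent first
    top_k_j = set(node_j.queries[:k])
    kappa_t = len(top_k_i & top_k_j)             # shared top-K query terms
    kappa_m = len(node_i.clicks & node_j.clicks)  # identical (query, clicked item) pairs
    kappa_u = len(node_i.clicks | node_j.clicks)  # union of all (query, clicked item) pairs
    if kappa_u == 0:
        return 0.0
    return (kappa_t + kappa_m ** 2) / kappa_u

def normalized_similarities(me, others, k=10):
    """Normalize scores into [0, 1] by dividing by the maximum observed score."""
    raw = {other.node_id: similarity(me, other, k) for other in others}
    max_score = max(raw.values(), default=0.0)
    if max_score == 0.0:
        return raw
    return {node_id: score / max_score for node_id, score in raw.items()}
```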
In order to isolate highly divergent click behavior, we introduce the isolation constant \(F\) to the user similarity score. When \(F=0\), only the clicklogs of adjacent nodes with \(S_{i}(n_{j})>0\) are considered when ranking results. When \(F=1\), a node considers with equal weight the clicklog entries of all nodes it has received gossip from when ranking query results. As such, this isolation parameter allows for nodes to discount the clicklogs of other nodes if these nodes have query and click behavior that does not match its own at least once. Similarity weighting is calculated by taking the dot product between the aforementioned similarity scores for each node and sorted results based on the overall number of clicks found in each node's local clicklog. The resulting ranking \(R\) provided to querying node \(n_{i}\) for query \(Q\) is therefore calculated as:
\[R_{i}(Q)=(\forall k\in K_{Q}),\ \ \sum_{j=0}^{N}(C_{k}\cdot(S_{i}(n_{j})+F))\]
where \(S_{i}(n_{j})\) indicates the similarity score for each node pair \((n_{i},n_{j})\in N\), and \(C_{k}\) indicates the number of clicks associated with item \(k\in K_{Q}\) where \(K_{Q}\) is the unsorted set of results for query \(Q\). The resulting items are sorted in descending order by their associated scores. As such, each potential query result is assigned a score based on the number of clicks found in each node's clicklog, weighted by the similarity of each node to the node performing the query. Therefore, the dissemination of clicklog data not only informs other nodes of the popularity of items, it also allows for nodes to cluster themselves based on an easy-to-compute metric, further allowing for personalization of results.
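Putting the pieces together, the similarity-weighted ranking described above can be sketched as shown next. The per-item click counts \(C_{k}\) are tallied from each known node's clicklog rows for the given query; the function signature and data layout are illustrative assumptions.

```python
from collections import defaultdict

def rank_results(query, candidates, clicklogs_by_node, similarities, isolation_f=0.0):
    """Score each candidate by clicks across nodes, weighted by node similarity plus F."""
    scores = defaultdict(float)
    for node_id, rows in clicklogs_by_node.items():
        weight = similarities.get(node_id, 0.0) + isolation_f
        if weight == 0.0:
            continue  # with F = 0, nodes sharing no query-click pair contribute nothing
        for row in rows:
            if row["query"] == query and row["clicked_item"] in candidates:
                scores[row["clicked_item"]] += weight  # each matching click adds S + F
    # Sort by descending score; unscored candidates keep their metadata-based order (stable sort).
    return sorted(candidates, key=lambda item: -scores[item])
```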
A potential drawback of introducing such a similarity metric into the ranked results is that it introduces a possible attack vector for adversaries to influence the results of future queries throughout the network, e.g. via spam or sybil attacks. Spam attacks become less viable as the number of legitimate users grows larger, while more targeted attacks may be thwarted by the user similarity scheme itself. An adversary attempting to undermine the ranking algorithm by intentionally selecting irrelevant results for specific queries would find themselves increasingly isolated from other users performing legitimate queries, as their behavior over time would continue to deviate from that of other users. Sophisticated adversaries would then need to mimic legitimate
behavior for a large portion of their queries in order to remain relevant to other users without ostracizing themselves.
## IV Dataset and Experiment Setup
Our dataset consists of actual music releases and associated metadata. Our experimental setup is tailored to minimize the work to deploy G-Rank for decentralised search of BitTorrent audio and video content.
### _Dataset_
The dataset utilized in this experiment was compiled from a series of 256 actual music releases by real artists via the PandaCD record label2, all of which were released under the Creative Commons license. Entries may be singles, albums, EPs (extended-play releases), and LPs (limited-play releases). Every entry consists of three attributes: _Title_, _Artist_, and _Album_, as well as a number of associated _Tags_ as metadata, which describe the release in terms of genre. These tags have been compiled into a corpus of potential query terms, and every query term in this experiment consists of exactly one tag, of which there are a total of 39 unique values.
Footnote 2: [https://pandacd.io/](https://pandacd.io/)
### _Experiment Setup_
For all experiments we conducted an evaluation round every 100 queries, where a number of performance metrics are gathered (see Section 5F). In addition to the regular performance evaluation, these evaluation rounds act as progress markers at discrete intervals in the simulation, which are discussed in Section 5G. Each experiment, including the baseline, was conducted twice: once with similarity weighted isolation constant \(F=0\) and again with \(F=1\), demonstrating the effect that cluster isolation (see Section 3D) has on G-Rank's performance. Unless stated otherwise, simulation parameters are as follows:
* All gossip targets are drawn exclusively from each node's local clicklog data.
* All nodes keep track of gossip _progress_ such that previously-shared clicklog contents are omitted from new gossip requests.
* When a new node joins the network, it is bootstrapped by a randomly-selected node who shares with it a randomly-sampled subset of its own clicklog. Via this bootstrap mechanism, each adversarial node becomes aware of a handful of other nodes in the network to which it can gossip during its attack phase.
* There are exactly 10 malicious nodes in each adversarial experiment (with the exception of the Epic Sybil Attack), which are bootstrapped as stated above at simulation time step \(t=2500\), exactly 25% through the simulation.
Across all experiments, the simulated network consists of 100 nodes, all of which begin with a limited number of library items. The simulation is initialized as follows. For each node \(n_{i}\in N,i=\{0,...,99\}\) in the network, \(n_{i}\) is initialized by selecting at uniform random \(10\%\) of the items from the music dataset to add to its local library (approximately 26 songs per node). Next, a series of initial queries are performed. For each of the 39 possible query terms, each node performs a search for said query term and chooses at random one item from its library with a tag matching the query term and appends this entry to its local clicklog. Should a node's library not contain any items with tags matching the query term, it selects at random a single item from its local library, thereby introducing a small degree of noise into the clicklog. At this point, no ranking or click modeling is utilized for selection, and the clicklog of node \(n_{i}\) contains exactly 39 items. Then, node \(n_{i}\) gossips a random sample of 10% of its local clicklog to node \(n_{i-1}\) such that each node contains no more than 44 clicklog entries; 39 belonging to itself, and up to an additional five items that it receives via gossip from another peer.
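The initialization procedure can be summarized in a short simulation sketch. Parameter values follow the description above, while data loading and the click model are stubbed out; all names are our own and the exact rounding of the 10% samples is an assumption.

```python
import random

NUM_NODES = 100
LIBRARY_FRACTION = 0.10
BOOTSTRAP_GOSSIP_FRACTION = 0.10

def initialize_network(dataset, query_terms):
    nodes = [dict(node_id=i, library=[], clicklog=[]) for i in range(NUM_NODES)]
    for node in nodes:
        # Each node starts with a uniform random ~10% of the 256-item dataset (~26 songs).
        node["library"] = random.sample(dataset, int(LIBRARY_FRACTION * len(dataset)))
        # One initial query per possible term; click a random matching (or random) library item.
        for term in query_terms:
            matches = [item for item in node["library"] if term in item["tags"]]
            clicked = random.choice(matches if matches else node["library"])
            node["clicklog"].append(dict(node_id=node["node_id"], query=term,
                                         results=[clicked["id"]], clicked_item=clicked["id"]))
    # Each node gossips ~10% of its own clicklog (a handful of rows) to its predecessor.
    samples = [random.sample(n["clicklog"],
                             round(BOOTSTRAP_GOSSIP_FRACTION * len(n["clicklog"])))
               for n in nodes]
    for i, sample in enumerate(samples):
        nodes[i - 1]["clicklog"].extend(sample)
    return nodes
```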
This method of initialization affords each peer in the network an even number of clicklog items to utilize during a query, but an uneven distribution of network knowledge across each node such that nodes with higher IDs are more likely to be aware of a higher number of peers at the outset of the simulation. After every node has been initialized, the simulation begins and nodes are chosen uniformly at random alongside a random query term from the corpus to perform a query-term search. The results of the search are ranked as detailed in Section 2A, and an item to be clicked upon is chosen based on the aforementioned click model. The search and click results are then appended to this node's local clicklog. Thereafter, this node then performs a gossip round by requesting a gossip update from a randomly-selected peer node it is aware of (except in the case of the _Push vs. Pull_ experiments, see Section 5D).
There are two popular schemes for initiating gossip in p2p networks: time-based and probabilistic. In time-based schemes, a node gossips every \(t\) time units, whereas in most probabilistic schemes any given node has a probability \(p\) per time unit to gossip some information to a subset of other nodes, such that after \(t\) time units there is a
\[\Pr(X=t)=(1-p)^{t-1}\cdot p\]
probability that a node will have gossiped. To clearly illuminate the effect of adversaries on G-Rank's performance, our experiment implements a hybrid gossip approach such that at every simulation tick \(t\) a random node is receiving at least one update from another node it is aware of (see Section 3C). As such, a node is guaranteed to receive gossip post-query yet still is chosen probabilistically such that the above geometric probability distribution holds, given that a node has probability \(p=\frac{1}{|N|}\) of performing a query-then-gossip operation at an arbitrary time step \(t\). By utilizing such a gossip mechanism we ensure that clicklog information is propagated regularly throughout each simulation.
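Under this scheme, the probability that a particular node performs its query-then-gossip step for the first time at tick \(t\) follows the geometric distribution above and can be checked numerically with a few lines (the helper below is illustrative only).

```python
def prob_first_gossip_at(t, num_nodes=100):
    """P(X = t) for a geometric distribution with p = 1 / |N|."""
    p = 1.0 / num_nodes
    return (1 - p) ** (t - 1) * p

# With 100 nodes, the chance a given node is first selected at tick 10 is about 0.9%,
# and the expected waiting time until selection is 1/p = 100 ticks.
print(round(prob_first_gossip_at(10), 4))
```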
### _Click Modeling_
Modeling realistic user-clicking behavior is essential to the development of ranking algorithms. Not all user clicks
may be on relevant items in a list, and as such it can be expected that a certain degree of noise exists in user click data. Extrapolating such noise into a simulation therefore requires careful consideration. Without anticipating and modeling a certain degree of noise, a ranking model's query results may erroneously converge towards irrelevant items. Anticipating and modeling noise in user click behavior has been investigated [15], however for this experiment users select the highest-ranking item in most queries, except when multiple results with equal relevance scores were shown to the user. In this case, the result with the lowest item ID is chosen as a tiebreaker.
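The click model used here is deliberately simple and can be written in a few lines; the relevance-score mapping passed in is an assumed representation, not a detail taken from the paper.

```python
def simulated_click(ranked_results, relevance_scores):
    """Click the top-ranked result; break score ties by choosing the lowest item ID."""
    if not ranked_results:
        return None
    top_score = relevance_scores[ranked_results[0]]
    tied = [item for item in ranked_results if relevance_scores[item] == top_score]
    return min(tied)  # lowest item ID wins the tiebreak
```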
## V Adversarial Simulations and Performance Analysis
We simulate several adversarial conditions alongside a baseline simulation with no adversaries. Each adversarial simulation is intended to isolate and investigate the effects of specific adversarial and anti-social behavior on G-Rank's performance. Each simulation's results are compared to the baseline global performance of G-Rank, as the global optimal rankings are negatively affected by such attacks. As such, each scenario's impact on G-Rank's ability to converge towards true global optimality without adversarial interference is investigated with the aid of the metrics described in Section 4B. In every adversarial simulation, the network is bootstrapped without any adversarial presence at first. At time step \(t=2500\), all attackers are bootstrapped into the network as described in Section 4B, where they lie in wait until time step \(t=5000\) to begin their attack. At this time, they perform their attack as described in each section below. Post-bootstrap, these adversarial nodes may receive request messages from benign nodes, even if they have not yet performed their attack.
### _Baseline Experiment_
Initially we conduct a baseline validation experiment to demonstrate the sensitivity of the node discovery process within distributed machine learning. Realistic simulations lack any centrality and thus have no central coordinator to discover other nodes. Our design integrates node discovery via the clicklog itself using a unique node identifier. Thus a single clicklog message provides both overlay network information for gossip dissemination, as well as the underlying data upon which the unsupervised model relies. This baseline experiment entails no adversarial interference, demonstrating how individual nodes adjust their rankings over time as they receive gossip throughout the simulation. All other experiments build upon this validation simulation's setup for comparison purposes.
### _Targeted Sybil Attack_
The first adversarial simulation is performed to demonstrate how a sophisticated adversary could undermine a p2p network utilizing G-Rank by forcing irrelevant results towards the top of query results. The adversaries execute a basic sybil attack where 10 new sybil nodes are bootstrapped into the network as described above, lying in wait until the predetermined attack time. At the time of attack, each sybil attacker chooses a single specific term to perform 100 queries with, each time clicking the bottom-most item in the list of ranked results. After the series of queries is complete, the attackers then lie in wait until they receive a gossip request from another peer in order to disseminate their malicious clicklog entries. This attack artificially inflates the relevance of otherwise low-ranked results to a specific query term, undermining the veracity of the rankings other nodes are shown. By repeatedly choosing the lowest-ranked item in the list, the attacker attempts to undermine G-Rank's ability to determine the most popular item associated with the query term. The purpose of this experiment is to examine the effect that deliberately misleading clicklog entries have on G-Rank's ability to converge towards optimality.
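The attack itself is easy to express against the interfaces sketched earlier; the `network.search` call and the node attributes used here are the hypothetical helpers from the previous sections, not actual code from the paper.

```python
def targeted_sybil_attack(sybil_node, target_term, network, num_queries=100):
    """Each sybil repeatedly queries one term and clicks the bottom-ranked result."""
    for _ in range(num_queries):
        results = network.search(sybil_node, target_term)
        if results:
            worst = results[-1]  # deliberately pick the lowest-ranked item
            sybil_node.clicklog.append(dict(node_id=sybil_node.node_id,
                                            query=target_term,
                                            results=list(results),
                                            clicked_item=worst))
    # The sybil then waits passively; its poisoned rows spread only when an honest
    # peer issues a pull-gossip request to it.
```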
### _Clicklog Inflation Attack_
The purpose of the second adversarial simulation is to examine G-Rank's ability to re-converge towards optimal rankings after a sudden, significant growth in the number of clicklog entries that the model considers when ranking content. This simulation differs from the _Targeted Sybil Attack_ in two key ways: each adversary performs \(1000\) queries instead of \(100\), and each adversary chooses results purely at random. By performing a significant number of queries before gossiping, the adversary attempts to undermine G-Rank by injecting significant statistical noise into each node's clicklog. The propagation of random clicklog noise throughout the network is compared against the _Targeted Sybil Attack_ mentioned previously.
### _Epic Sybil Attack_
The purpose of this experiment is to examine the performance of G-Rank in the face of a significant number of adversaries. The third adversarial simulation is nearly identical to the first, except that the number of attackers equals 75% of all network participants. Each node is bootstrapped in the same manner as the _Targeted Sybil Attack_, lying in wait until the predetermined attack time. At the time of attack, each node performs \(100\) queries, again choosing the lowest-ranked item in the results for each query. They then wait until a gossip request message is received. The inclusion of a network super-majority of sybil nodes is intended to investigate G-Rank's ability to improve rankings over time in the face of severe adversarial conditions.
### _Push vs. Pull Experiment_
Within this fourth experiment we introduce a number of malicious nodes which conduct a _Targeted Sybil Attack_ under a modified gossip scheme. This experiment shows the dramatic impact of _Push_ versus _Pull_ gossip. It demonstrates that it is vital for security that malicious nodes cannot easily insert their polluting content into honest peers' clicklogs, as is possible in a push architecture. With a pull architecture, peers are more autonomous and decide individually the rate of incoming information, whether they trust another peer, or may
randomly sample from discovered peers. Malicious nodes in this experiment send unsolicited clicklog gossip messages to up to two peers3, whereas benign nodes push to no more than one peer. As Internet bandwidth is cheap, this simple experiment shows a first line of defence against clicklog spam without the need for significantly modifying G-Rank's core functionality. With the pull architecture utilized in the original _Targeted Sybil_ attack, there exists only one recipient of a malicious node's gossip. In this experiment, all nodes must accept incoming gossip messages. By comparing the push gossip scheme to the original pull-based scheme, we illuminate the difference in G-Rank's convergence rate between two different gossip mechanisms without altering G-Rank's core functionality. The purpose of this experiment is to highlight the effect various gossip and information dissemination schemes have upon G-Rank's efficacy.
Footnote 3: In the rare circumstance that an adversary is only aware of one other peer, it only gossips to that peer.
### _Evaluation Metrics_
For each simulation we utilized a number of metrics to evaluate G-Rank's ranking performance over time, its tenacity when facing adversarial conditions, and the network capacity overhead over time. The primary performance metric utilized is a positional edit distance metric where we compare the sum of index distances between each unique element in \(R_{i}(Q)\) and \(R_{g}(Q)\), where \(R_{g}(Q)\) indicates the globally optimal ranking for query \(Q\). \(R_{g}(Q)\) is computed simply by ranking the most popular items by their respective number of clicks associated with a specific query term across all nodes. This metric allows us to determine how far each item is from its most optimal position at any given point in time, giving us the ability to determine how G-Rank performs for any given node for a specific query term.
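The positional distance metric can be computed as follows; both rankings are represented here as lists of item IDs, and items missing from one of the two lists are simply ignored in this simplified, illustrative version.

```python
def positional_distance(local_ranking, global_ranking):
    """Sum of |local index - globally optimal index| over items shared by both rankings."""
    global_index = {item: i for i, item in enumerate(global_ranking)}
    return sum(abs(i - global_index[item])
               for i, item in enumerate(local_ranking)
               if item in global_index)

# A perfectly ordered local ranking has distance 0 to the global optimum.
assert positional_distance(["a", "b", "c"], ["a", "b", "c"]) == 0
assert positional_distance(["b", "a", "c"], ["a", "b", "c"]) == 2
```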
We also consider the rate of G-Rank's convergence towards the global optimum averaged across all nodes and possible query terms over time. The rate of change in this distance metric affords us insight into G-Rank's behavior over time, particularly during the adversarial simulations, such that we can better understand how G-Rank's long-term performance is affected by transient adversarial events. For each possible query term we also measure the number of results containing the most popular result in the top position in order to demonstrate the roughly even distribution of performance, regardless of how frequently a specific query term is issued.
In terms of space and storage metrics, we also measure the average clicklog size across all nodes over time, as gossip occurs consistently yet as time goes on the number of duplicate clicklog items being shared likely continues to grow. To better understand G-Rank's dependence upon gossip, we monitor the rate of growth in gossip message size (in bytes) as individual clicklogs grow large - an important metric considering the potential variation in each node's processing power and storage space. However, we do not consider any time-based computational overhead metrics as these are highly dependent upon numerous factors, including the programming language in which G-Rank is implemented as well as each individual device's computational power.
### _Performance Analysis_
The results of the initial baseline experiment show that without any adversarial conditions the performance of G-Rank rapidly approaches the globally-optimal ranking for each node. Figure 3 shows that the distance between each node's local ranking of results and the globally-optimal ranking for each possible query term drops precipitously in early stages of the simulation, approaching perfect ranking scores for all peers in the network. Figure 7 shows that the median percentage of queries containing the most popular song per tag initially grows slowly, accelerating in growth as gossip continues due to the increasing awareness of other nodes' queries and results. As more gossip occurs, the number of queries containing the top song associated with each query approaches 100%. Notably, the _Epic Sybil Attack_ simulations were the only experiments in which the median percentage of queries did not reach 100% by the end of the simulation, which likely is due to the significantly higher number of adversaries polluting each peer's clicklog. Figure 8 shows how the number of most popular items associated with each possible query term grows at approximately even rates, indicating that the gossip scheme itself does not lead to an imbalance over time in ranking performance for lesser-used query terms. Figure 9 shows that while average gossip message size grows quickly at first, the rate of growth rapidly slows as peers become increasingly aware of one another. The size of each gossip message rarely exceeds 600 kB.

Fig. 3: Scatter plots depicting each node's average distance to optimal rankings for each term, for each baseline simulation.
Considering the known threat that sybil and spam attacks pose to p2p networks, the results of the adversarial simulations generally fall in line with expectations. G-Rank is susceptible to sybil and spam attacks, though its relative resilience in the face of targeted attacks is notable. However, both the _Targeted Sybil Attack_ and the _Clicklog Inflation Attack_ have an outsize effect on performance, where due to the sheer size of infected clicklog entries, the entire network converges towards a single set of rankings that it appears unable to escape from, even considering the injected noise described in Section 3A. This indicates that local minima are exceedingly difficult for G-Rank to escape from. As seen in Figure 4, G-Rank deviates from optimality post-attack, albeit at vastly different rates depending on the manner of adversarial interference. Figure 4 also shows that when \(F=0\), benign users are not as quickly affected by malicious gossip, most noticeably during the _Targeted Sybil Attack_ and the _Epic Sybil Attack_. Rankings in the _Clicklog Inflation Attack_ experience a more rapid divergence from optimality than in the _Targeted Sybil Attack_, though the results of the _Epic Sybil Attacks_ are significantly worse. When adversaries constitute a super-majority of peers in the network, results degrade rapidly before tapering off. Ranking results post-attack are still significantly more accurate than at the start across all adversarial simulations.
Fig. 4: Scatter plots depicting each node's average distance to optimal rankings for each term, for each adversarial simulation, for both \(F=0\) (top) and \(F=1\) (bottom).

Fig. 5: _Push vs. Pull_: Mean node distance to optimal rankings across all terms for \(F=0\).

Fig. 6: _Push vs. Pull_: Mean node distance to optimal rankings across all terms for \(F=0\).

Notably, network peers in the _Push vs. Pull_ simulation see ranking performance rates improve faster than in the _Targeted Sybil Attack_ with the pull-based gossip scheme, as seen in Figure 5 and Figure 6. The _Push vs. Pull_ comparison demonstrates that push-based gossip schemes result in faster convergence at the expense of faster divergence under adversarial influence. In both versions of this simulation, G-Rank converges significantly faster towards optimality, though it is almost immediately trapped in a local minimum post-attack, further bolstering the argument that the gossip dissemination scheme holds an outsize influence on G-Rank's overall performance and resilience.
The effect that the isolation constant \(F\) has on such behavior is minimal, but not negligible. Setting \(F=0\) has the consequence of effectively disqualifying any nodes without at least one matching query-click pair with the querying node. Any influence an adversary then has on ranking is dependent upon the number of matching query-click pairs. As such, as the diversity of clicklog entries grows larger so too should the effect such a parameter has on insulating nodes from other malicious peers, in theory. Conversely, when \(F>0\), all clicklog data (including malicious entries) is considered in the ranking process as the weight of each entry will subsequently also be a positive non-zero value. The positive effect this has is greater personalization of results for benign queries; clicklog results from nodes with similarity scores \(S_{i}(n_{j})=0\) still have the number of clicks associated with that result considered in the final ranking. The negative effect is that all clicklog entries, including malicious entries, are considered. In our simulations, there exists a negative effect on rank distances when \(F>0\), although the effect is not enough to permanently isolate peers from adversarial clicklog poisoning.
## VI Conclusion
We proposed G-Rank as a lightweight, modular, and easy-to-understand unsupervised continuous ranking model designed explicitly for permissionless p2p networks. The results demonstrate that unsupervised search-and-rank models designed specifically for p2p applications show merit and are worthy of further research. G-Rank demonstrates that a simple unsupervised model can recommend near-perfect results to users in sterile network conditions. The self-clustering method described in Section 3D allows for a high degree of algorithm customization such that individual nodes can dramatically alter their ranked results based on the behavior of other nodes. By altering their search results based on similarity of behavior, peers in the network are able to isolate themselves from adversarial behavior to varying degrees. G-Rank shows varying degrees of resilience in the face of transient adversarial conditions, particularly regarding highly targeted nefarious behavior. The scale of negative adversarial impact depends heavily upon the clicklog dissemination gossip scheme. Furthermore, the isolation constant \(F\) has some effect on insulating peers from adversarial clicklog poisoning, though we suggest further research including larger more diverse data such that peers can more effectively distance themselves from the behavior of dissimilar peers.
Potential for future development of unsupervised decentralized search and ranking models in p2p networks is exceptionally rich. One of the primary pitfalls of the p2p network domain is the threat that adversarial actors such as sybil attackers pose to the model. As such, mitigating threats above the network and protocol layers at the model level is a rich field for future development. Other potential model improvements may include augmenting the user clustering model beyond a simple similarity metric such that sybil and other spam attacks become classified as outliers with regards to "typical" user behavior, where recommendation and ranking scores are more heavily influenced by the behavior of other users within clusters. Fuzzy clustering methods such as the one mentioned herein allow for peers to improve their local rankings based on those received by other similar users. More explicit self-clustering methods may lead to significantly improved performance, particularly those adept at identifying and isolating statistical outliers, such as measuring the distance of an outlier from all known cluster centroids. We therefore conclude that unsupervised learn-to-rank models in adversarial p2p networks show significant promise and are worthy of further research.

Fig. 7: The median percentage of queries containing the most popular song for each query, across all experiments.

Fig. 8: Percentage of queries where the most popular song associated with each query term is included in the ranked results, averaged between both baseline simulations.

Fig. 9: Mean gossip message size (in kilobytes) for all simulations. Each clicklog row entry is approx. 600 bytes in size.
|
2304.09916 | An Intent-based Framework for Vehicular Edge Computing | The rapid development of emerging vehicular edge computing (VEC) brings new
opportunities and challenges for dynamic resource management. The increasing
number of edge data centers, roadside units (RSUs), and network devices,
however, makes resource management a complex task in VEC. On the other hand,
the exponential growth of service applications and end-users makes
corresponding QoS hard to maintain. Intent-Based Networking (IBN), based on
Software-Defined Networking, was introduced to provide the ability to
automatically handle and manage the networking requirements of different
applications. Motivated by the IBN concept, in this paper, we propose a novel
approach to jointly orchestrate networking and computing resources based on
user requirements. The proposed solution constantly monitors user requirements
and dynamically re-configures the system to satisfy desired states of the
application. We compared our proposed solution with the state-of-the-art
networking embedding algorithms using real-world taxi GPS traces. Results show
that our proposed method is significantly faster (up to 95%) and can improve
resource utilization (up to 76%) and the acceptance ratio of computing and
networking requests with various priorities (up to 71%). We also present a
small-scale prototype of the proposed intent management framework to validate
our solution. | TianZhang He, Adel N. Toosi, Negin Akbari, Muhammed Tawfiqul Islam, Muhammad Aamir Cheema | 2023-04-19T18:39:22Z | http://arxiv.org/abs/2304.09916v1 | # An Intent-based Framework for Vehicular Edge Computing
###### Abstract
The rapid development of emerging vehicular edge computing (VEC) brings new opportunities and challenges for dynamic resource management. The increasing number of edge data centers, roadside units (RSUs), and network devices, however, makes resource management a complex task in VEC. On the other hand, the exponential growth of service applications and end-users makes corresponding QoS hard to maintain. Intent-Based Networking (IBN), based on Software-Defined Networking, was introduced to provide the ability to automatically handle and manage the networking requirements of different applications. Motivated by the IBN concept, in this paper, we propose a novel approach to jointly orchestrate networking and computing resources based on user requirements. The proposed solution constantly monitors user requirements and dynamically reconfigures the system to satisfy desired states of the application. We compared our proposed solution with the state-of-the-art networking embedding algorithms using real-world taxi GPS traces. Results show that our proposed method is significantly faster (up to 95%) and can improve resource utilization (up to 76%) and the acceptance ratio of computing and networking requests with various priorities (up to 71%). We also present a small-scale prototype of the proposed intent management framework to validate our solution.
Vehicular Edge Computing, Intent-based Networking, Software-Defined Networking, Resource Management, Virtual Network Embedding
## I Introduction
The automotive industry is one of the fastest-growing industries. In recent years, the increased use of onboard microprocessors such as On-Board Units (OBUs) and sensors technology has led to technological advancements that enabled vehicles to provide various safety and driver assistance-related systems. For example, modern cars can autonomously drive to their destination, warn the driver of external hazards, and avoid collisions. However, many of these applications' growing demands for computational resources urge the use of the communication infrastructure and connection with Road Side Units (RSUs) to offload the heavy computational tasks. Moreover, vehicles are becoming increasingly connected, and V2X (Vehicle to Everything) communications enable vehicles to communicate with each other and the outside world, allowing applications to go beyond internal functions and provide improved awareness of impending events over a wider area. For many conventional connected vehicles, the network was only responsible for transferring data from vehicles to the cloud. However, applications are evolving into a highly distributed layer that resides directly within the network fabric. In fact, it is crucial to support many real-time applications and perform analytics at the edge as close as possible to the data source, e.g., real-time collision avoidance systems on autonomous vehicles. Consequently, Vehicular Edge Computing (VEC) [1, 2] has become the mainstream paradigm to meet strict performance requirements such as response time and network bandwidth of real-time applications.
With an increasing number of computational nodes, for example, cloud, edge devices, vehicles, RSUs, and compute-enabled network components, the networking operation domain in VEC is becoming more complex. Fig. 1 depicts one such VEC scenario where mobile vehicles may establish a network connection with other vehicles (inter-vehicle communication), RSUs, and access points (base stations). In addition, RSUs can be interconnected to other RSUs or to access points by either a wired or wireless network. Although each vehicle may have extra computing resources in its OBU, heavier computational workloads can be offloaded to the edge data centers (EDCs). Thus, it is challenging to efficiently orchestrate and manage the underlying networking and computing resources based on various service requirements in such a VEC environment. In other words, developing, deploying and operating applications in VEC environments is not trivial. Therefore, it is essential to create mechanisms to automatically capture the applications' deployment requirements (intents) to activate and assure them network-wide. Existing solutions try to address these issues mainly through service placement approaches and task offloading techniques without taking networks into consideration [3]. In this work, we aim to bridge the gap between the deployment requirements of VEC applications (business intent) and what the network delivers by building required algorithms considering both the computing and networking requirements of the applications.
Fig. 1: Vehicular Edge Computing Overview
Software-Defined Networking (SDN) is a technology that helps tackle the network management and orchestration complexity [4] and has been widely used in VEC and Mobile Edge Computing (MEC) environments [5, 6]. SDN centralizes the network's control plane and provides automation, cost-efficiency, programmability, and greater efficiency for network management. In recent years, Intent-Based Networking (IBN) based on SDN is also emerging to automate networks further. It provides network intelligence by evoking a high-level intent, detecting potential deviations from that intent, and prescribing actions required to ensure that the intent is always satisfied, such as link rerouting and service reallocation via migration [7]. Based on IBN techniques, the application provides intents to indicate the desired network requirements, such as network bandwidth and end-to-end delay.
Several industrial vendors, such as Cisco and VMWare, also focus on the IBN agility for edge network management. However, addressing only network resource requirements is insufficient in VEC. It should include both computing and networking requirements to guarantee end-to-end service delay. Furthermore, the network is dynamic at the edge, and failures or congestion can occur in switches, links, RSUs, EDCs, etc. A framework is required to automatically react to these issues and requirements to consistently and reliably maintain edge services operational. Following the industry trend, this work proposes an intent-based manager to orchestrate both networking and computing resources for VEC applications. Intent can be considered a high-level, abstract declaration for applications that describes their desired state or result [8]. For example, a service provider may want an autonomous vehicle to maintain low-latency and high-throughput connections for its image processing service. An intent can be compiled into several service requests with resource requirements via existing network templates and policy languages, such as PGA, Kinetic, and Janus [8] or NLP models [9].
To the best of our knowledge, this is one of the earliest efforts to install and support joint networking and computing intents for VEC applications. Current Virtual Network Embedding (VNE) algorithms [10, 11, 12, 13, 14, 15] are inadequate for the intent installation problem in VEC environments for multiple reasons. Firstly, current VNE algorithms do not allow the allocation of multiple virtual nodes to the same physical node, negatively impacting resource utilization and the acceptance rate. Additionally, traditional virtual network requests (VNR) are treated as standalone, while an intent of a VEC application may need to be compiled into multiple VNRs. Current VNE algorithms also do not support adding location constraints as needed by VEC intents, making them incapable of handling the mobility aspect of the users/applications. Furthermore, current VNE algorithms do not consider computing requirements within the intent framework and models. Lastly, current VNE algorithms do not handle the installation of intents with priorities. Simply assigning priorities to the intents does not resolve the issue, as we also need to satisfy users' Quality of Service (QoS) requirements.
To address these problems, we propose an efficient online algorithm for intent installation with different priorities while considering both computing and networking-related properties. The key **contributions** of the paper are as follows:
* We introduce the computing resource and location requests into Intent-Based Networking for VEC.
* We propose a priority- and location-aware algorithm for the installation and management of intents with different priorities.
* We compare our proposed algorithm with the state-of-the-art VNE algorithms in a large-scale simulation with real-world data sets and edge networks.
* We implement and evaluate the proposed intent-based edge computing with a real SDN controller on a Mininet emulation platform.
## II System Overview
In this section, we first showcase the proposed intent management framework. Then we highlight the intent features available to the applications running in a VEC environment. Lastly, we discuss the intent life-cycle.
### _Intent Management Framework_
Current IBN frameworks only allow the mapping of application network resource requirements. In this paper, our goal is to extend the intent framework to allow the expression of both compute and network elements along with application QoS. Thus, we need to holistically manage the edge/cloud platform for compute resources, orchestrate the container placements for micro-services, and utilize the network controller to map the virtual network. As shown in Fig. 2, by integrating SDN controller (e.g. ONOS or OpenDayLight), edge and cloud platform (e.g. OpenStack), and container orchestrator (e.g. Kubernetes), we propose an intent-based manager to cohesively orchestrate both networking and computing resources.
Fig. 2: Intent management framework
The applications' intents are submitted to the intent-based manager using a declarative manifest. The manager then extracts the requirements of each intent to check whether the intent can be installed with the existing compute and network resource capacity. For a successful intent compilation, the extracted compute requirements from the intent are transformed into a resource allocation task with the help of the edge/cloud platform and the container orchestrator. The network request is also transformed into a VNR and the manager instructs the SDN controller to install the network intent.
### _Intent Features_
Fig. 3 shows the features available to the application to be expressed as per intent in the proposed intent management framework, which are categorized in three groups as follows:
**Node:** An application/service can choose a set of nodes (locations) where the service can be executed (location constraints). In addition, the compute resource requirements (e.g., vCPU, memory, or storage) for the service can also be specified (resource constraints).
**Link:** The application/service can choose the desired bandwidth requirement and express the minimum expected latency for the service to function correctly (network constraints).
**Priority:** As multiple services can be deployed across the system with competing and conflicting interests, intent from each service must have a priority specified while submitting the intent. Thus, depending on the intent priority and the QoS requirements, proper intent installation strategy and intent failure handling mechanisms can be followed.
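As a concrete illustration of how these three groups of constraints might be expressed together in a declarative manifest, the sketch below uses a plain Python dictionary; the field names and the `extract_requests` helper are illustrative assumptions, not the framework's actual schema.

```python
# Hypothetical intent manifest combining node, link, and priority constraints.
# Field names are illustrative assumptions, not the framework's actual schema.
example_intent = {
    "intent_id": "intent-001",
    "priority": "high",                  # high / mid / low
    "nodes": [
        {"name": "v1", "vcpu": 2, "memory_gb": 4, "location": "user1"},   # location constraint
        {"name": "v2", "vcpu": 2, "memory_gb": 4, "location": None},      # free placement
    ],
    "links": [
        {"src": "v1", "dst": "v2", "bandwidth_mbps": 20, "max_latency_ms": 30},
    ],
}

def extract_requests(intent):
    """Split an intent into per-link virtual network requests (one per virtual link)."""
    return [
        {"priority": intent["priority"], "link": link,
         "endpoints": [n for n in intent["nodes"] if n["name"] in (link["src"], link["dst"])]}
        for link in intent["links"]
    ]

print(extract_requests(example_intent))
```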
### _Intent Life-cycle_
The proposed intent framework allows applications to specify both their network and compute resource requirements. The intent manager accepts the intent specification requests and compiles them into installable intents that require some actions to meet the desired application state. Finally, when the actions are carried out in both the network and computing environments, some changes are made, for example, flow rules being pushed to switches or compute resources reserved to deploy a microservice on a node.
Fig. 4 depicts the complete life-cycle for the intents in the proposed framework. An intent can be in one of the following states: _Ready, Active, Suspending, Failed, Withdrawn, Terminated_ (represented in oval shapes in the figure). The intent manager takes the actions to make changes to the environment and satisfy intent requirements. These actions are represented with rectangular shapes. We provide a brief overview of the key actions in the life-cycle of the intents.
**Intent Submission:** Intents are submitted by the applications/service providers to the intent management framework. Upon receiving an intent asynchronously, the intent manager transforms the intent into several compute and network requests which can be used for the intent compilation phase.
**Intent Compilation:** Resource discovery is performed by the SDN controller and the VM/container orchestrator. The intent manager can communicate with them to get regular updates on network topology changes and the resource capacity of the compute nodes. Thus, if the network and compute resources are sufficient to handle the submitted intent, the intent will be compiled successfully, and the intent state becomes _Ready_. Otherwise, the intent state goes to the _Failed_ state after the allowed retry attempts are exhausted.
**Intent Installation:** A _Ready_ intent can be installed in the system by reserving both the network and compute resources, such as compute and bandwidth for an application. When the intent is installed successfully, the state is changed to _Active_.
**Intent Recompilation:** An _Active_ intent might be suspended by the application or due to a topology or resource update. In this case, the intent state is _Suspending_, and it enters the Intent Recompilation phase. If the desired system state can be reached after a few recompilation attempts via link remapping or service migration [7], then the intent re-enters the _Installation_ phase. Otherwise, the intent state is _Failed_.
**Intent Withdraw:** An application can request at any time to withdraw either an active or a failed intent. In this case, the system withdraws the intent and, if the intent is active, reclaims all the allocated network and compute resources. A _Withdrawn_ intent can be resubmitted as a new intent based on the intent requirement template.
**Intent Deletion:** While in the _Withdrawn_ state, the application can request to delete the intent requirement template entirely. At this point, the intent state will be _Terminated_.
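The life-cycle above can be viewed as a small state machine; the sketch below is a minimal illustration of the transitions described in the text, with event names of our own choosing, and is not the framework's implementation.

```python
# Minimal sketch of the intent life-cycle as a state machine.
# State names follow the text; the event names and structure are assumptions.
TRANSITIONS = {
    ("Submitted",  "compile_ok"):       "Ready",
    ("Submitted",  "compile_failed"):   "Failed",
    ("Ready",      "install_ok"):       "Active",
    ("Active",     "suspend"):          "Suspending",
    ("Suspending", "recompile_ok"):     "Ready",
    ("Suspending", "recompile_failed"): "Failed",
    ("Active",     "withdraw"):         "Withdrawn",
    ("Failed",     "withdraw"):         "Withdrawn",
    ("Withdrawn",  "resubmit"):         "Submitted",
    ("Withdrawn",  "delete"):           "Terminated",
}

def step(state, event):
    """Return the next life-cycle state, or stay put if the event does not apply."""
    return TRANSITIONS.get((state, event), state)

# Example: an intent is installed, suspended by a topology change, then recovered.
state = "Submitted"
for event in ["compile_ok", "install_ok", "suspend", "recompile_ok", "install_ok"]:
    state = step(state, event)
    print(event, "->", state)
```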
The retry threshold can be adaptively configured according to the current edge conditions. In highly dynamic environments, a small threshold can result in a high failure ratio. A large threshold, on the other hand, may cause a significant number of intents to be reinstalled, resulting in an increase in processing time. Furthermore, a large retry threshold can improve the acceptance rate for high-priority intents.
Fig. 3: Intent features and constraints
Fig. 4: Intent lifecycle
## III System Model
The phases of intent installations and contention resolutions between the intents can be translated into an _Online Virtual Network Mapping_ or _Virtual Network Embedding_ (VNE) problem. The acceptance ratio and average resource utilization are critical parameters that need to be optimized.
**Vehicular Edge Computing:** VEC or a substrate network can be modeled as a weighted undirected graph \(G(N,L,A_{N},A_{L})\) including the global network hardware, network links, edge devices and end users, where \(N\) denotes the set of physical nodes and \(L\) denotes the set of the physical network links. \(A_{N}\) and \(A_{L}\) denote attributes associated with nodes and links, respectively. For concise modeling, we consider CPU, memory capacities, and location constraints for node attributes, and bandwidth and latency for link attributes. We use \(P\) to denote all loop-free paths within the VEC network and \(P_{uv}^{k}\) to denote \(k\) shortest paths between node \(u\) and \(v\).
**Intents and requests:** Let \(I^{m}\) indicate the intent submitted by the application/service \(m\). Let \(\pi^{m}\) denote the priority of Intent \(I^{m}\). After the intent compiling process, \(I^{m}\) can be compiled into several virtual networking and computing requests \(R_{i}^{m}\) where \(i\) denotes the \(i\)th request of intent \(I^{m}\). As a microservice architecture, each compiled request of an intent can also be modeled as an undirected graph \(R^{m}(N^{m},L^{m},C_{N}^{m},C_{L}^{m})\), where \(C_{N}^{m}\) and \(C_{L}^{m}\) denote the set of node and link constraints, such as the CPU, memory, and location constraints for virtual nodes, and bandwidth and latency for virtual links. For instance, in Fig. 5, _intent1_ is compiled into two requests, where the computing requirement of virtual node \(v_{1}^{1}\) is \(5\). The bandwidth and delay requirement of the virtual link between \(v_{1}^{1}\) and \(v_{2}^{1}\) are 1 and 10, respectively. Virtual nodes \(v_{1}^{1}\) and \(v_{4}^{1}\) should be allocated in physical node \(V1\), i.e., user node. As a result, if a mobile user reconnects to another EDC, virtual links associated with \(v_{1}^{1}\) and \(v_{4}^{1}\) need to be rerouted or other virtual nodes associated with these links might need to be relocated accordingly.
Location constraints \(C_{N}^{m}(v)=n(v)\) of a virtual node \(v\) can be divided into _fixed_ and _mobility-related_ location constraints. For the fixed location constraint, the virtual node location constraint is associated with a stationary physical node, such as a certain roadside unit, gateway, or edge data center. However, if location constraints are associated with mobile end-users such as autonomous vehicles or pedestrians, the location of the virtual node can change over time, i.e., \(n(v)\) is a mobile node. The intent-based orchestrator actively monitors and maintains the intent installation for mobility-related location constraints due to the virtual node's location changes.
**Problem Description:** The installation of a set of intents is defined by mappings: \(M\{I^{m}\}:\{R^{m}(N^{m},L^{m},C_{N}^{m},C_{L}^{m})\to G(N,P_{uv}^{k},A_{N},A_ {L})\}\), from a set of \(R^{m}\) to \(G\), where \(N^{m}\subset N\) is the node mapping and \(L^{m}\subset P_{uv}^{k}\) is the virtual link mapping to the network path. An intent installation is successful when its compute resource requirements and network resource requirements are all satisfied. Fig. 6 illustrates a possible mapping for all requests of the three intents in Fig. 5, installed on the substrate network.
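As a rough illustration of how the substrate network \(G\) and a compiled request \(R^{m}\) could be represented for this mapping problem, the sketch below uses `networkx` weighted graphs; the attribute names and the toy values are our own assumptions.

```python
import networkx as nx

# Substrate network G(N, L, A_N, A_L): nodes carry CPU/RAM and a location,
# links carry bandwidth and delay. Values are illustrative only.
G = nx.Graph()
G.add_node("EDC1", cpu=40, ram=80, location="edc")
G.add_node("EDC2", cpu=30, ram=60, location="edc")
G.add_node("user1", cpu=4, ram=8, location="mobile")
G.add_edge("user1", "EDC1", bw=400, delay=5)
G.add_edge("EDC1", "EDC2", bw=1000, delay=10)

# Compiled request R(N^m, L^m, C_N^m, C_L^m): virtual nodes with resource and
# location constraints, virtual links with bandwidth/delay constraints.
R = nx.Graph()
R.add_node("v1", cpu=2, ram=4, loc_constraint="user1")   # pinned to the mobile user
R.add_node("v2", cpu=2, ram=4, loc_constraint=None)      # free placement
R.add_edge("v1", "v2", bw=20, delay=30)

# k shortest loop-free paths between two substrate nodes (candidate link mappings).
k_paths = list(nx.shortest_simple_paths(G, "user1", "EDC2", weight="delay"))[:3]
print(k_paths)
```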
**Objectives:** The objective of intent installation is to maximize the intent acceptance ratio, taking intent priorities into account, while efficiently utilizing both computing and networking resources. Let binary variable \(X^{m,t}=\{1,0\}\) denote whether intent \(I^{m}\) is successfully installed at time \(t\) or not and \(X_{i}^{m,t}=\{1,0\}\) denote whether request \(R_{i}^{m}\) of intent \(I^{m}\) can be satisfied or not. When intent \(I^{m}\) is not submitted, \(X^{m,t}=0\). Thus, the success of a general intent installation is defined as:
\[X^{m,t}=X_{0}^{m,t}\wedge X_{1}^{m,t}\wedge\cdots \tag{1}\]
Therefore, the long-term intent acceptance ratio is:
\[\lim_{T\rightarrow\infty}\frac{\sum_{t=0}^{T}\sum X^{m,t}}{\sum_{t=0}^{T}|\{ I^{m,t}\}|} \tag{2}\]
where \(|\{I^{m,t}\}|\) is the total number of submitted intents at \(t\).
We define three different priority levels for intents: _high_, _mid_, and _low_. With different priority levels, we define two installation semantics: 1) high and low priority intents are successfully installed only if all their compiled requests are satisfied and embedded \(\forall X_{i}^{m}=1\); and 2) a mid-priority intent allows its requests to be installed partially.
To quantify the acceptance ratio of mid-priority intents, we need to model the long-term request acceptance ratio, which can be formulated as:
\[\lim_{T\rightarrow\infty}\frac{\sum_{t=0}^{T}\sum X_{i}^{m,t}}{\sum_{t=0}^{T} \left|\left\{X_{i}^{m,t}\right\}\right|} \tag{3}\]
where \(\left|\left\{X_{i}^{m,t}\right\}\right|\) is the total number of requests at time \(t\).
Fig. 5: An example of intents and compiled requests
Fig. 6: An example of intent installation on substrate network
To the VEC provider, the cost of intent \(I^{m}\) installation is modeled as the sum of total resource requirements:
\[\kappa^{m}=\alpha\sum_{n\in N^{m}}cpu_{n}+\beta\sum_{n\in N^{m}}mem_{n}+\gamma \sum_{l\in L^{m}}bw_{l}/delay_{l} \tag{4}\]
where \(\alpha\), \(\beta\), and \(\gamma\) are weights for resources in different categories valued by the VEC provider. Thus, the revenue of intent \(I^{m}\) installation for VEC provider at time \(t\) can be formulated by:
\[\varepsilon^{m,t}=\sum X_{i}^{m,t}\cdot\kappa_{i}^{m,t} \tag{5}\]
Similar to [11], the revenue-to-cost ratio is used to quantify the long-term resource utilization:
\[\lim_{T\rightarrow\infty}\frac{\sum_{t=0}^{T}\varepsilon^{m,t}}{\sum_{t=0}^{T }\kappa^{m,t}} \tag{6}\]
It is known that the general VNE problem is NP-hard [10]. Thus, we rely on heuristics to solve the problem in practice.
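One possible way to evaluate the objectives of Eqs. (1)-(6) over a run is sketched below; the data structures and the resource weights \(\alpha\), \(\beta\), \(\gamma\) are illustrative choices of ours, not values prescribed by the paper.

```python
# Sketch: intent acceptance (Eqs. 1-2), cost (Eq. 4) and revenue-to-cost ratio (Eq. 6).
# Data structures and weights are illustrative assumptions.
ALPHA, BETA, GAMMA = 0.4, 0.3, 0.3   # resource weights valued by the VEC provider

def request_cost(req):
    """kappa for one compiled request: weighted CPU, RAM and bw/delay demand."""
    node_cost = sum(ALPHA * n["cpu"] + BETA * n["ram"] for n in req["nodes"])
    link_cost = sum(GAMMA * l["bw"] / l["delay"] for l in req["links"])
    return node_cost + link_cost

def intent_installed(requests):
    """Eq. (1): an intent is installed only if every compiled request is satisfied."""
    return all(r["satisfied"] for r in requests)

def run_metrics(intents):
    """intents: list of lists of requests; each request has nodes, links and a satisfied flag."""
    accepted = sum(intent_installed(reqs) for reqs in intents)
    cost = sum(request_cost(r) for reqs in intents for r in reqs)
    revenue = sum(request_cost(r) for reqs in intents for r in reqs if r["satisfied"])
    return {"intent_acceptance": accepted / len(intents),
            "revenue_to_cost": revenue / cost if cost else 0.0}

# Tiny example: one fully satisfied intent and one partially satisfied intent.
req = {"nodes": [{"cpu": 2, "ram": 4}], "links": [{"bw": 20, "delay": 10}], "satisfied": True}
bad = dict(req, satisfied=False)
print(run_metrics([[req, req], [req, bad]]))
```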
## IV Online Intent Management
Existing solutions and algorithms [11, 13, 14] of virtual embedding problem cannot be directly applied to the intent installation problem in VEC environments. In this section, we describe our proposed online intent management solution, which includes a priority-aware intent (PAI) installation algorithm and corresponding location-aware mapping (LAM) algorithm for intent-based VEC. The intuitions behind our proposed priority-aware intent-based processing algorithm are:
* Microservices (requests) with location constraints should be processed first to increase the acceptance ratio.
* Microservices with less complexity and resource demands that have less impact on other intent installations will be processed first.
* To increase the acceptance ratio of higher-priority intents and total resource utilization, intent requests with higher priority will be installed first.
* If there is no request left for the highest installation level, we process compiled requests of all other intents.
* In the end, if there is no higher priority intent left for processing, we consider the requests with the lowest installation priority.
* There is a significant difference in processing and operation costs of intent reinstallation with virtual node relocation and merely remapping its virtual link [16]. The mapping may change due to the user mobility after a virtual node has been allocated in the mobile user node. Therefore, allocating other nodes that share virtual links in proper locations can reduce maintenance costs.
### _Priority Aware Intent Installation_
With different priorities, the intent contention resolution algorithm is the key component for IBN-based edge computing management and orchestration. At each time interval \(t\), the priority-aware installation algorithm first checks intents associated with _Suspending_ event. If virtual link remapping cannot satisfy the intent, it changes the intent to _Failed_ state for reinstallation. We set the retry threshold for reinstallation from _Failed_ intent to 3. To increase the acceptance ratio of intents with higher priorities and reserve sufficient resources for subsequent intents, we divide intents into two installation semantics and policies (Algorithm 1): _InstallAll(I)_: must satisfy all compiled requests of one intent. _InstallBest(R)_: satisfy as many compiled requests as possible. For each priority group, the intents \(I^{high}\), \(I^{low}\) and compiled requests \(R^{mid}\) are sorted in ascending order based on the cost model Eq. (4).
```
Algorithm 1: Priority-Aware Intent (PAI) Installation
Input : G edge network graph with previous mapping information M(t)
Input : time interval {I_submit}, {I_suspend}, {I_fail}
1  foreach I in {I_suspend} do
2      if CheckLink(I) == Failed then
3          RemapPath(I);
4          if status(I) == Failed then
5              I_fail <- I
```
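A compact sketch of the two installation semantics in Algorithm 1 (_InstallAll_ for high- and low-priority intents, _InstallBest_ for mid-priority requests) is given below; `try_map`, `unmap` and `cost_fn` stand in for the location-aware mapping routine and the cost model of Eq. (4), and all names and data layouts are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of the priority-aware installation semantics from Algorithm 1.
# try_map / unmap stand in for the location-aware mapping routine (Sec. IV-B),
# and cost_fn for the cost model of Eq. (4); all names are illustrative.
def install_all(intent, G, try_map, unmap):
    """High/low priority: an intent succeeds only if every compiled request maps."""
    mapped = []
    for req in intent["requests"]:
        if not try_map(req, G):
            for m in mapped:
                unmap(m, G)               # roll back partial mappings
            return False
        mapped.append(req)
    return True

def install_best(requests, G, try_map):
    """Mid priority: map as many compiled requests as possible."""
    return sum(1 for req in requests if try_map(req, G))

def priority_aware_install(intents, G, try_map, unmap, cost_fn):
    """Process high-priority intents first, then all mid-priority requests, then low."""
    high = sorted((i for i in intents if i["priority"] == "high"), key=cost_fn)
    low = sorted((i for i in intents if i["priority"] == "low"), key=cost_fn)
    mid = sorted((r for i in intents if i["priority"] == "mid"
                  for r in i["requests"]), key=cost_fn)
    for intent in high:
        intent["installed"] = install_all(intent, G, try_map, unmap)
    install_best(mid, G, try_map)
    for intent in low:
        intent["installed"] = install_all(intent, G, try_map, unmap)
```

Rolling back partially mapped requests in `install_all` keeps the all-or-nothing semantics for high- and low-priority intents, while mid-priority requests are simply mapped greedily in ascending cost order.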
### _Location Aware Mapping_
The proposed location-aware mapping (LAM) algorithm also considers the location constraints, including fixed and mobility-related location constraints. Traditionally, location constraints of virtual network requests are fixed, such as Virtual Network Function (VNF) location constraints of Service Function Chaining (SFC) [17] and host constraints due to data security or privacy. For the intents with mobility-related location constraints, we allocate requests such that the possibility of virtual node reallocation is minimized.
Following our installation semantics, the compiled requests \(\{R\}=I^{m}\) of a high-level or low-level priority intent, and all compiled requests with mid-level priority \(\{R\}=\sum I^{m}\), are processed sequentially. LAM first allocates requests with fixed and mobility-related location constraints \(R^{loc}\). Then, it allocates other requests without any location constraints \(R^{non}\). For each successful node mapping _nodeMap_, the mapping result satisfies both node and link requirements. If a virtual link \(l_{u,v}\) exists in \(R\) and virtual node \(v\) has been mapped to node \(n(v)\), i.e., \(M(R)^{v}=n(v)\), the virtual link is mapped based on the \(k\) shortest paths between the testing node \(n\) and \(n(v)\). If any node mapping \(M(R)^{v}\) fails, the request mapping also fails without further processing.
From steps 3 to 7, for each \(R^{loc}\), virtual nodes \(\{v^{loc}\}\) with fixed and mobility-related location constraints \(\{n(v)\}\) are mapped first. From steps 9 to 17, the remaining virtual nodes are mapped in a breadth-first search manner. At step 8, virtual nodes are sorted based on scores calculated in Eq. (7), and the virtual node with the minimum cost is selected as the start node \(u\). The virtual node score in a request \(R\) is formulated as:
\[{S(i^{\prime})}^{R}=(\alpha^{\prime}\cdot cpu+\beta^{\prime}\cdot ram)\cdot k_{ nn,i^{\prime}}^{bw} \tag{7}\]
where \(cpu\) and \(ram\) are normalized resource requirements of virtual node \(i^{\prime}\). The ratio of \(\alpha^{\prime}\) and \(\beta^{\prime}\) is based on the computing resource requirement of the selected virtual node \(i^{\prime}\), \(\alpha^{\prime}/\beta^{\prime}=cpu^{i^{\prime}}/ram^{i^{\prime}}\). \(k_{nn,i}^{bw}\) is the average neighbor degree with bandwidth as the weight. \(k_{nn,i}^{bw}=\frac{1}{s_{i}}\sum_{j\in N(i)}bw_{ij}k_{j}\), where \(s_{i}\) is the weighted degree of node \(i\), \(N(i)\) is the set of node \(i\)'s neighbors, \(k_{j}\) is the degree of node \(j\) which belongs to \(N(i)\). \(bw_{ij}\) is the bandwidth of the edge (link) that connects node \(i\) and \(j\).
The node mapping procedure continues until the searching queue \(Q\) is empty or a mapping fails. For each node mapping, already selected physical nodes in the request mapping \(M(R)\) are tested first to minimize the link mapping cost at step 12. If there is no matching, candidate physical nodes within the search depth are sorted (Eq. (9)) and tested for mapping at step 14. At step 17, unexplored adjacent virtual nodes are enqueued based on the virtual node sorting for further mapping. The candidate nodes \(N_{u}^{+}\) for unmapped virtual node \(u\) are the intersection set of edge nodes within search depth \(d_{u,v}\) (the number of switch hops) of the current mapping node \(n(v)\in M(R)\),
\[N_{u}^{+}=\bigcap_{v\in M(R)}N_{n(v)}^{+}\left(d_{u,v}\right) \tag{8}\]
where \(N_{n(v)}^{+}\left(d_{u,v}\right)\) is all nodes within search range of node \(n(v)\) and \(n(v)\) is the mapping node of virtual node \(v\) that \(n(v):v\to n\). The search depth is calculated as \(d_{u,v}=d\cdot\lceil Delay_{l}/\mu\rceil\), \(l_{u,v}\in L_{i}^{m}\) where virtual link \(l_{u,v}\) exists, \(\mu\) is delay coefficient (\(\mu=10ms\)) and \(d\) is the search range coefficient (\(d=2\)). If there is no mapped node, \(N_{u}^{+}=G(N)\) for the first virtual node mapping. When the request mapping is successful, the remaining physical resources are updated accordingly.
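A possible `networkx`-based sketch of the candidate set of Eq. (8) and the search depth \(d_{u,v}\) is shown below; for simplicity it uses a single depth per mapped node, which is a simplification of the per-link depths described above, and all names are illustrative.

```python
import math
import networkx as nx

def search_depth(link_delay, d=2, mu=10.0):
    """d_{u,v} = d * ceil(Delay_l / mu), with mu = 10 ms and d = 2 as in the text."""
    return d * math.ceil(link_delay / mu)

def candidate_nodes(G, mapped_nodes, depth):
    """Eq. (8): N_u^+ as the intersection of nodes within `depth` hops of each mapped node."""
    if not mapped_nodes:
        return set(G.nodes)                       # first virtual node: whole substrate
    neighborhoods = [set(nx.ego_graph(G, n, radius=depth).nodes) for n in mapped_nodes]
    return set.intersection(*neighborhoods)
```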
At step 14, the candidate physical nodes \(N_{u}^{+}\) are sorted in a descending order based on the node score model (Eq. 9). The edge node \(i\)'s score in the substrate network is formulated as:
\[{S(i)}^{sub}=(\alpha\cdot cpu+\beta\cdot ram)\cdot k_{nn,i}^{bw} \tag{9}\]
where the coefficients of remaining \(cpu\) and \(ram\) resources are \(\alpha\) and \(\beta\) (\(\alpha+\beta=1\)). They are computed based on the remaining computing resources of all nodes within the search distance as:
\[\alpha/\beta=\sum_{j\in N_{i}^{+}(d)}cpu^{j}/\sum_{j\in N_{i}^{+}(d)}ram^{j} \tag{10}\]
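The two scores of Eqs. (7) and (9) share the bandwidth-weighted average neighbour degree \(k_{nn,i}^{bw}\); a small sketch of how they might be computed on `networkx` graphs follows, reusing the attribute names assumed earlier and additionally assuming \(\alpha^{\prime}+\beta^{\prime}=1\) for the virtual-node score.

```python
def knn_bw(G, i):
    """Bandwidth-weighted average neighbour degree k_nn,i^bw."""
    s_i = sum(G[i][j]["bw"] for j in G[i])            # weighted degree of node i
    if s_i == 0:
        return 0.0
    return sum(G[i][j]["bw"] * G.degree(j) for j in G[i]) / s_i

def virtual_node_score(R, v):
    """Eq. (7): score of a virtual node v in request graph R (cpu/ram assumed normalized)."""
    cpu, ram = R.nodes[v]["cpu"], R.nodes[v]["ram"]
    beta = 1.0 / (1.0 + cpu / ram)                    # assuming alpha' + beta' = 1, alpha'/beta' = cpu/ram
    alpha = 1.0 - beta
    return (alpha * cpu + beta * ram) * knn_bw(R, v)

def substrate_node_score(G, i, candidates):
    """Eq. (9): score of substrate node i, with alpha/beta from Eq. (10) over the candidates."""
    cpu_sum = sum(G.nodes[j]["cpu"] for j in candidates)
    ram_sum = sum(G.nodes[j]["ram"] for j in candidates)
    beta = 1.0 / (1.0 + cpu_sum / ram_sum)
    alpha = 1.0 - beta
    return (alpha * G.nodes[i]["cpu"] + beta * G.nodes[i]["ram"]) * knn_bw(G, i)
```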
Compared with traditional VNE solutions, we allow virtual nodes to be embedded in the same edge node, which increases node utilization and reduces the network cost. In other words, each edge node can be a cluster of connected physical hosts. Multiple virtual nodes of the same request can be allocated in the same edge data center or the same physical host. For the virtual link mapping, a remote edge node with longer paths results in reduced bandwidth resources for other intent installations. In other words, it may reduce the intent installation acceptance ratio when the network resources are limited. Furthermore, most mapping algorithms consider the node and link mapping separately. However, virtual links of microservices at the edge are distance and latency-sensitive. Selecting all node locations without considering the delays of a path can dramatically increase the reject ratio of intent installation. Therefore, we introduce search depth and distance parameters in node sorting to allocate requests in proximity.
## V Performance Evaluation
In this section, we compare the proposed intent installation algorithm with baseline algorithms based on the existing virtual network embedding algorithms, _grcrank_ (Global Resource Capacity) [13], _rwrank_ (Markov Random Walk PageRank) [11], and _nrmrank_ (Node Ranking Metric) [14] in terms of intent acceptance ratio, resource utilization, and execution time. For a large-scale simulation, we utilize the real-world taxi GPS dataset in Shanghai (April 1, 2018)1 and the locations of base stations from Shanghai Telecom.2
Footnote 1: [http://soda.shdatalic.org.cn/download/31](http://soda.shdatalic.org.cn/download/31)
Footnote 2: [http://squanewang.com/TelecomDataset.html](http://squanewang.com/TelecomDataset.html)
### _Experiment Configurations_
We extract the taxi GPS data within one hour with the number of taxis ranging from 1000 to 3000. The location of each edge node or server is calculated based on the density of base stations of Shanghai Telecom by the K-means algorithm
(K=200) [18]. Physical links within the VEC network are generated based on the Delaunay Triangulation algorithm [19]. As a result, there are 758 physical links with a 6.6852 km average distance between edge servers. The average distance between a user and the nearest edge station is 1.63 km.
The delays of wireless connection between users and base stations and wired network link are calculated as follows:
\[t=t_{wireless}+t_{wired} \tag{11}\]
\(t_{wireless}=W*\log_{2}\frac{Sg}{N}\), where the channel gain \(g\) is \(127+30*\log(d)\) and \(d\) is the distance between the user and the local base station [20]. The channel bandwidth \(W\) is set to 20 MHz, the noise power \(N\) is \(2*10^{-13}\) Watt, and the wireless transmit power \(S\) of the vehicle is 0.5 Watt. The propagation time of wired links in milliseconds is calculated as \(t_{wired}=0.005*d\), where \(d\) in \(km\) is the length of the direct optical cables.
An intent is generated and submitted at the timestamp when the Taxi ID first appears in the GPS data. The same intent is terminated when the Taxi ID last appears in the data. Microservices are generated after the intent compilation (Fig. 4). Fig. 7 depicts the number of intent submissions and suspended events over time, as well as the total numbers of submitted and suspended intent requests for various numbers of users. A large number of intents were submitted by users before time 25. Table I illustrates the details of the physical network and intent parameters. Experimental results are reported as the average over 10 randomly generated runs.
### _Results Analysis_
We evaluate the proposed PAI and LAM algorithms (_pailam_) described in Sections IV-A and IV-B in various aspects, namely intent acceptance ratio, resource utilization, and execution time. Figure 8 illustrates the performance comparisons in terms of acceptance ratio, utilization ratio, and execution time with various numbers of users. The ratio of location constraint requests to all requests is 0.1. It shows that our proposed LAM algorithm can efficiently install delay-sensitive requests with and without location constraints. LAM significantly increases the intent acceptance ratio (Fig. 7(a)) and utilization (Fig. 7(b)) by up to \(58\)%-\(71\)% and \(66\)%-\(76\)%, respectively, compared with other online installation algorithms. By considering the entire physical network for mapping, the execution time of baseline VNE algorithms is too high to suit the large-scale intent installation (Fig. 7(d)). Due to the appropriate candidate selections, the processing time of LAM is decreased by up to \(95\)% compared with other online algorithms. We further evaluate our proposed intent framework and _pailam_ algorithm with various parameters, including the intent priority, search depth for the node mapping candidates, and the ratio of requests with location constraints (Fig. 9).
**Intent priority**: Figure 8(a) shows the reject ratio over time, in which high, mid, and low-level intent priorities are evenly generated. Compared to the mid-priority intents, the reject ratio of high-priority intents is significantly smaller. Both mid and high-priority intents can be maintained at a high acceptance ratio (\(0.923\)-\(0.979\) and \(0.931\)-\(0.979\)). Intents with low priority (\(0.741\)-\(0.907\)) are rejected when one of their requests cannot be satisfied due to higher-priority intents.
**Allocation search depth**: With a fixed location constraints ratio of 0.1, we examine the acceptance ratio of _pailam_ with various search depths for node mapping (Fig. 8(b)). The difference in acceptance ratio between depths \(2\), \(4\), and \(8\) is insignificant among the different numbers of users. With a larger group of candidates, the depth \(d\)=\(4\) has the best performance in all scenarios. However, the increase in algorithm execution time is considerably greater than the increase in performance compared to \(d\)=\(2\). When \(d\)=\(8\), the acceptance ratio decreases slightly because the larger group of candidates may lead to more mapping rejections.
**Location constraints**: Figure 8(c) illustrates the acceptance ratio when the percentage of user location-related requests varies for 2000 users. _pailam_ uniformly and significantly outperforms other online algorithms in all scenarios. The acceptance ratio is the highest in the scenario where \(10\)% of total intents are location-related (pailam-0.1). The acceptance ratio of location-related requests may be higher than the non-location-related requests when the computing and networking resources of \(n(v)\) locations with mapping constraints are sufficiently large. For example, when the user's onboard processing resources and its network connections to edge servers are sufficiently large (0.5), the acceptance ratio is higher than the \(0.2\) scenario.
**Link remapping and intent reinstallation**: Suspended events are triggered over time due to user mobility (Fig. 6(b)). Compared to baseline VNE algorithms, the proposed algorithm does not directly map the suspended requests as if they are new submissions. As a result, it largely reduces the execution time and the cost of VM or container migrations for intent reinstallation [7]. This is because the operating cost of path rerouting is significantly smaller than live migration [16].
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline \multicolumn{2}{|l|}{Physical Networks} & \multicolumn{2}{l|}{Intents Parameters} \\ \hline edge node & 200 & req num. & [1, 4] \\ \hline edge link & 758 & vir. node/link & [2, 4]/[2, 4] \\ \hline node CPU & [10,40] & vir. CPU & [1, 2] \\ \hline node RAM & [10,80] & vir. RAM & [1, 4] \\ \hline link bw & [400,1000] & bw/delay & [1,2]/[10,100] \\ \hline \end{tabular}
\end{table} TABLE I: Physical and intents parameters
Fig. 7: Intent data statistics with various amount of users
## VI Intent-Based Computing Prototype
In this section, we showcase an Intent-Based Vehicular Edge Computing prototype based on the SDN controller (ONOS) and Mininet Emulation platform.3 Virtual node (VM and container) mapping and virtual link embedding are controlled by the intent-based framework. We generated a series of events to validate the feasibility, availability, and flexibility of our proposed system (Fig. 10). With a 100 priority (low-level), \(Intent1\) is compiled into one request with \(v_{1}^{1}\), \(v_{2}^{1}\) and \(v_{3}^{1}\) and links \(v_{1}^{1}\)-\(v_{2}^{1}\) and \(v_{1}^{1}\)-\(v_{3}^{1}\). The requirement of each node is 2 vCPUs and 4 GB RAM and the location constraint of \(v_{1}^{1}\) is \(user1\). Each virtual link requires 20 Mbps of bandwidth and 30 ms of latency. With a 200 priority (mid-level), \(Intent2\) is compiled into two requests with virtual link \(v_{1}^{2}\)-\(v_{2}^{2}\) and \(v_{1}^{2}\)-\(v_{3}^{2}\), respectively. The requirement of each node is 2 vCPUs and 4 GB RAM and the location constraint of \(v_{2}^{2}\) is \(EDC1\). The link requirement is 20 Mbps and 100 ms.
Footnote 3: You can find a demo at [https://youtu.be/ZXBXdrug_x4](https://youtu.be/ZXBXdrug_x4)
At time _i1_ and _i2_, _intent1_ and _intent2_ are submitted and installed, respectively. At _e1_, \(user1\) who is connected to EDC1 moves to a new position and gets connected to EDC4. As a result, a mobility event is raised to check the installation of _intent1_ (Fig. 10(a)). At _e2_, EDC2 goes down. \(v_{2}^{1}\) and \(v_{3}^{1}\) are reallocated to satisfy _intent1_ (Fig. 10(b)). At time _e3_, the link between S3 and S4 goes down (Fig. 10(c)). Between _e4_ and _e5_, request \(v_{1}^{2}\)-\(v_{3}^{2}\) of _intent2_ failed because EDC1 is down and the location constraint of \(v_{3}^{2}\) cannot be satisfied (Fig. 10(d)). At _e5_, EDC1 is up again, and request \(v_{1}^{2}\)-\(v_{3}^{2}\) is reinstalled. As shown in Fig. 11 and Fig. 12, the prototype application can react to the computing, networking, and mobility events, satisfy the intent requirements, and manage the life-cycle of intents efficiently.
## VII Related work
Virtual network embedding (VNE) [12] refers to the embedding of a virtual network in a substrate network. To provide custom user-defined end-to-end guaranteed services to end users, there are different problems that are addressed in this problem domain, such as optimal resource allocation, self-configuration, and organization of the network. Yu _et al._[10] showed an approach that combines path splitting, path migration, and customized embedding algorithms to enable a substrate network to satisfy a larger mix of virtual networks. Dietrich _et al._[21] addressed the problem of multi-provider VNE with limited information disclosure. EPVNE [22] is a heuristic algorithm that reduces the cost of embedding the Virtual Network (VN) request and increases the VN request acceptance ratio. Heija and Hesselbach [23] proposed an online power-aware algorithm to solve the VNE problem using fewer resources and less power consumption with end-to-end delay as a constraint. Jinke _et al._[24] proposed a VNE model, where high-priority users can get extra resources compared to low-priority users. Ogino _et al._[25] proposed a VNE method to minimize the total substrate resources required during substrate resource sharing among multiple priority classes. Nguyen _et al._[26] proposed a node-ranking approach, and
Fig. 8: Installation performance comparisons with different online algorithms
Fig. 9: Installation performance with various amount of users for different priorities, search depth, and user location constraints
a parallel GA-based algorithm for the link mapping stage to solve the online VNE problem. DeepViNE [27] is a Reinforcement Learning (RL)-based VNE solution to automate the feature selection. MUVINE [15] is an RL-based prediction model for multi-stage VNE among cloud data centers.
Recently, there has been some research to improve network functionalities with the help of SDN intents in different application scenarios. ONOS Intent Framework [28] indicates the IBN operations used in the ONOS SDN controller. Han _et al._[29] proposed an IBN management platform based on SDN virtualization to automate the management and configuration of virtual networks. Cerroni _et al._[30] proposed a reference architecture and an intent-based North Bound Interface (NBI) for end-to-end service orchestration across multiple technological domains, with a primary use-case being the infrastructure deployment on the Internet of Things (IoT). DISMI [31] is also proposed as an intent-based north-bound interface of a network controller. Addad _et al._[32] benchmarked the ONOS intent NBI using a methodology that takes into consideration the interface access method, type of intent, and the number of installed intents. OSDF [33] is an SDN-based network programming framework that provides high-level APIs to be used by managers and network administrators to express network requirements for applications and policies for multiple domains. Sanvito _et al._[34] extended the ONOS Intent Framework enabling compiling multiple intents and re-optimizing their paths according to the network state based on flow statistics. Szyrkowiec _et al._[35] proposed architecture for automatic intent-based provisioning of secure services in multi-layer IP, Ethernet, and optical networks while choosing the appropriate encryption layer. Rafiq _et al._[36] enables IBN to make effective network resource utilization and minimize the maximum link capacity utilization.
In summary, existing VNE algorithms do not support allocating multiple virtual nodes to the same compute node, and VNRs are treated as individual requests. However, our work supports VEC applications compiled into multiple VNRs. Our approach also extends intents with location constraints and incorporates computing requirements, while satisfying users' QoS requirements. It can also handle the mobility aspect of the users/applications. Unlike existing VNE algorithms, our approach is suitable for intent installation with priorities.
## VIII Conclusions and future work
We proposed a novel intent framework to jointly orchestrate networking and computing requirements of applications based on user requirements in vehicular edge computing environments. Our proposed solution constantly monitors user requirements and dynamically reconfigures the system to satisfy the desired states of applications. It optimizes resource utilization and acceptance ratio of computing and networking requests with various priorities. Results show that our proposed framework outperforms state-of-the-art algorithms in terms of acceptance ratio, resource utilization, and execution time. We also provided a small-scale prototype to validate our proposed framework. In future work, we plan to fully implement our framework by extending the intent framework of the ONOS SDN controller and consider inherited periodic mobility patterns for intent re-installation and management.
Fig. 10: Emulation scenarios. The numbers over the links show the bandwidth and delay, i.e. 400,1.
Fig. 11: Emulation performance in bandwidth along the time
Fig. 12: Emulation performance in delay and its rolling average along the time
|
2310.13609 | Statistical and dynamical properties of polarised crowd | We present a minimal computational model to mimic the crowd in marathon race.
We aim to examine the influence of frontliners on crowd dynamics by comparing
the simulated races with and without their presence. The primary outcome of our
study revealed that the local velocity and density of the participants exhibit
a wave pattern similar to what is observed in actual races. Another important
result we obtained is that the travelling wave in the crowd consistently
propagates with a constant speed, irrespective of the system size under
consideration. The dynamic of participants in the longitudinal direction mainly
contributes for the velocity fluctuation and the fluctuation in the transverse
direction is suppressed. In the absence of frontliners, the fluctuations in
density and velocity weakens without significantly influencing the other
statistical and dynamical characteristics of the crowd. It is also observed
that the density wave travels faster than the velocity wave. Through this
research, we aim to enhance our understanding of crowd motion, which can inform
the development of effective crowd management strategies and contribute to the
successful control of such events. | Pratikshya Jena, Shradha Mishra | 2023-10-20T15:59:54Z | http://arxiv.org/abs/2310.13609v1 | # Statistical and dynamical properties of polarised crowd
###### Abstract
We present a minimal computational model to mimic the crowd in a marathon race. We aim to examine the influence of frontliners on crowd dynamics by comparing the simulated races with and without their presence. The primary outcome of our study is that the local velocity and density of the participants exhibit a wave pattern similar to what is observed in actual races. Another important result we obtained is that the travelling wave in the crowd consistently propagates with a constant speed, irrespective of the system size under consideration. The dynamics of participants in the longitudinal direction mainly contribute to the velocity fluctuation, while the fluctuation in the transverse direction is suppressed. In the absence of frontliners, the fluctuations in density and velocity weaken without significantly influencing the other statistical and dynamical characteristics of the crowd. It is also observed that the density wave travels faster than the velocity wave. Through this research, we aim to enhance our understanding of crowd motion, which can inform the development of effective crowd management strategies and contribute to the successful control of such events.
## I Introduction
Comprehending the behavior or dynamics of human crowds is valuable for optimizing both our everyday professional operations and our individual well-being. This understanding can significantly enhance managerial effectiveness as well. Nowadays, city-scale public events are gaining significance in the global competition for socio-economic growth [1, 2, 3]. Events like sports competitions, exhibitions, and national celebrations exemplify the growing demands for appropriate technological solutions and support to effectively monitor and manage large crowds [4, 5, 6, 7]. In the present day, crowd management has become increasingly challenging. To prevent stampedes [8, 9] and ensure safe and enjoyable events, crowd monitoring has become essential. Rapid urbanization further emphasizes the need for proper support in planning public infrastructure for high-density crowds. The effectiveness of crowd management is heavily influenced by pedestrian behavior within the crowd. By gaining insights into how individuals behave within a crowd, appropriate strategies can be implemented to ensure safety and optimize the crowd flow. Efficient crowd management demands the collaboration of diverse disciplines and practices, encompassing sociology, theoretical physics, applied mathematics, computer sciences, and more [10, 11, 12, 13, 14, 15].
Numerous studies are available on crowd management that include real-world events involving camera tracking and data collection, as well as computer-simulated scenarios to investigate crowd behavior. A major portion of crowd management studies centers on crowd disasters, stampedes, large gatherings at national and spiritual events, as well as city-marathon races [16, 17, 18]. Recent studies emphasize experimental investigations of pedestrian flow [19, 20, 21]. In addition to the experimental studies, numerous computational and statistical models [22, 23, 24, 25, 26, 27, 28, 29, 30, 31] have been developed to explore and analyze pedestrian flow patterns. To investigate crowd disasters and stampede situations, a significant number of experimental studies [32, 33, 34] have been conducted, relying on empirical methods and computer simulations [35, 36, 37, 38].
Another prominent event is the marathon race, which demands special attention when addressing crowd control measures. Currently, the marathon is a global cultural and sports event representing personal accomplishment. It provides an opportunity for community unity and also enhances local tourism by showcasing culture and landmarks. Indeed, there are numerous studies focused on city marathons [39, 40, 41, 42, 43, 44, 45], which aim to capture and analyze crowd dynamics during the races; these studies utilize various methods, including video analysis, data tracking, and participant surveys, to understand how crowds behave and interact during marathon events. The computer-simulated numerical models [46, 47, 48, 49] for marathons attempt to replicate the real events. These models use mathematical equations and algorithms to represent the interactions and dynamics of the crowd during the marathon.
Most of the previous computational models are based on the social interaction among the individuals of the crowd [24, 50, 51, 52]. In a recent study [44], the authors have utilized tens of thousands of road-race participants in four starting corrals: Chicago 2016, Chicago 2017, Paris 2017, Atlanta 2017 [44], to explain the flowing behaviour of polarized crowds by examining its response to boundary motion. The outcomes of these experimental observations [44] elucidate the crowd dynamics, specifically concerning velocity and density waves influenced by the presence of race staff (frontliners) at the boundaries. The longitudinal velocity wave propagates upstream at a constant speed, which is a characteristic of information transfer in the polarized crowd in all races. However, the orientational fluctuations are suppressed locally in the transverse direction.
In this paper, our objective is to develop a basic and minimal model that can be applicable to polarized crowds and can act as a universal representation. Here, we consider a collection of participants of the crowd on a two-dimensional track. The position and velocity of each participant are updated using Newton's laws of motion and principles of kinematics, to capture the essential dynamics of runners during a marathon event. Its outcomes are comparable with the real-world data obtained from video tracking, enhancing its practical applicability. By providing this simplified model we aim to enhance our understanding of crowd behavior and dynamics, making it more accessible to researchers and practitioners alike.
Motivated by real-world situations, where the crowd can either be guided by some trained staff or move as a completely independent crowd free to move on a desired track, we model the system with and without frontliners, where the frontliners are defined as the race staff who always lead the crowd. In large races, the race staff form a boundary to monitor the crowd motion and spectator areas, ensuring safety and the prevention of interference among the participants [53]. The races without race staff may lack proper organization and management but are still enjoyable, with a higher degree of freedom [31, 34].
The principal result of this paper is the propagation of a hybrid coupled velocity-density wave throughout the system, opposite to the direction of crowd motion, as observed in recent experimental work [44]. However, without frontliners, the spreading of the profile is more prominent, although the basic structure remains the same. Additionally, the speed distribution shows that the propagating wave has a constant speed irrespective of the number of participants in the races considered. Based on the distributions of the longitudinal and transverse velocity, it can be reported that the propagation of the velocity wave primarily occurs in the longitudinal direction. The distribution of density illustrates that the initial dispersion of the individuals in different directions gradually diminishes; the spreading becomes more uniform and stabilizes into a consistent pattern as individuals find their preferred positions, and this is consistent for both the system with and without frontliners. The density and velocity propagate periodically through the system, but the density wave propagates faster than the velocity wave.
## II Model
The computational model used in this study serves as a tool for examining the motion characteristics of individual participants and the collective behavior observed in marathon events. We model a collection of participants on a two-dimensional passage with reflecting boundaries in the transverse direction and no boundaries in the longitudinal direction. We use Newton's laws of motion to describe the fundamental motion of runners. The model takes into account the position \({\bf r}_{i}=(x_{i},y_{i})\) and velocity \({\bf P}_{i}=(P_{xi},P_{yi})\) of the \(i^{th}\) participant. We choose the \(x-\) and \(y-\) directions as the directions parallel \(\parallel\) (longitudinal) and perpendicular \(\perp\) (transverse) to the direction of the moving crowd, respectively, as shown in the schematic of the model in Fig. 1. The velocity of each participant is updated by
\[\frac{d{\bf P}_{i}}{dt}={\bf h}+{\bf F}_{i}+{\bf P}_{\eta_{i}}(r,t) \tag{1}\]
where the left-hand side is simply the inertial term; the mass of the participant is taken as unity (gravity is unimportant for motion in a plane). The three terms on the right-hand side are different forces: (i) a biased drive to polarise the crowd along the track in the \(+x\) direction, with \({\bf h}=(h_{0}\hat{x},0\hat{y})\); (ii) the short-ranged soft-repulsive interaction among the participants, denoted by \({\bf F}_{i}=\sum_{j=1}^{N}{\bf f}_{ij}\) with \({\bf f}_{ij}=(r_{ij}-2\sigma)\hat{r}_{ij}\) and \(r_{ij}=|{\bf r}_{i}-{\bf r}_{j}|\), which accounts for the mutual exclusion among the participants of size \(\sigma\); and (iii) a small random uncorrelated thermal noise \({\bf P}_{\eta}\) of strength \(\Delta_{P}=10^{-4}\) (for both components), taking care of perturbations arising from any kind of random force.
Further, participants move in the direction of their velocity, obeying the following rules:
\[\frac{dx_{i}}{dt}=v_{0}P_{xi}+\zeta_{xi}(t) \tag{2}\]
\[\frac{dy_{i}}{dt}=P_{yi}+\zeta_{yi}(t) \tag{3}\]
In the \(y-\)direction, participants simply follow the velocity in the \(y-\)direction with an additional random uncorrelated noise \(\zeta_{yi}(t)\), whereas in the \(x-\)direction the step size of the participants \(v_{0}\) is chosen from the Gaussian distribution \(P(v_{0})=\frac{1}{\sigma\sqrt{2\pi}}\exp\left[-\frac{(v_{0}-\mu)^{2}}{2\sigma^{2}}\right]\) with mean \(\mu=0.6\) and variance \(\sigma=0.03\), drawn independently at every time step and for each participant. In both directions the velocity is modified with an additional random uncorrelated noise of strengths \(\Delta_{x}=\Delta_{y}=10^{-4}\). The motivation behind the choice of a Gaussian step size in the longitudinal direction is that, in a real race, participants can have random step sizes in the direction of the race; it also avoids overcrowding and the possibility of an arbitrarily large step size obtained from the velocity update in the \(x-\)direction. To investigate the crowd properties during the races in both cases, with and without race staff or frontliners, the model incorporates the presence and absence of frontliners in the simulated races as a key ingredient. The two cases are named the system with frontliners (WF) and without frontliners (WOF), respectively. Fig. 1(a-b) provides a schematic snapshot of our model for the system with and without frontliners at some fixed time.
To include the frontliners in the race, we have chosen some participants as frontliners and updated their positions in such a way that other participants are unable to cross these frontliners.
In our study, we consider different scenarios with varying numbers of participants (\(N\)) within the crowd. Specifically, we choose values of \(N\) in the range \(500-5000\). These different numbers allow us to examine the impact of crowd size on the observed dynamics. For all \(N\) values, we initially start with a mean density of 1.0 within the crowd by arranging the participants in a finite area. The effective size of the participants \(r_{0}=0.8\) is the intrinsic length scale in the system, and the ratio \(\tau=r_{0}/v_{0}=0.8/0.5=1.6\) is the intrinsic time scale. The participants are then allowed to move according to the update equations given in Eqs. 1, 2 and 3. We have performed the simulation with a time step \(\Delta t=6.25\times 10^{-4}\tau\) for a total time of \(6.25\times 4\times 10^{5}\tau\). In the longitudinal direction the crowd is free to explore, whereas in the transverse direction it experiences confinement and the extent of the space is \(W=500r_{0}\).
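A minimal sketch of one update step of Eqs. (1)-(3) is given below; parameter values follow the text where stated, while the bias strength \(h_{0}\), the initial arrangement, the noise amplitudes and the boundary handling details are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of one Euler update of Eqs. (1)-(3) for N participants in 2D.
N = 1000
r0, sigma, h0 = 0.8, 0.8, 1.0            # participant size; bias strength h0 is assumed
W = 500 * r0                              # transverse extent
dt = 6.25e-4 * (r0 / 0.5)                 # time step in units of tau = r0 / v0
rng = np.random.default_rng(0)

pos = np.column_stack((rng.uniform(0, N / W, N), rng.uniform(0, W, N)))  # mean density ~ 1
vel = 0.1 * rng.standard_normal((N, 2))

def step(pos, vel):
    # Soft repulsion for r_ij < 2*sigma (sign chosen so overlapping pairs push apart)
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    overlap = np.where(dist < 2 * sigma, 2 * sigma - dist, 0.0)
    F = ((overlap / dist)[:, :, None] * diff).sum(axis=1)
    # Eq. (1): dP/dt = h + F + noise, with the bias h along +x
    vel = vel + dt * (np.array([h0, 0.0]) + F + 1e-4 * rng.standard_normal((N, 2)))
    # Eqs. (2)-(3): Gaussian step size v0 in x, velocity-following in y, plus noise
    v0 = rng.normal(0.6, 0.03, N)
    pos[:, 0] += dt * (v0 * vel[:, 0] + 1e-4 * rng.standard_normal(N))
    pos[:, 1] += dt * (vel[:, 1] + 1e-4 * rng.standard_normal(N))
    # Reflecting boundaries in the transverse (y) direction
    low, high = pos[:, 1] < 0.0, pos[:, 1] > W
    pos[low, 1] *= -1.0
    pos[high, 1] = 2 * W - pos[high, 1]
    vel[low | high, 1] *= -1.0
    return pos, vel

for _ in range(100):
    pos, vel = step(pos, vel)
```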
## III Results
We first analyse the impact of frontliners on the dynamics of participants in the races. The SM1 movie captures the animation of the system WF for a particular crowd size, \(N=1000\). The corresponding snapshot at time \(t=7\times 6.25\tau\) is shown in Fig. 1. The circles represent the participants and their color represents the magnitude of the \(x-\)component of the velocity, \(P_{x}(t)\). The local density around a participant can be seen from the clustering of the participants. The frontliners are marked by the particles inside the rectangular area in Fig. 1 and by squares in the animation in SM1. We can clearly observe that, starting from an initially homogeneous density and random velocities of the particles, density and velocity patterns are formed (as shown by the bottom panel pictures in Fig. 3). With time these waves traverse along the longitudinal direction of the crowd, such that the pattern of high- and low-density waves persists.
We also analyse the system WOF; the snapshot of the system at time \(t=7\times 6.25\tau\) is shown in Fig. 1(b) and the corresponding animation in SM2. The symbols and colors have the same interpretation as for the system WF. The basic characteristics of the participants are similar in the two cases, but for the system WOF the pattern of local density and velocity is diluted and appears to move more slowly. A detailed comparison of the two systems, with and without frontliners, is discussed later.
Based on the observations from the animations and data on real crowds, various observables, such as the velocity and density waves, the speed and velocity distributions of the participants, the distribution of density in the system, as well as the density and velocity auto-correlations, can be examined to explore the dynamics and characteristics of the moving crowd. These observables are valuable tools for characterizing the dynamic behavior of participants during the races. We treat the crowd as a continuum and investigate its dynamics by measuring the local coarse-grained density \(\rho(\mathbf{r},t)\) and velocity \(p_{x}(\mathbf{r},t)\). The movement of individual participants during the races generates pressure, leading to the emergence of a hybrid velocity-density wave which propagates throughout the system. We further examine the distribution \(P(u)\) of the participants' speed \(u=\sqrt{P_{x}^{2}+P_{y}^{2}}\) and the velocity distributions \(P(P_{i})\) (where \(i\) denotes the \(x\) and \(y\) components of the velocity) to understand the characteristics of the travelling wave. To interpret the spreading and squeezing of the crowd during the races, we also analyze the density distribution \(P(\rho)\) of the crowd in races with different numbers of participants and at different times. This can be beneficial for understanding the crowd distribution in order to manage and control stampede-like situations during marathon events. We additionally calculate the velocity and density auto-correlation functions, \(C_{p_{x}}(t)\) and \(C_{\rho}(t)\) respectively, to obtain information about how the velocity and density are correlated in time. The decay of the velocity auto-correlation provides the characteristic time scale associated with the relaxation of the system; from it we estimate how fast the participants spread and move in the race. Furthermore, the density auto-correlation gives information about how the density fluctuations in the system are correlated over time; by analyzing its decay we can quantify how fast participants diffuse and spread through the system. Below, the results of our numerical simulation for each observable are described one by one.
_Velocity and density profile:-_ To employ a continuum approach to analyze the large-scale motion within the crowd, we define the local coarse-grained density, denoted as \(\rho(x,t)\), and the velocity, represented as \(p_{x}(x,t)\), at different spatial positions \(x\) and times \(t\). The coarse-grained local density and velocity are defined as follows. We divide the whole track on which the race takes place into \(n=500\) rectangular cells of size \(\Delta x=10r_{0}\) in the \(x-\)direction and breadth equal to the full width of the track in the \(y-\)direction. The \(x=0\) line is always fixed at the mean of the \(x-\)coordinates of the frontliners. The distances in the longitudinal direction are measured with respect to the mean \(x-\)coordinate of the frontliners for the system WF and with respect to the front of the wave for the system WOF. We then calculate the density \(\rho(x,t)\) in each cell by counting the number of participants in the cell and dividing by the area of the cell. To determine the coarse-grained velocity \(p_{x}(x,t)\) we add the longitudinal velocities of the participants in the cell. Fig. 2 depicts this procedure for determining the coarse-grained density \(\rho(x,t)\) and velocity \(p_{x}(x,t)\).
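A minimal sketch of this coarse-graining procedure is shown below; the cell size, number of cells and track width follow the values quoted in the text, while the array and function names are illustrative.

```python
import numpy as np

def coarse_grain(x, Px, x_front, r0=0.8, n_cells=500, width=500 * 0.8):
    """Coarse-grained density rho(x,t) and velocity p_x(x,t) behind the front."""
    dx_cell = 10 * r0                                 # cell size Delta x = 10 r0
    s = x_front - x                                   # distance behind the frontliners / wave front
    edges = np.linspace(0.0, n_cells * dx_cell, n_cells + 1)
    idx = np.digitize(s, edges) - 1
    rho = np.zeros(n_cells)
    px = np.zeros(n_cells)
    area = dx_cell * width                            # cell area (Delta x times track width W)
    for c in range(n_cells):
        sel = idx == c
        rho[c] = np.count_nonzero(sel) / area         # participants per unit area
        px[c] = Px[sel].sum()                         # summed longitudinal velocities in the cell
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, rho, px
```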
Fig. 3 provides a visual representation of the crowd flow by showing the propagation of the coarse-grained density \(\rho(x,t)\) and velocity \(p_{x}(x,t)\) waves at three times. The direction of flow of the crowd is downward and the transverse spread of the track is in the horizontal direction. Fig. 3 (a-c) shows the local density \(\rho(x,t)\) and (d-f) the local velocity \(p_{x}(x,t)\) at three different times \(t=(0,50,100)\times 0.625\tau\) for the system with frontliners. Similarly, Fig. 3 (g-i) shows the local density \(\rho(x,t)\) and (j-l) the local velocity \(p_{x}(x,t)\) at the same three times for the system without frontliners.
Starting from the initial time, the density and velocity profiles propagate upward (opposite to the direction of crowd motion), indicating a hybrid coupled wave of density and velocity transmitted through the system. It is clearly noticeable that with time an initially homogeneous density of participants splits into a density wave, which moves in the direction opposite to that of the crowd. Similarly, the initially almost uniform profile of the longitudinal velocity splits into velocity bands moving in the same direction as the density wave. Clearly, the density and velocity are not always in phase; the relation between the density and velocity fields is non-monotonic in nature. Regions with lower velocity encourage an increase in the local density, but beyond some moderate velocity the density starts to decrease. Hence the local density \(\rho(p)\) has a non-monotonic dependence on the local velocity, as shown in Fig. 4(a).
Based on the observations for WOF, it appears that the pattern of the local density and velocity follows the same trend as for the system with frontliners; only the spread of density and velocity is larger and the intensity of the wave weakens. The relation between the local density \(\rho(p_{x})\) and the velocity is non-monotonic in nature, as shown in Fig. 4(b). The key difference in the dependence of local density on local velocity for the two cases is that for the system WOF it follows a nearly Gaussian distribution
Figure 1: Figures (a-b) illustrate the schematic of our computational model for the races with and without frontliners, respectively. The images are generated from the simulation. The circles depict the participants in the races in both scenarios and the colors of the circles represent the local \(x-\) component of the velocity \(P_{x}\) of the participants. **h** is the external drive that polarizes the crowd. In figure (a), the squares inside the rectangle represent the frontliners. \(W\) is the width of the track in the transverse direction.
Figure 2: The picture depicts how the track is divided into \(n\) rectangular cells. Here, the x-axis represents \(x_{0}-x\), where \(x\) and \(x_{0}\) are the coordinates of the participants and the mean \(x\)-coordinate of the frontliners, respectively. The zoomed picture presents a single cell inside which the coarse-grained density \(\rho(x,t)\) and velocity \(p_{x}(x,t)\) are calculated. The colors of the circles have the same meaning as in Fig. 1.
with a range of the distribution wider than that for the system with frontliners, whereas for the system WF, \(\rho(p_{x})\) deviates from the Gaussian with a smaller range of \(p_{x}\). A Gaussian dependence of the local density \(\rho\) of the crowd on the local velocity was previously found in the study of [39], where the authors empirically investigated the velocity distributions of finishers in the New York City, Chicago, Berlin and London marathons without any race staff.
_Speed distribution_:- Fig. 5 illustrates the distribution of the speeds \(u\) of the participants for both cases, in the presence and in the absence of frontliners. The distribution of local speed \(P(u)\) is obtained by calculating the distribution of the speeds of all the participants. Fig. 5(a) depicts the speed distribution of participants in the presence of frontliners at a specific time \(t=30\times 6.25\tau\). The figure showcases the impact of different system sizes (\(N=1000,1500,2500,3000\)), represented by distinct colors. From the figure, it is evident that the speed distribution is Gaussian with a non-zero mean for all \(N\). Hence, the wave propagates throughout the system with an approximately constant speed, denoted as \(c_{0}(t)\), which measures around \(0.8\). We further calculated the speed distributions at other times, \(t=10\times 6.25\tau\) and \(t=25\times 6.25\tau\), and find that the mean of the speed distribution remains invariant with time as well. Hence, the initially moving crowd relaxes quickly, after which it propagates with a constant speed and a kind of steady state develops. Further, since the mean speed does not depend on the size of the crowd, the travelling wave in the crowd is non-dispersive in nature, unlike normal waves in a medium [54].
In Fig. 5(b), the distribution \(P(u)\) is shown for the scenario without frontliners.
Figure 4: The plots (a-b) showcase the local density (\(\rho\)) vs. local velocity (\(p_{x}\)) at a specific time for the race having 1000 participants in the presence and absence of frontliners respectively. The symbols are from the results of the numerical simulation and lines are the fit to Gaussian distribution.
Figure 3: The plots (a-c) and (g-i) showcase the coarse-grained density \(\rho(x,t)\) at three successive times t=\((0,5,10)\times 6.25\tau\) in the presence and absence of frontliners respectively. The plots (d-f) and (j-l) represent the coarse-grained velocity \(p_{x}(x,t)\) for the same.
The speed distribution remains the same for different system sizes and has the same mean speed \(\approx 0.8\). This observation suggests that the presence or absence of frontliners does not have a significant impact on the mean travelling speed of the wave, indicating the system's robustness in response to external influencers/perturbations. The two distributions for the system with and without frontliners have one key difference: \(P(u)\) for the system WF has a long tail, whereas no such tail is observed for the system WOF. Hence, for the system with frontliners a few participants move with relatively higher speeds. Participants adjacent to the frontliners mainly contribute to such high speeds (Fig. 1 and the animation SM1 visually demonstrate this occurrence).
_Velocity distribution_:- To further check the fluctuations of the participants' velocity in the longitudinal and transverse directions, we calculate the distributions of the two components of the velocity, \(P_{x}\) and \(P_{y}\), for three different sizes of the crowd. Fig. 6 (a-b) depicts the longitudinal and transverse components of the crowd's velocity distribution at a specific time for the system with frontliners and for three different system sizes (\(N=1000,1500,3000\)), and (c-d) represents the identical plots without frontliners in the same sequence. From the velocity distributions in Fig. 6 for the systems with and without frontliners, it can be observed that the transverse component has zero mean, whereas the longitudinal component exhibits a peak at some non-zero value. These findings suggest that there is no significant movement in the transverse direction and the propagation of the velocity wave primarily occurs in the longitudinal direction. The distribution is narrow for the transverse direction whereas it is much broader for the longitudinal direction, which suggests that the moving crowd has larger fluctuations in the longitudinal direction in comparison to the transverse direction, the complete opposite of what has been observed for polar flocks [55; 56; 57; 58; 59]. Large fluctuations in the longitudinal direction are observed in real marathon races [44]. The longitudinal direction of the crowd has an effective driving force \(h_{0}\), which should lead to suppressed velocity fluctuations, yet the velocity is instead suppressed in the transverse direction. This may occur due to the confinement introduced in the transverse direction. Increasing the size of the flock increases the role of the confinement and leads to a narrower distribution of the velocity in the transverse direction, as can be clearly seen in Figs. 6(a-d).
The two distributions \(P(P_{x})\) and \(P(P_{y})\) have some overlap for the system with frontliners, whereas there is no overlap for the system without frontliners. This is mainly due to the restricted motion of the participants caused by the presence of frontliners in the direction of the moving crowd, which leads to a relatively narrower distribution in the longitudinal direction in comparison to the case without frontliners. This in turn leads to an overlap of the two distributions for the system with frontliners.
_Density distribution_:- Until now we have focused on the characteristics of the crowd based on the velocity wave, but the density of the moving crowd also has interesting properties, as is clear from the snapshots shown in Fig. 3 and the animations in SM1 and SM2. Hence we now focus on the characteristics of the crowd based on the distribution of the local density, \(P(\rho)\). Fig. 7 (a-c) and (d-f) illustrate the density histograms of the system at three different times \(t=(10,20,30)\times 6.25\tau\), comparing the two cases of the presence and the absence of frontliners. As shown in Fig. 7(a-c), at the early time, Fig. 7(a), the distribution is almost flat, corresponding to the uniform arrangement of participants, but with time the distribution splits into one peak at relatively high density and a nearly uniform distribution at lower densities. An initial spreading phase occurs as the marathon begins, which leads to a relatively low crowd density as participants have enough space to move freely. On the
Figure 5: (color online) The plots (a-b) showcase the distribution of speed \(P(u)\) of the participants at a specified time (t=30 \(\times\) 6.25\(\tau\)) for different system sizes (races), obtained by varying the number of participants in the races (\(N=1000,1500,2500,3000\)), in the presence and absence of frontliners respectively.
Figure 6: (color online) (a-b) illustrate the distributions of the longitudinal and transverse components of the crowd's velocity \(P(p_{i,j})\) at a particular instant (t=30 \(\times\) 6.25\(\tau\)) for different \(N=1500,3000\) respectively, in the presence of frontliners. (c-d) show the corresponding plots in the absence of frontliners.
other hand, as the frontliners move forward faster, they tend to create a denser region behind them. Due to this crowd aggregation, a density band is formed and this band propagates backwards through the system. Subsequently, the slower participants move ahead so that the crowd gradually spreads out again, leading to a decrease in density. Fig. 7(d-f) depicts the same for times \(t=(10,20,30)\times 6.25\tau\) for the system without frontliners. Initially, individuals tend to disperse in different directions because there is no
Figure 8: The figures (a-c) and (d-f) depict histograms of density at a specific time (\(t\)=\(30\times 6.25\tau\)), showing the distribution of individuals within different system sizes (N=1000, 3000 and 4000) in the presence and absence of frontliners respectively. The red dotted lines represent the density of the frontliners.
Figure 7: In the plots (a-c), the histograms display the density at three different times \(t\)=(10, 20 and 30)\(\times 6.25\tau\) for a race with \(N\) = 3000 individuals in the presence of the frontliners. Here the x-axis represents the variable \(\rho\) and the y-axis represents the distribution of density \(P(\rho)\). The red dotted lines represent the density of the frontliners. The plots (d-f) showcase the same in the absence of frontliners.
central guiding force regulating the movement of the crowd. Afterwards, the initial homogeneity gradually diminishes. As time progresses, the mean density decreases and the tail of the distribution grows.
Further, we also calculated \(P(\rho)\) at a fixed time and for different system sizes \(N=1000,3000,4000\) for both the systems WF and WOF, as shown in Figs. 8(a-c) and (d-f) respectively. For the system WF, \(P(\rho)\) shows a bimodal structure for all system sizes. For the system WOF as well, the nature of the distribution remains the same for all system sizes, but the mean of the distribution shifts towards higher density in both cases. This implies that the characteristics of the density wave remain invariant with respect to the system size. The shift in the mean density in both cases is due to the different relaxation times for different \(N\).
_Auto-correlation functions_:- To further check the correlations within the crowd with respect to time, we measured the auto-correlation functions of the velocity and the density, defined as \(C_{p_{x}}(t)=\langle\delta\mathbf{p_{x}}(t)\cdot\delta\mathbf{p_{x}}(t+\delta t)\rangle\) and \(C_{\rho}(t)=\langle\delta\rho(t)\delta\rho(t+\delta t)\rangle\) respectively. The fluctuations in velocity and density are defined as \(\delta p_{x}(t)=(p_{x}(t)-p_{0})\) and \(\delta\rho(t)=(\rho(t)-\rho_{0})\), where \(p_{0}\) and \(\rho_{0}\) are the mean values of \(p_{x}(t)\) and \(\rho(t)\) over time. \(C_{p_{x}}(t)\) and \(C_{\rho}(t)\) are averaged over all the cells in the longitudinal direction and over four different sizes of the race. In Fig. 9(a-b) we show the plots of the velocity \(C_{p_{x}}(t)\) and density \(C_{\rho}(t)\) auto-correlation functions of the two fields for the systems WF and WOF respectively. Both \(C_{p_{x}}(t)\) and \(C_{\rho}(t)\) show an early-time exponential decay followed by late-time oscillations. The decay of the density correlation is sharper than that of the velocity, as shown in the insets of Fig. 9(a-b). Thus we can say that the density wave propagates faster than the velocity wave. This is very clear from the animation shown in SM1, where the density travels faster than the velocity in the moving crowd. The same can be seen from the one-dimensional plots of the coarse-grained density and velocity in SM3.
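The auto-correlation estimator used here can be sketched as follows; it assumes stationarity and a time series of the coarse-grained fields stored as an array of shape (time, cells), which is an assumption about the data layout rather than the authors' code.

```python
import numpy as np

def autocorrelation(field_t):
    """C(dt) = <delta f(t) delta f(t + dt)>, averaged over time origins and cells."""
    delta = field_t - field_t.mean(axis=0, keepdims=True)   # fluctuations about the time mean
    T = delta.shape[0]
    C = np.empty(T)
    for lag in range(T):
        C[lag] = np.mean(delta[:T - lag] * delta[lag:])      # average over t and over cells
    return C

# usage: C_rho = autocorrelation(rho_history); C_px = autocorrelation(px_history)
```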
## IV Discussion
We have presented a minimal computational model to replicate the crowd in a marathon race and examined the properties of the moving crowd. Understanding the dynamical and statistical properties of the crowd can help us to control and manage it, which can be beneficial for many socio-economic purposes.
We have investigated how crowds behave during races, comparing scenarios with and without designated leaders. This stems from real-world situations where crowds can either be guided by trained staff or left to move freely along a chosen path. In larger races, organizers play a pivotal role in ensuring events run smoothly and professionally by acting as a boundary and overseeing crowd movement. Races without this oversight may be less organized but offer a more liberating experience. We present a model that incorporates both scenarios to accurately simulate the actual marathon experience.
Most of the outcomes of our study fit qualitatively with the observables reported in [44] for different races, based on the empirical data obtained by video-tracking. The central result of our paper is the upstream propagation of a hybrid coupled velocity-density wave throughout the system, as observed in the recent experimental work [44]. The speed distribution shows that the propagating wave has a constant speed irrespective of the number of participants in the races considered. This makes the characteristics of the travelling wave in the polarised crowd very different from a usual wave moving through a medium, which is dispersive in nature, i.e. the speed of the wave depends on the wavelength. The distributions of the longitudinal and transverse velocities show that the velocity fluctuations primarily occur in the longitudinal direction and the fluctuations in the transverse direction are highly suppressed, unlike the velocity fluctuations in a polar flock [55; 56; 57; 58; 59]. We compared our results for the two cases, the system with and without frontliners, and observed that the key characteristics of the moving crowd are similar for both types of races. Only the contrast of the density is weaker for the system without frontliners, due to the completely free motion in the direction of crowd propagation.
Our findings suggest future directions of research, such as examining the impact of various types of boundaries positioned perpendicular to the passage. Another prospective pathway of future research involves the introduction of external perturbations to investigate the response, i.e. stampede-like situations [27; 60].
###### Acknowledgements.
P.J. gratefully acknowledges the DST INSPIRE fellowship for funding this project.
Figure 9: The plots (a-b) depict the velocity auto-correlation (\(C_{P_{x}}(t)\)) vs. time (\(t\)) and averaged density auto-correlation (\(C_{\rho}(t)\)) vs. time (\(t\)) in the presence and absence of frontliners respectively.
The support and the resources provided by the PARAM Shivay Facility under the National Supercomputing Mission, Government of India, at the Indian Institute of Technology, Varanasi are gratefully acknowledged by all authors. S.M. thanks DST-SERB India, ECR/2017/000659, CRG/2021/006945 and MTR/2021/000438 for financial support. P.J. and S.M. also thank the Centre for Computing and Information Services at IIT (BHU), Varanasi.
|
2307.10917 | Efficient quantum amplitude encoding of polynomial functions | Loading functions into quantum computers represents an essential step in
several quantum algorithms, such as quantum partial differential equation
solvers. Therefore, the inefficiency of this process leads to a major
bottleneck for the application of these algorithms. Here, we present and
compare two efficient methods for the amplitude encoding of real polynomial
functions on $n$ qubits. This case holds special relevance, as any continuous
function on a closed interval can be uniformly approximated with arbitrary
precision by a polynomial function. The first approach relies on the matrix
product state representation. We study and benchmark the approximations of the
target state when the bond dimension is assumed to be small. The second
algorithm combines two subroutines. Initially we encode the linear function
into the quantum registers with a shallow sequence of multi-controlled gates
that loads the linear function's Hadamard-Walsh series, exploring how
truncating the Hadamard-Walsh series of the linear function affects the final
fidelity. Applying the inverse discrete Hadamard-Walsh transform transforms the
series coefficients into an amplitude encoding of the linear function. Then, we
use this construction as a building block to achieve a block encoding of the
amplitudes corresponding to the linear function on $k_0$ qubits and apply the
quantum singular value transformation that implements a polynomial
transformation to the block encoding of the amplitudes. This unitary together
with the Amplitude Amplification algorithm will enable us to prepare the
quantum state that encodes the polynomial function on $k_0$ qubits. Finally we
pad $n-k_0$ qubits to generate an approximated encoding of the polynomial on
$n$ qubits, analyzing the error depending on $k_0$. In this regard, our
methodology proposes a method to improve the state-of-the-art complexity by
introducing controllable errors. | Javier Gonzalez-Conde, Thomas W. Watts, Pablo Rodriguez-Grasa, Mikel Sanz | 2023-07-20T14:40:55Z | http://arxiv.org/abs/2307.10917v6 | # Efficient quantum amplitude encoding of polynomial functions
###### Abstract
**Loading functions into quantum computers represents an essential step in several quantum algorithms, such as quantum partial differential equation solvers. Therefore, the inefficiency of this process leads to a major bottleneck for the application of these algorithms. Here, we present and compare two efficient methods for the amplitude encoding of real polynomial functions. This case holds special relevance, as any continuous function on a closed interval can be uniformly approximated with arbitrary precision by a polynomial function. The first approach relies on the matrix product state representation. We study and benchmark the approximations of the target state when the bond dimension is assumed to be small. The second algorithm combines two subroutines. Initially we encode the linear function into the quantum registers with a shallow sequence of multi-controlled gates that loads the linear function's Hadamard-Walsh series coefficients. Applying the inverse discrete Hadamard-Walsh transform turns the series coefficients into an amplitude encoding of the linear function. Then, we use this construction as a building block to achieve an \(\mathcal{O}(n)\) block encoding of the amplitudes corresponding to the linear function and apply the quantum singular value transformation that implements a polynomial transformation to the block encoding of the amplitudes. Additionally, we explore how truncating the Hadamard-Walsh series of the linear function affects the final fidelity of the target state, reporting high fidelities with minimal resources.**
## 1 Introduction
Over the past few decades, there has been a significant interest in quantum computing due to its theoretical capacity to surpass classical information processing for certain specific application areas. Despite the fact that current quantum computers are hindered by noise and decoherence, there have been successful experimental demonstrations of quantum advantage [1, 2, 3]. However, these achievements have yet to have any practical relevance, leaving the search for useful applications ongoing. Many promising quantum algorithms, such as solving systems of linear equations [4, 5], performing data fitting [6], computing scattering cross sections [7, 8], pricing financial derivatives [9, 10, 11, 12, 13, 14, 15], and in general solving differential equations [16, 17, 18, 19, 20], require the efficient loading of classical data into quantum devices. Unfortunately, this step remains a challenging problem, and it is a major bottleneck for the practical application of quantum computation, especially within the emerging field of quantum machine learning in the NISQ-Era [21, 22, 23].
In this regard, the main drawback comes from the fact that there is no universal loading protocol and each particular case must be carefully studied in order to design bespoke encoding protocols that cater to the specific problem to be solved [24, 25, 26]. In this sense, one of the main embedding techniques is amplitude encoding, which loads the values of a discretized, normalized complex function into the amplitudes of the quantum
states [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. Several approaches have already been presented in the literature for implementing the amplitude embedding, many requiring a huge (exponential) number of resources (ancillas and gates) [28, 29, 30, 31, 32], oracles [33, 34, 35, 36, 37, 38], sparsity in the quantum state [39], training variational circuits [40, 41, 42], truncating a series expansion [43, 44], using the quantum singular value transformation [27], or matrix product states [45, 46, 47, 48, 49, 50, 51, 52, 53].
There is a growing need to load polynomial functions due to the increasing number of applications of quantum computing. Specifically, in finance [9, 10, 11, 12, 13, 14, 15, 54], the ability to efficiently load first-order polynomials, \(f(j)=aj+b\), allows for options pricing via quantum amplitude estimation (QAE) without coherent arithmetic [55]. In this sense, applying QAE allows us to extract the amplitude \(\sum_{j}f(j)p(j)=\mathbb{E}[f(X)]\), where \(X\) is a random variable with probability mass function \(p(j)\). This approach can be generalized to the multidimensional setting, where the ability to load multivariate linear functions yields efficient algorithms for pricing basket options [56]. Furthermore, quantum circuits for efficiently loading the linear function can be used to construct a block encoding of the identity function, thus allowing us to apply the quantum singular value transformation [57, 58, 59, 60, 61, 62] (QSVT) in order to obtain any polynomial amplitude encoding [63, 27]. This is a powerful tool capable of uniformly approximating the encoding of any continuous real function defined on a closed interval with arbitrary precision [64, 65].
In this article we present two methods for implementing the amplitude encoding of real-valued polynomials into quantum computers with linear complexity. The first one is based on the matrix product state (MPS) representation of quantum states and its implementation on a quantum computer. We explore how approximating the MPS affects the achieved fidelity and the resource requirements [66, 67, 68, 69, 70, 71]. The second method consists of two steps: first, we propose a novel protocol to efficiently load the linear function based on the discrete Hadamard-Walsh transform (DHWT) [72], which we use to achieve a block encoding of the amplitudes with complexity \(\mathcal{O}(n)\). Second, we use the QSVT to implement a polynomial transformation on the eigenvalues of the block encoding to achieve the desired target state [63].
The article is structured as follows. First, we review the loading of polynomials via MPS. Next, we introduce our new methodology that combines the DHWT to load the linear function with the QSVT to implement the polynomial transformation of the amplitudes. Finally, we show numerical results and compare our method with previous results in the literature.
## 2 Loading of polynomials via MPS
In this section we analyze the methods based on MPSs to encode polynomials into the amplitudes of a quantum state according to the following definition.
**Definition 1**: _Let \(P(x):\mathds{R}\rightarrow\mathbb{C}\), be a polynomial with complex coefficients. We define the \(n\)-qubit normalized representative state of \(P(x)\) as the quantum state \(\ket{\Phi_{P}}=\frac{1}{C_{P}}\sum_{j=0}^{2^{n}-1}P(j)\ket{j}\), with \(C_{P}\) the normalization factor._
In the particular case of the linear function, i.e. \(P(x)=x\), the normalization factor is \(C_{P}=\sqrt{(2^{n+1}-1)(2^{n}-1)2^{n}/6}\).
The complete description of a quantum state of \(n\) linearly connected qubits (sites) can be represented by a tensor, \(A\), with \(n\) physical indices. A state of this kind is referred to as a matrix product state (MPS) [66, 67, 68, 69]. Each physical index is assigned to a qubit and has dimension \(d=2\), i.e., the index is either \(0\) or \(1\). For a specific choice of each physical index \(j_{i}\), the tensor's values give rise to a collection of \(n\) matrices whose product is equal to the amplitude of the computational basis state \(\ket{j_{n-1}\ldots j_{0}}\). For open boundary conditions, the MPS representation of a quantum state is given by
\[\ket{\Psi}=\sum_{j_{0}\cdots j_{n-1}}\sum_{\alpha_{1}\cdots\alpha_{n-1}}A^{[1]}_{j_{0},\alpha_{1}}A^{[2]}_{j_{1},\alpha_{1}\alpha_{2}}\cdots A^{[n]}_{j_{n-1},\alpha_{n-1}}\ket{j_{n-1}\ldots j_{0}}. \tag{1}\]
This representation has \(n-2\) tensors of order \(3\), denoted as \(A^{[i]}_{j_{i},\alpha_{i-1}\alpha_{i}}\), \(\forall\;i\neq 0,n-1\), and \(2\) external tensors \(A^{[1]}_{j_{0},\alpha_{1}}\) and \(A^{[n]}_{j_{n-1},\alpha_{n-1}}\) of order \(2\), with \(j_{i}\) the physical indices ranging from \(0\) to \(d-1\) and \(\alpha_{i}\) the virtual indices from \(0\) to \(\chi_{i}-1\). The virtual dimensions, \(\chi_{i}\), connecting each pair of tensors via the virtual indices are referred to as
the bond dimensions. We define the bond dimension of the entire MPS as \(\chi=\max_{i}\,\chi_{i}\).
When it comes to representing polynomials as quantum states, Grasedyck [45] proved that for a real-valued polynomial of degree \(d\) encoded in the amplitudes of a quantum state according to Def. 1, the MPS bond dimension is at most \(\chi=d+1\).
### Obtaining the exact MPS
The MPS representation of a quantum state \(|\Psi\rangle\) is not unique, as different choices of \(A^{[i]}_{j_{i},\alpha_{i-1}\alpha_{i}}\) can yield the same quantum state. We focus on the left canonical form, which implies the following conditions
\[\sum_{j_{0},\alpha_{1}}A^{[1]}_{j_{0},\alpha_{1}}A^{[1]\dagger}_{j_{0},\alpha_{1}}=1, \tag{2}\] \[\sum_{j_{i},\alpha_{i}}A^{[i]}_{j_{i},\alpha_{i-1}\alpha_{i}}A^{[i]\dagger}_{j_{i},\alpha^{\prime}_{i-1}\alpha_{i}}=\delta_{\alpha_{i-1}\alpha^{\prime}_{i-1}},\] (3) \[\sum_{j_{n-1}}A^{[n]}_{j_{n-1},\alpha_{n-1}}A^{[n]\dagger}_{j_{n-1},\alpha^{\prime}_{n-1}}=\delta_{\alpha_{n-1}\alpha^{\prime}_{n-1}}. \tag{4}\]
To obtain the MPS representation of a quantum state, the singular value decomposition (SVD) is employed [73]. Initially, the quantum state, represented as a tensor \(A\) of rank \(n\) and dimension \(2\), is reshaped into a matrix by combining all the indices except one. The SVD is then applied to this matrix, decomposing it into the matrix of left singular vectors, \(U\), the matrix of singular values, \(\Sigma\), and the matrix of right singular vectors, \(V^{\dagger}\). As we will depict in the following section, it is possible to truncate the smallest singular values by choosing a desired bond dimension \(\chi\) and keeping only the \(\chi\) largest values. Then, the matrix of singular values is contracted with the matrix of right singular vectors (left canonical form), and the resulting matrix, \(\Sigma V^{\dagger}\), is reshaped back into a tensor that now has an extra virtual index. This process, shown in Fig. 1, is iterated for each physical index. Finally, we obtain an MPS that approximates the original quantum state while providing a compact and efficient representation.
The computational cost of performing the SVD of an \(m\times n\) matrix is typically \(\mathcal{O}(\min(mn^{2},m^{2}n))\) and must be taken into account as a preprocessing cost in this methodology. This cost can be reduced for sparse or structured low-rank matrices. In the case of an exact matrix product state (MPS), the bond dimension doubles for each connection between core tensors. This leads to a maximum bond dimension of \(2^{\lfloor n/2\rfloor}\), occurring in the middle of the MPS. As a result, the computational cost of the entire algorithm is primarily determined by the SVD of this central square matrix, i.e. \(\mathcal{O}(2^{3n/2})\), which exhibits exponential scaling with the number of qubits. Therefore, this non-negligible classical pre-processing cost must be considered in the overall complexity of the MPS algorithm.
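The iterative SVD procedure can be prototyped in a few lines of NumPy; the sketch below builds a left-canonical MPS with an optional bond-dimension cut and verifies, on the normalized linear ramp, that \(\chi=2\) already reproduces the state exactly. The function names and the qubit-ordering convention (most significant qubit first) are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mps_from_statevector(psi, n, chi_max=None):
    """Successive SVDs of a 2**n amplitude vector into MPS cores of shape (chi_l, 2, chi_r)."""
    cores, M, chi_left = [], np.asarray(psi, dtype=float), 1
    for _ in range(n - 1):
        M = M.reshape(chi_left * 2, -1)
        U, S, Vh = np.linalg.svd(M, full_matrices=False)
        if chi_max is not None:                      # keep only the chi_max largest singular values
            U, S, Vh = U[:, :chi_max], S[:chi_max], Vh[:chi_max]
        cores.append(U.reshape(chi_left, 2, S.size))
        M = S[:, None] * Vh                          # absorb Sigma V^dagger to the right
        chi_left = S.size
    cores.append(M.reshape(chi_left, 2, 1))
    return cores

def mps_to_statevector(cores):
    vec = cores[0].reshape(2, -1)
    for core in cores[1:]:
        vec = np.tensordot(vec, core, axes=([-1], [0])).reshape(-1, core.shape[-1])
    return vec.reshape(-1)

n = 6
ramp = np.arange(2**n, dtype=float)
ramp /= np.linalg.norm(ramp)                         # normalized linear function
cores = mps_from_statevector(ramp, n, chi_max=2)
print(np.allclose(mps_to_statevector(cores), ramp))  # True: bond dimension 2 is exact
```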
In the particular case of the linear function, the analytical expression of the exact MPS [74], which has bond dimension \(\chi=2\), reads
\[|\Phi_{L}\rangle=\sum_{j_{0}\cdots j_{n-1}}\left(j_{0}/C\quad 1\right)\begin{pmatrix}1&0\\ 2j_{1}/C&1\end{pmatrix}\cdots\begin{pmatrix}1\\ 2^{n-1}j_{n-1}/C\end{pmatrix}|j_{n-1}\ldots j_{0}\rangle\,. \tag{5}\]
### Approximation of the exact MPS
While in the worst-case scenario it is possible to exactly represent any tensor as an MPS by allowing the bond dimensions to grow up to \(2^{\lfloor n/2\rfloor}\), we can potentially achieve an exponential compression by approximating the initial tensor using \(\mathcal{O}(2n\chi^{2})\) elements, given a fixed bond dimension \(\chi_{i}=\chi\ \forall\ i\). However, as the entanglement of the state to be approximated increases, the minimum bond dimension \(\chi\) required to obtain a good description of the state using an approximated MPS representation also grows [75, 76]. Notice that when truncating the maximum bond dimension of the MPS, the complexity of the classical preprocessing becomes \(\mathcal{O}(n\chi^{3})\).
Figure 1: Iterative singular value decomposition (SVD) procedure for obtaining the MPS from a tensor with \(n=3\) physical indices. The process involves \(n-1\) uses of the SVD. Notably, the matrices containing the singular values are absorbed to the right, resulting in the left canonical form of the MPS.
In order to estimate the error incurred when approximating by truncating the bond dimension in the MPS representation, it is necessary to consider that each time we perform an SVD, we discard singular values. The error incurred when approximating a matrix by its \(k\) largest singular values is determined by the Eckart-Young theorem [77] and, in the Frobenius norm, corresponds to the square root of the sum of the squares of the omitted singular values. Furthermore, in the complete process, this approximation is performed for each core tensor. Thus, the overall error of the MPS approximation can be upper bounded in the Frobenius norm as
\[\|A-\tilde{A}\|_{F}^{2}\leq\sum_{i=1}^{n-1}\left(\sum_{k=\chi_{i}+1}^{\dim( \Sigma_{i})}\sigma_{k}^{2}(\Sigma_{i})\right), \tag{6}\]
where \(\tilde{A}\) denotes the approximated MPS. This equation encompasses the error contributions from the \(n-1\) singular value matrices \(\Sigma_{i}\), each characterized by a fixed bond dimension \(\chi_{i}\). In order to keep the approximation error low, it is crucial that the spectrum of each \(\Sigma_{i}\) decays rapidly.
In this regard, the state-of-the-art technique for preparing smooth, differentiable, real functions using matrix product states, Ref. [46], conjectured that the singular values exhibit exponential decay. This allows for good approximations of such functions while maintaining low bond dimension values, thus significantly reducing the required resources. The method relies on the fact that, for such functions, the entanglement entropy, quantified by the von Neumann entropy, scales logarithmically with the number of qubits, \(n\), and therefore these functions can be efficiently approximated by an MPS with a low bond dimension, as argued in Ref. [75]. Although empirical results with this technique applied to polynomials show good performance with \(\chi=2\), the upper bound of the von Neumann entropy depends on the maximum derivative value of the function within the considered interval. Consequently, as we discuss later, for certain polynomials the truncation to \(\chi=2\) does not yield a satisfactory approximation.
In the particular case of the linear function, the analytical expression of Eq. (5) reveals that achieving the exact MPS representation requires a bond dimension of \(\chi=2\). However, one might consider reducing the bond dimension to \(1\). To explore this possibility, when studying the linear function our focus will be on two specific cases: \(\chi=2\) (exact MPS) and \(\chi=1\) (product state).
### From MPS to circuit
Let us now analyze the resources needed to translate an MPS, in either case, exact or approximated, into a quantum circuit [79, 70, 69, 78]. First, we assume that the bond dimensions are powers of two, padding the tensors with zeros if needed. Therefore, we can assume that these tensors are isometries of size \(2\chi_{i}\times\chi_{i+1}\), which can then be embedded into unitary gates acting on \(n_{\chi_{i}}=\max\{\log_{2}(2\chi_{i}),\log_{2}(\chi_{i+1})\}\) qubits. The arrangement of gates in the MPS conversion process, following a linear topology, results in the formation of a single layer of multi-qubit unitaries, whose size \(n_{\chi_{i}}\) depends on the bond dimension. These unitaries are organized in a staircase topology, commonly referred to as a linear circuit layer [78]. Therefore, the complexity is now equivalent to implementing a cascade of \(n-1\) multi-qubit unitaries. In the case \(\chi=2\), this complexity scales as \(\mathcal{O}(n-1)\) two-qubit unitaries. On top of this, the cost of decomposing these unitaries into two-qubit gates must be taken into account. This cost is considerably larger when decomposing multi-qubit unitaries and might result in an exponential number of two-qubit gates [80, 81]. Additionally, one might consider implementing circuits that approximate the unitaries [78]. Lastly, in Ref. [71] the authors proposed a method for loading translation-invariant short-correlation MPS with an error \(\epsilon\) in depth \(T=\mathcal{O}(\log(N/\epsilon))\).
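To illustrate the embedding of the MPS cores into gates, the following sketch completes a \(2\chi_{i}\times\chi_{i+1}\) isometry to a square unitary by appending an orthonormal basis of the orthogonal complement of its column space; padding dimensions to powers of two is assumed to have been done beforehand, and the helper name is an illustrative choice.

```python
import numpy as np
from scipy.linalg import null_space

def embed_isometry(V):
    """Return a unitary whose first columns coincide with the isometry V (V^dag V = I)."""
    comp = null_space(V.conj().T)       # orthonormal basis orthogonal to range(V)
    return np.hstack([V, comp])

# example: a random (2*chi) x chi isometry with chi = 2
chi = 2
V, _ = np.linalg.qr(np.random.randn(2 * chi, chi))
U = embed_isometry(V)
print(np.allclose(U.conj().T @ U, np.eye(2 * chi)))   # True: U is unitary
print(np.allclose(U[:, :chi], V))                     # True: V sits in the first columns
```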
We can conclude that even though the MPS of polynomials have an efficient representation, we have to consider the cost of obtaining the SVD and the cost of the decomposition of the unitaries to implement their circuit. These costs can be significantly reduced with some approximations that truncate the bond dimension to \(\chi=2\), although this introduces a non-controllable source of error in the representation of the state [27, 46, 82, 47]. Additionally, one can also consider the possibility of approximating the large unitaries of the circuit that loads the MPS [78], which might result in a good representation, although there is no a priori guarantee on the resulting accuracy.
## 3 Efficient loading of polynomials via DHWT and QSVT
In this section we present a method to load polynomials by combining a technique to encode the linear function into the amplitudes of a quantum state via the discrete Hadamard-Walsh transform (DHWT) with the quantum singular value transformation (QSVT) algorithm. The first step encodes the linear function with a shallow sequence of multi-controlled gates that loads its Hadamard-Walsh series expansion, followed by the inverse discrete Hadamard-Walsh transform. By truncating the series expansion, this approach enables a controllable approximation of the target state up to a certain error, \(\epsilon\), measured in terms of the infidelity with respect to the exact state, or via the deviation of each amplitude with respect to that of the ideal state, \(\delta_{j}\). The second step uses the block encoding of the linear function and applies the QSVT to implement a polynomial transformation, \(P(x)\), on the amplitudes of the quantum state corresponding to the identity function, which results in the polynomial encoding.
### The Discrete Hadamard-Walsh transform
**Definition 2**: _The discrete Hadamard-Walsh transform (DHWT) is a linear, orthogonal and symmetric operation that transforms discrete signals or sequence of sorted data into a new representation given by the Hadamard-Walsh Series_
\[HWT:(z_{0}^{(n)}\ldots z_{N-1}^{(n)})\rightarrow(x_{0}^{(n)}\ldots x_{N-1}^{(n )}), \tag{7}\]
\[x_{k}^{(n)}=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}z_{j}^{(n)}W_{k}(j) \tag{8}\]
_where \(j=\sum_{m=0}^{n-1}j_{m}2^{m}\), \(k=\sum_{m=0}^{n-1}k_{m}2^{m}\) with \(n=\log_{2}(N)\), \(j_{m},k_{m}\in\{0,1\}\) and \(W_{k}(j)=(-1)^{\sum_{m=0}^{n-1}j_{m}k_{m}}\) is the \(k-th\) Walsh function, where we have used the natural order._
We also define the binary norm of an integer as \(|k|_{b}=\ \sum_{m=0}^{n-1}k_{m}\). Note that when \(|k|_{b}=1\), it is equivalent to say that \(k\) is a power of 2. Additionally, when representing a quantum state \(|j\rangle\) in terms of a binary notation we will denote the state as \(|j_{n-1}\ \ldots\ j_{0}\rangle\), taking the order of the tensor product from right to left.
**Lemma 1**: _Let be the discrete sorted sequence \((0,1\ldots,2^{n}-1)\). Then, the coefficients of its Hadamard-Walsh series are given by_
\[x_{k}^{(n)}=\begin{cases}2^{n/2}(2^{n}-1)/2&\text{if }k=0\\ -2^{n/2}k/2&\text{if }|k|_{b}=1\\ 0&\text{otherwise}\end{cases} \tag{9}\]
Note that in general, the sparsity of the state encoding the Hadamard-Walsh series of a polynomial of degree \(d\) represented in \(n\) qubits, with \(d\leq n\), is \(s=\sum_{k=0}^{d}\binom{n}{k}\)[83]. Therefore, one might consider the techniques in Ref. [39] to implement the state with depth \(\mathcal{O}(\log(ns))\) and \(\mathcal{O}(ns\log(s))\) ancillary qubits.
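Lemma 1 can be checked numerically with a direct implementation of Definition 2; in the sketch below the transform matrix is built as \(H^{\otimes n}\), and the printed values reproduce Eq. (9) for the sequence \((0,1,\ldots,2^{n}-1)\). The function name and the small value of \(n\) are illustrative.

```python
import numpy as np

def hadamard_walsh(vec):
    """Discrete Hadamard-Walsh transform in natural order, normalized by 1/sqrt(N)."""
    n = int(np.log2(vec.size))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    W = np.array([[1.0]])
    for _ in range(n):
        W = np.kron(W, H)
    return W @ vec

n = 4
z = np.arange(2**n, dtype=float)                 # the sequence (0, 1, ..., 2^n - 1)
x = hadamard_walsh(z)
print(np.flatnonzero(np.abs(x) > 1e-9))          # only k = 0 and powers of two survive
print(x[0], 2**(n / 2) * (2**n - 1) / 2)         # k = 0 coefficient of Eq. (9)
print(x[4], -(2**(n / 2)) * 4 / 2)               # |k|_b = 1 coefficient, here k = 4
```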
### Exact loading of the linear function via DHWT
In the particular case of the linear function, the target state is \(|\Phi_{L}\rangle=\frac{1}{C}\sum_{j=0}^{2^{n}-1}j\,|j\rangle\), with \(C=\sqrt{(2^{n+1}-1)(2^{n}-1)2^{n}/6}\). Notice from Lemma 1 that the state that encodes the discrete Hadamard-Walsh transform of the coefficients of \(|\Phi_{L}\rangle\) reads
\[\left|\tilde{\Phi}_{L}\right\rangle=\frac{1}{C}\sum_{|k|_{b}\leq 1}x_{k}^{(n)} \left|k\right\rangle,\]
which is a \(|1\rangle\)-sparse quantum state, i.e. the bit strings of the basis states with a non-zero coefficient contain at most one \(|1\rangle\), namely \(\{|00\ \ldots 00\rangle\,,\ |10\ \ldots 0\rangle\,\ldots,|00\ \ldots 01\rangle\}\).
Due to its structure, the state \(\left|\tilde{\Phi}_{L}\right\rangle\) can be efficiently encoded into a quantum computer with
Figure 2: Discrete Hadamard-Walsh Transform for \(n=\ 3\) qubits in the natural order representation.
gate complexity \(\mathcal{O}(n)\) according to the circuit depicted in Fig. 3, where the angle of every multi-controlled rotation is given by \(\theta_{k}=\arcsin(\frac{2x_{2^{k}}^{(n)}}{\tilde{C}G})\), with \(k=0\ \ldots\ n-1\) and \(G=\prod_{i=k+1}^{n-1}\cos(\theta_{i})\) if \(k<n-1\) and \(G=1\) if \(k=n-1\). We denote the unitary corresponding to this circuit as \(U_{L}\). Once we have encoded the state \(\left|\tilde{\Phi}_{L}\right\rangle\) that represents the discrete Hadamard-Walsh series of our target state, we simply uncompute the Hadamard-Walsh transform to obtain \(\left|\Phi_{L}\right\rangle\). In terms of gates on a quantum computer, this operation is a parallel implementation of Hadamard gates on all the qubits.
### Approximated loading of linear polynomials via DHWT
In the previous section we demonstrated the efficient loading of linear functions, and now an inevitable question arises: can the Hadamard-Walsh series be deliberately truncated while maintaining control over the process? In other words, is it possible to strike a balance between the number of truncated terms and the resulting error, thereby achieving a quantum state that accurately approximates our intended target?
We proceed to illustrate how the loading of the linear function can be approximated by truncating the Hadamard-Walsh series. Let us assume that we have the DHWT for \(n\) qubits; then the non-zero coefficients are
\[\vec{h}_{n}^{(n)}=(x_{0}^{(n)}\ x_{1}^{(n)}\ x_{2}^{(n)}\ x_{4}^{(n)}\ \ldots\ x_{2^{n-1}}^{(n)}) \tag{10}\]
with \(\left|x_{1}\right|<\left|x_{2}\right|<\left|x_{4}\right|\ \ldots<\left|x_{2^{n-1}}\right|<\ \left|x_{0}\right|\). We keep \(x_{0}\) and the largest \(k_{0}\) values of the coefficients with \(\left|k\right|_{b}=1\), i.e. \(\vec{h}_{k_{0}}^{(n)}=(x_{0}^{(n)}\ 0\ 0\ldots 0\ x_{2^{n-k_{0}}}^{(n)}\ \ldots\ x_{2^{n-1}}^{(n)})\). We now construct the circuit to generate the state encoding these renormalized coefficients of the truncated series. Then, the fidelity of the resulting state, \(\left|\Phi_{L}^{k_{0}}\right\rangle\), with the exact state, \(\left|\Phi_{L}\right\rangle\), is given by
\[F=\frac{\frac{1}{4}\left(-1+2^{n}\right)^{2}+\frac{1}{3}2^{-2+2n}\left(1-4^{ -k_{0}}\right)}{\frac{1}{4}\left(-1+2^{n}\right)^{2}+\frac{1}{3}2^{-2+2n} \left(1-4^{-n}\right)}. \tag{11}\]
More details about this expression are given in Appendix A. Note that while the structure of the circuit is the same, see Fig. 3, the angles change their values to \(\theta_{k}^{k_{0}}=\arcsin(\frac{2x_{2^{k}}^{(n)}}{\tilde{C}^{k_{0}}G^{k_{0}}})\), with \(\tilde{C}^{k_{0}}\) the new normalization factor and \(G^{k_{0}}=\prod_{i=k+1}^{n-1}\cos(\theta_{i}^{k_{0}})\) if \(n-k_{0}\leq k<n-1\) and \(G^{k_{0}}=1\) if \(k=n-1\). We denote the unitary corresponding to the circuit that loads the truncated Hadamard-Walsh series as \(U_{L}^{k_{0}}\).
Now, assuming an infidelity \(\epsilon\), it is possible to obtain the expression
\[k_{0}=-\frac{1}{2}\log_{2}\bigg{[}2\bigg{(}\frac{3}{2^{2n+1}}-\frac{3}{2^{n}}+2-(1-\epsilon)\left(\frac{1}{2^{2n}}-\frac{3}{2^{n}}+2\right)\bigg{)}\bigg{]}, \tag{12}\]
which establishes the trade-off between infidelity and truncation. In the asymptotic limit \(n\rightarrow\infty\), we obtain \(k_{0}=-\frac{1}{2}\log_{2}\left[4\epsilon\right].\)
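The trade-off of Eqs. (11) and (12) can be reproduced numerically by keeping only \(x_{0}\) and the \(k_{0}\) largest \(|k|_{b}=1\) coefficients, renormalizing, and transforming back; the sketch below compares the resulting overlap with the closed-form fidelity of Eq. (11). The variable names are illustrative.

```python
import numpy as np

def truncated_fidelity(n, k0):
    """Fidelity between the exact ramp state and the k0-truncated Hadamard-Walsh loading."""
    N = 2**n
    exact = np.arange(N, dtype=float)
    exact /= np.linalg.norm(exact)
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    W = np.array([[1.0]])
    for _ in range(n):
        W = np.kron(W, H)
    coeffs = W @ exact                            # full Hadamard-Walsh series of |Phi_L>
    kept = np.zeros(N)
    kept[0] = coeffs[0]
    for m in range(n - k0, n):                    # the k0 largest |k|_b = 1 coefficients
        kept[2**m] = coeffs[2**m]
    kept /= np.linalg.norm(kept)
    approx = W @ kept                             # the transform is self-inverse
    return np.dot(exact, approx)**2

n, k0 = 6, 3
num = 0.25 * (2**n - 1)**2 + 2**(2 * n - 2) * (1 - 4.0**(-k0)) / 3
den = 0.25 * (2**n - 1)**2 + 2**(2 * n - 2) * (1 - 4.0**(-n)) / 3
print(truncated_fidelity(n, k0), num / den)       # both match Eq. (11)
```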
Additionally to this analysis, we also study the deviation of the amplitudes of the approximated state with respect to the ideal state. We write the exact state as \(|\Phi_{L}\rangle=\frac{1}{C}\sum_{j=0}^{2^{n}-1}j\left|j\right\rangle\), with \(C=\sqrt{(2^{n+1}-1)(2^{n}-1)2^{n}/6}\). We now focus on describing the state that comes from the truncation of the series, denoted as \(\vec{h}_{k_{0}}^{(n)}=(x_{0}^{(n)}\ 0\ \ldots\ 0\ x_{2^{n-k_{0}}}^{(n)}\ \ldots\ x_{2^{n-1}}^{(n)})\). We compare this with the state produced by the
Figure 3: Circuit implementation for loading the quantum state \(\left|\Phi_{L}\right\rangle\), denoted as \(U_{L}\) with complexity \(\mathcal{O}(n)\). The first part of the circuit loads the state encoding its Hadamard-Walsh series \(\left|\tilde{\Phi}_{L}\right\rangle\) and once it has been loaded, we apply the Hadamard-Walsh transform to achieve \(\left|\Phi_{L}\right\rangle\). If we consider only the first \(k_{0}\) rotations, then we get the approximated Hadamard-Walsh series \(\left|\tilde{\Phi}_{L}^{k_{0}}\right\rangle\) and the respectively approximated state \(\left|\Phi_{L}^{k_{0}}\right\rangle\). We denote this last circuit that loads the approximated state as \(U_{L}^{k_{0}}\). See the appendix in Ref. [84] for the details of the decomposition of multi-controlled gates.
series corresponding to performing the DHWT on \(k_{0}\) qubits, given by \(\vec{h}_{k_{0}}^{(k_{0})}=(x_{0}^{(k_{0})}\ x_{2^{0}}^{(k_{0})}\;\;\ldots\;x_{2^{k_{0}-1}}^{(k_{0})})\). From this comparison we obtain
\[\alpha:=x_{2^{n-k_{0}+j}}^{(n)}/x_{2^{j}}^{(k_{0})}=2^{3/2(n-k_{0})}\qquad\forall\;\;j=0,\;...,\;k_{0}-1.\]
Therefore we can write
\[\vec{h}_{k_{0}}^{(n)}=\alpha(x_{0}^{(k_{0})}\;0\;\ldots\;0\;x_{2^{0}}^{(k_{0})}\;\;\ldots\;x_{2^{k_{0}-1}}^{(k_{0})})+\beta(2_{0}^{k_{0}}\;0\;\ldots\;0) \tag{13}\]
with \(\beta=2^{n-k_{0}-1}\left(\frac{2^{n-1}}{2^{n/2}}-\frac{2^{n/2}(2^{k_{0}}-1)}{2^{k_{0}}}\right)\) the offset that will induce the maximum deviation in the amplitude. Thus, we can conclude that when truncating the series, we are loading the state \(\left|\Phi_{L}^{(k_{0})}\right\rangle=\frac{1}{C^{k_{0}}}\sum_{j=0}^{2^{k_{0}}-1}(\alpha j+\beta)\left|j\right\rangle\) on the most significant \(k_{0}\) qubits, with \(C^{k_{0}}\) the normalization factor. When adding the remaining \(n-k_{0}\) qubits, this leads to a state with a degeneracy \(2^{n-k_{0}}\) of each of these amplitudes (graphically, a step function)
\[\left|\Phi_{L}^{k_{0}}\right\rangle=\frac{1}{\tilde{C}^{k_{0}}}\sum_{j=0}^{2^{k_{0}}-1}\sum_{l=0}^{2^{n-k_{0}}-1}(\alpha j+\beta)\left|j\right\rangle\left|l\right\rangle\]
with
\[\tilde{C}^{k_{0}}=\bigg{(}2^{n-k_{0}}\big{(}\alpha^{2}(2^{k_{0}+1}-1)(2^{k_{0}}-1)2^{k_{0}}/6+\alpha\beta(2^{k_{0}}-1)2^{k_{0}}+2^{k_{0}}\beta^{2}\big{)}\bigg{)}^{1/2}. \tag{14}\]
Finally, merging both sums, we get
\[\left|\Phi_{L}^{k_{0}}\right\rangle=\frac{1}{\tilde{C}^{k_{0}}}\sum_{j=0}^{2^{n}-1}(\alpha\lfloor j/2^{n-k_{0}}\rfloor+\beta)\left|j\right\rangle. \tag{15}\]
From these expressions we can define the deviation of the amplitude of \(|j\rangle\) as
\[\delta_{j}=\left|\frac{1}{\tilde{C}^{k_{0}}}(\alpha\lfloor j/2^{n-k_{0}}\rfloor+\beta)-\frac{j}{C}\right|\leq\beta/\tilde{C}^{k_{0}} \tag{16}\]
From now on, we will assume that we have access to a state preparation oracle denoted as \(U_{L}\), along with its adjoint and their controlled variants. This allows us to load the linear function either in an exact or an approximated manner. Note that in the worst-case scenario, the controlled version can be achieved by controlling every gate of the oracle.
### Polynomial transformation of amplitudes via quantum singular value transformation
Once we have introduced the quantum circuit that loads the linear function (or its approximations) via the unitary \(U_{L}\) (\(U_{L}^{k_{0}}\)) given by the DHWT, or even using the circuit obtained from its (approximated) MPS, our method uses the quantum singular value transformation (QSVT) [57, 58, 59, 60, 61, 62] to achieve the polynomial transformation of the amplitudes. In this work we follow the procedure detailed in Ref. [63]. The remarkable insight of this work is to show how the QSVT polynomial transformation can be used to explicitly construct quantum circuits that apply a polynomial transformation to the amplitudes of some quantum state of interest whose encoding circuit is known. Note that this protocol is also able to implement complex polynomial transformations of complex amplitudes.
#### 3.4.1 Block encoding
The first step is the block encoding of the amplitudes.
**Definition 3**: _[_57, 58, 59, 60, 61, 62, 85_]_ _Let \(A\) be an n-qubit operator, \(\alpha,\varepsilon\in\mathrm{I\!R}^{+}\) and \(a\in\mathrm{I\!N}\). We say that the (a+n)-qubit unitary \(U_{A}\) is an \((\alpha,a,\varepsilon)\)-block encoding of \(A\) if_
\[\|A-\alpha(\langle 0|^{\otimes a}\otimes I)U_{A}(|0\rangle^{\otimes a}\otimes I) \|_{2}\leq\varepsilon. \tag{17}\]
According to Theorem 4 in Ref. [63], we can map the amplitudes of \(\left|\Phi_{L}\right\rangle\) to a Hermitian matrix on \(2n+1\) qubits, \(\tilde{A}=\sum_{j=0}^{2^{n}-2}j/C\left|\Phi_{j}\right\rangle\left\langle\Phi_{j}\right|+...\), such that \(\left\langle\Phi_{j}|\Phi_{j^{\prime}}\right\rangle=\delta_{jj^{\prime}}\) and the residual term is orthogonal to the first one. Moreover, we can obtain a \((1,1,0)\) block encoding \(U_{\tilde{A}}\) of \(\tilde{A}\) by using \(controlled-U\) and \(controlled-U^{\dagger}\) four times and \(\mathcal{O}(n)\) single- and two-qubit gates. The total amount of ancillary qubits required so far is \(n+1\).
An alternative \((1,1,0)\) block encoding \(U_{B}\), following the idea of Ref. [27], could be implemented by applying the unitary dilation technique [86] to \(B=\sum_{j=0}^{2^{n}-1}j/C\left|j\right\rangle\left\langle j\right|\), given that \(\|B\|\leq 1\). This operation would require an efficient simulation of the Hamiltonian \(H=\arccos(B)\)[87].
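As a small numerical illustration of this alternative route, the unitary dilation of the diagonal operator \(B=\sum_{j}j/C\,|j\rangle\langle j|\) can be written down explicitly and checked against Definition 3. The ancilla ordering (ancilla as the most significant qubit) is a convention assumed only for this sketch.

```python
import numpy as np

n = 3
C = np.sqrt((2**(n + 1) - 1) * (2**n - 1) * 2**n / 6)
d = np.arange(2**n) / C                     # eigenvalues j / C, all bounded by 1
B = np.diag(d)
S = np.diag(np.sqrt(1.0 - d**2))            # sqrt(I - B^2), well defined since ||B|| <= 1
U = np.block([[B, S], [S, -B]])             # unitary dilation of B

print(np.allclose(U.T @ U, np.eye(2**(n + 1))))   # U is unitary
print(np.allclose(U[:2**n, :2**n], B))            # (<0| x I) U (|0> x I) = B, a (1,1,0) block encoding
```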
#### 3.4.2 Polynomial Amplitude transformation
Once we have obtained the amplitude block encoding, we present how to implement polynomial transformations of complex amplitudes via the QSVT [60, 27, 63]. The construction and efficiency of the explicit quantum circuits for applying the polynomial transformation to the amplitudes of a quantum state rely on the efficiency of the implementation of the unitary corresponding to the loading of the linear function, \(U_{L}\), or its approximations (\(U_{L}^{k_{0}}\)). This particular choice of \(U_{L}\) corresponds to the objective of encoding the polynomial \(P(x)\), as the transformation prepares a state proportional to \(\sum_{j}P(j/C)|j\rangle\).
**Theorem 1**: _(Theorem 5 in Ref. [63]) Let \(P(x)\) be a polynomial with complex coefficients of degree \(d\) and \(\gamma:=\max_{x\in[-1,1]}|P(x)|\). Then the transformation_

\[\frac{1}{C}\sum_{j=0}^{2^{n}-1}j\,|j\rangle\rightarrow\frac{1}{C_{p}}\sum_{j=0}^{2^{n}-1}P(j/C)\,|j\rangle\]

_where \(C_{p}\) is a new normalization factor, can be achieved by using controlled-\(U_{L}\) and controlled-\(U_{L}^{\dagger}\)\(\mathcal{O}\left(d/\mathcal{F}\right)\) times and \(\mathcal{O}\left(nd/\mathcal{F}\right)\) one- and two-qubit gates, with \(\mathcal{F}=\sqrt{\frac{\sum_{j=0}^{2^{n}-1}|P(j/C)|^{2}}{\gamma^{2}2^{n}}}\) the L2-norm filling ratio when the polynomial has been normalized such that its maximum value in the interval \([-1,1]\) is 1. In addition to the ancillary qubit for the block encoding \(U_{\tilde{A}}\) of the \((2n+1)\)-qubit Hermitian matrix \(\tilde{A}\), another three extra ancillary qubits are needed, two for the polynomial transformation and one for amplitude amplification._
We depict an explicit description of the circuit for the whole process in Appendix B.
Note that the complexity of this protocol therefore depends on the polynomial encoding that we aim to achieve. In this sense, some particular transformations will lead to an efficient circuit, while others will introduce an exponential consumption of resources, depending on the filling ratio \(\mathcal{F}\).
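For concreteness, the filling ratio \(\mathcal{F}\) that governs these costs can be estimated classically as in the following sketch, where the maximum \(\gamma\) is approximated on a fine grid (an assumption of this illustration) and the coefficient convention is lowest degree first:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def filling_ratio(coeffs, n):
    """L2-norm filling ratio of Theorem 1 for a polynomial given by its coefficients."""
    N = 2**n
    C = np.sqrt((2**(n + 1) - 1) * (2**n - 1) * 2**n / 6)
    values = P.polyval(np.arange(N) / C, coeffs)                  # P(j / C)
    gamma = np.abs(P.polyval(np.linspace(-1, 1, 10001), coeffs)).max()
    return np.sqrt(np.sum(np.abs(values)**2) / (gamma**2 * N))

print(filling_ratio([0.0, 1.0], n=6))              # linear P(x) = x
print(filling_ratio([0.0, 0.0, 0.0, 1.0], n=6))    # cubic P(x) = x^3, smaller filling ratio
```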
Alternatively, if we had achieved the block encoding \(U_{B}\) according to [27], the resources needed to achieve the polynomial transformation would be \(d/2\) applications of \(U_{B}\) and \(U_{B}^{\dagger}\), \(2d\) CNOT gates, \(2d\) controlled \(R_{z}\) gates, two Hadamard gates and an extra ancillary qubit (1+2 in total). Additionally, the final step, given by the amplitude amplification, would introduce an additional qubit, and each round requires one call to each of \(U_{B}\) and \(U_{B}^{\dagger}\), a 4-qubit (anti)-controlled-Z gate, an \((n+4)\)-qubit (anti)-controlled-Z gate, \(n\) Hadamard gates and two \(R_{y}\) rotations. Note that the classical complexity of the preprocessing for calculating the rotation angles that lead to the polynomial transformation scales as \(\text{poly}(n,\log(1/\epsilon))\) in order to solve for angle sequences for a Laurent polynomial with degree \(n\) and error tolerance \(\epsilon\)[62, 90, 91].
Figure 4: Comparison of different methods for loading the linear function on \(n=6\) qubits. \((i)\) Illustrates the resulting state obtained with different encoding protocols, DHWT with \(k_{0}=3\) and MPS \(\chi=1\), in noisy and ideal scenarios. The choice of \(k_{0}=3\) is due to the fact that this case has a high fidelity and the minimum error measured in the \(L_{2}\) norm for the DHWT method in the presence of noise. This is shown in \((ii)\) and \((iii)\), where the fidelity and the error measured in the \(L_{2}\) norm for both the ideal and noisy DHWT methods are plotted with respect to \(k_{0}\). We have not considered noise for the MPS \(\chi=1\) case as its loading circuit is only a layer of single-qubit rotations, and its impact is marginal.
Finally, in either case, once the polynomial transformation has been achieved, we can analyze how the error coming from the approximation of the linear function affects the encoding of the resulting polynomial. For the amplitude of the state \(|j\rangle\) we consider the deviation in the amplitude of the polynomial

\[\Delta_{j}=\ |P(j/C)-\ P(j/C\pm\delta_{j})|\leq\delta_{j}\ \max_{x\in[0,1]}\lvert P^{\prime}(x)\rvert,\]
with \(\delta_{j}\) given by Eq. (16). If we now assume \(P(x)=\sum_{k=0}^{d}c_{k}x^{k}\), then
\[\Delta:=\max_{j}\{\Delta_{j}\}\leq\frac{d^{2}-d}{2}\frac{\beta}{\tilde{C}^{k_{ 0}}}\ \max_{k}\{|c_{k}|\}.\]
## 4 Numerical Results
Having established the theoretical framework, in this section we present numerical simulations comparing the methods outlined in this paper. The primary objective is to provide empirical support for the analytical findings mentioned earlier.
### Linear function
We begin our analysis by evaluating the performance of the exact and approximated loading of the linear function and its robustness, in terms of fidelity, against noise.
For the noisy simulations the methodology is as follows. The quantum circuit is transpiled to a native set of gates, including _CNOT, Id, Rz \((\theta)\), X_, and _Sx_. We consider various quantum noise channels, namely, bit-flip (Pbf), amplitude damping (T1), dephasing (T2), gate errors (rD, CNOT error), and measurement error (pmeas). The specific noise parameters used in the simulations are summarized in Tab. 1.
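To make the noise channels of Tab. 1 concrete, here is a minimal numpy sketch applying bit-flip and thermal-relaxation (T1/T2) Kraus maps to a single-qubit density matrix; the composition into a full transpiled-circuit noise model (as used in our simulations) is only schematic, and the gate-time and parameter values simply mirror Tab. 1.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """rho -> sum_k K_k rho K_k^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def bit_flip(p):
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]

def thermal_relaxation(t_gate, T1, T2):
    """Amplitude damping (T1) followed by pure dephasing with 1/Tphi = 1/T2 - 1/(2 T1)."""
    g_amp = 1.0 - np.exp(-t_gate / T1)
    K_amp = [np.diag([1.0, np.sqrt(1 - g_amp)]).astype(complex),
             np.array([[0, np.sqrt(g_amp)], [0, 0]], dtype=complex)]
    g_phi = 1.0 - np.exp(-t_gate * (1.0 / T2 - 0.5 / T1))
    K_phi = [np.sqrt(1 - g_phi / 2) * np.eye(2, dtype=complex),
             np.sqrt(g_phi / 2) * np.diag([1.0, -1.0]).astype(complex)]
    return K_amp, K_phi

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)             # |+><+|
rho = apply_channel(rho, bit_flip(2.457e-4))                        # Pbf from Tab. 1
for ops in thermal_relaxation(t_gate=0.035, T1=214.84, T2=214.84):  # times in microseconds
    rho = apply_channel(rho, ops)
print(np.trace(rho).real, rho[0, 1].real)                           # trace preserved, coherence reduced
```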
The analysis of loading the linear function is depicted in Fig. 4, where we have considered \(n=6\) qubits. In Fig. 4 (i) we have depicted a comparison of the ideal case \(\chi=1\) for the MPS technique, the noisy and ideal cases with \(k_{0}=3\) for the DHWT, and the exact encoding. In Fig. 4 (ii) and (iii), we can appreciate a trade-off between the truncation error (fidelity or \(L_{2}\) norm) and the experimental error for different values of \(k_{0}\) in the DHWT technique. We observe that the highest fidelity and the smallest \(L_{2}\) norm error are achieved at truncation levels \(k_{0}=2\) and \(k_{0}=3\). In the comparison, we also include the ideal MPS with \(\chi=1\).
### Example of polynomial function
We present a second analysis where we consider the encoding of a polynomial function. In Fig. 5, we present the loading of the polynomial function
\[P(x)=\frac{1}{C_{p}}(x-1/C)(x-20/C)(x-50/C)(x-60/C)\]
by using \(n=6\) qubits and the two methods studied in this paper. Our approach involves two distinct loading strategies: first loading the linear function and then applying the polynomial transformation with the QSVT method, which itself introduces no additional error (Figs. 5 (\(i\)) and (\(ii\))), and via the
Figure 5: Loading of the polynomial function \(P(x)=\frac{1}{C_{p}}(x-1/C)(x-20/C)(x-50/C)(x-60/C)\) for \(n=6\) qubits using different methods. For this particular example the filling ratio is \(\mathcal{F}=0.6184\). The worst and best implementations in which the linear function is loaded first, followed by the application of the QSVT, are displayed in \((i)\) and \((iii)\), respectively. In \((ii)\), results for the loading using MPS are shown, achieving perfect loading with \(\chi=4\), although theory predicts \(\chi=5\) as the upper bound. The \(L_{2}\) norm and fidelities for each case are displayed in Tab. 2.
matrix product state (MPS) representation of the polynomial itself (Fig. 5\((iii)\)).
In order to load the linear function in the two-step encoding, we utilize either the DHWT method introduced in this work for different truncation values \(k_{0}\), or the MPS approach with \(\chi=1\). In Tab. 2 we show the \(L_{2}\) norm and fidelity of the final state with respect to the exact state for each methodology. We can appreciate that for \(k_{0}\geq 4\) the fidelity achieved by the combined protocol is better than that of the direct polynomial MPS technique with \(\chi=2\).
Additionally, the fidelities with respect to the exact state resulting from the approximated loading of the linear function are also presented for the methods utilizing the QSVT to perform the polynomial transformation. Remarkably, the loading of the linear function with the MPS using \(\chi=1\) achieves a high fidelity of \(F=0.9885\). This suggests that the quantum state of the linear function is nearly a product state. To gain deeper insights, we performed an analysis of the single-qubit rotations that generate this approximated state. We fitted the angles of the rotations to an analytical expression and, leveraging this fitting information, trained a variational circuit aimed at preparing a product state that optimizes the fidelity with respect to the exact linear function. For additional details, please refer to Appendix C.
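A minimal sketch of this product-state (\(\chi=1\)) fit, assuming numpy and scipy and restricting each qubit to a real \(R_y\)-type rotation (an assumption motivated by the linear state being real and non-negative); the optimized fidelity should come out close to the value quoted above.

```python
import numpy as np
from scipy.optimize import minimize

n = 6
target = np.arange(2 ** n, dtype=float)
target /= np.linalg.norm(target)                  # exact linear-function state (1/C) sum_j j |j>

def product_state(thetas):
    """Product state with each qubit in cos(t)|0> + sin(t)|1>, most significant qubit first."""
    psi = np.array([1.0])
    for t in thetas:
        psi = np.kron(psi, np.array([np.cos(t), np.sin(t)]))
    return psi

def infidelity(thetas):
    return 1.0 - np.dot(product_state(thetas), target) ** 2

res = minimize(infidelity, x0=np.full(n, np.pi / 4), method="Nelder-Mead")
print("best product-state fidelity:", 1.0 - res.fun)
```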
## 5 Comparison with other methods
The first statement we would like to highlight is that we have put a large part of our efforts into achieving an approximate loading of functions as simple as the linear function, by introducing a controllable error that reduces the depth of an already efficient circuit. To the best of our knowledge, there is no previous result in the literature that can perform the same task with a comparable performance. That being said, we proceed to compare our method with similar results.
QSVT enables the application of polynomial transformations to the amplitudes of a quantum state. However, when utilizing this technique to load a polynomial function encoded in the quantum state's amplitudes, the efficient loading of the linear function is crucial. Previously in the literature, the authors of Ref. [27] explored the possibility of applying the QSVT to the block encoding of the sine series. Our method possesses the same advantages of using QSVT that they mention: 'we avoid discretizing the values the function can take, providing instead a continuous approximation to the function. Our method is straightforward and versatile, as the same circuit template can be used for a wide range of functions.' In contrast, our method avoids the error that propagates into the final loaded state due to the polynomial approximation of the arcsin function, by efficiently loading the block encoding of the linear function and subsequently applying the polynomial transformations that load the desired polynomials into the amplitudes. In this context, our approach focuses on implementing the block encoding of the linear function rather than the sinusoidal function. To achieve this, we have proposed a method based on the Hadamard-Walsh transform, which introduces a controllable error during the loading of the linear function. The cost of this replacement in the block encoding can be mainly expressed
\begin{table}
\begin{tabular}{c||c||c} \hline \hline
**Parameter** & **Description** & **Value** \\ \hline \hline
SQG time & Single-qubit gate time (ns) & 35 \\
CX time & CX gate time (ns) & 540 \\
rD & Deviation ratio for the single-qubit gates & 2.457E-04 \\
Pbf & Bit-flip error during the rz gate & 2.457E-04 \\
CNOT error & Deviation ratio for the CX gate & 8.328E-03 \\
pmeas & Readout error & 2.23E-01 \\
pth & Thermal population of the ground state & 0.01 \\
T1 & Decoherence time (us) & 214.84 \\
T2 & Dephasing time (us) & 214.84 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Noise parameters, their description and values. We have estimated the numerical values from the calibration data provided for the IBM device 'ibm_jakarta'.
in terms of adding \(n\) extra ancillary qubits to the circuit, a fact that leads to the inclusion of more controlled operations. In the general case of loading an arbitrary function, the trade-off between the additional complexity of our protocol and the approximation of the arcsin should be taken into account. For instance, [27] is more efficient for loading sinusoidal polynomials.
In addition to QSVT-based methods, there have been approaches utilizing matrix product states (MPS) for the loading of smooth, differentiable, real (SDR) functions into quantum state amplitudes [45, 46, 48, 88]. However, these methods lack rigorous error bounds. Ref. [46] shows that for such functions, favorable outcomes can be obtained by employing a fixed bond dimension of two, attributed to the logarithmic scaling of entanglement entropy in these cases [47]. In this paper, we have explored the resource requirements of this approach and compared it to our linear function + QSVT approach, especially in the case of the loading of the linear function, for which we have analytical expressions for fidelity and error propagation.
Considering a different perspective, the use of the Hadamard-Walsh series for amplitude encoding was recently proposed [43]. This proposal leverages the fact that for functions whose derivative is bounded, the error of their discrete Hadamard-Walsh series is exponentially suppressed with the index of the truncation [89]. The authors use the Hamiltonian simulation techniques depicted in [83] to achieve a Hadamard-Walsh approximated (\(\epsilon_{1}\)) simulation of the unitary \(U=e^{-i\hat{f}\epsilon_{0}}\), with \(\hat{f}=\sum_{x}f(x)\ket{x}\bra{x}\) the operator corresponding to the target function to be encoded. Finally they use an ancillary qubit to generate the operator \(-i(I-e^{-i\hat{f}\epsilon_{0}})\) acting on the state \(\sum_{x}\ket{x}\ket{1}\), which approximates the target state at first order in \(\epsilon_{0}\) and introduces a protocol conditioned on the probability of measuring the ancillary qubit in the state \(\ket{1}\). Therefore the total protocol introduces two sources of error, \(\epsilon_{1}\) corresponding to the Hadamard-Walsh series approximation and \(\epsilon_{0}\) corresponding to the Taylor expansion. This technique introduces an error for loading functions even for those cases in which \(\epsilon_{1}=0\), as in the linear case. By contrast, our subroutine that uses the DHWT leverages the sparsity of the series corresponding to polynomial functions to efficiently generate quantum circuits that encode the states directly into the amplitudes. Additionally, this proposal based on the Hadamard-Walsh transform is not as efficient as our methodology for the application of loading the linear function for derivative pricing.
Finally, we also remark that our method is not a priori limited by the bound of the derivative of the target function, as is the case for the MPS [46] or the Hamiltonian simulation based on DHWT [43].
\begin{table}
\begin{tabular}{c||c||c||c} \hline \hline
**Method** & **Fidelity linear** & \(L_{2}\) **norm** & **Fidelity poly** \\ \hline \hline DHWT (\(k_{0}=0\)) + QSVT & 0.7559 & 1.2867 & 0.0297 \\ DHWT (\(k_{0}=1\)) + QSVT & 0.9389 & 1.2085 & 0.0728 \\ Lin MPS (\(\chi=1\)) + QSVT & 0.9885 & 0.6497 & 0.6224 \\ Direct Pol MPS (\(\chi=1\)) & - & 0.6012 & 0.6712 \\ DHWT (\(k_{0}=2\)) + QSVT & 0.9848 & 0.5949 & 0.6774 \\ DHWT (\(k_{0}=3\)) + QSVT & 0.9962 & 0.3815 & 0.8598 \\ Direct MPS (\(\chi=2\)) & - & 0.2018 & 0.9597 \\ DHWT (\(k_{0}=4\)) + QSVT & 0.9991 & 0.1941 & 0.9627 \\ DHWT (\(k_{0}=5\)) + QSVT & 0.9998 & 0.0876 & 0.9923 \\ Direct Pol MPS (\(\chi=3\)) & - & 0.0391 & 0.9985 \\ DHWT (\(k_{0}=6\)) + QSVT & 1 & 0 & 1 \\ Direct Pol MPS (\(\chi=4\)) & - & 0 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error in the \(L_{2}\) norm (third column) and fidelities (fourth column) for different loading methods of \(P(x)=\frac{1}{C_{p}}(x-1/C)(x-20/C)(x-50/C)(x-60/C)\). Methods are arranged in ascending order based on the fidelity of the polynomial encoding (fourth column). We also show the fidelity of loading the linear function for the two-step methods (second column).
## 6 Conclusions
In this article, we have considered the problem of loading real polynomials into a quantum computer, with a particular focus on the encoding of the linear function. We have presented and reviewed two methodologies based on different approaches. The first one is based on matrix product states and, even though it offers very good results for some particular cases when achieving an approximated encoding with \(\chi=2\)[46], there is no theoretical control over the error the method incurs. The second algorithm introduced in this paper combines the discrete Hadamard-Walsh transform (DHWT) to achieve the block encoding of the amplitudes with the quantum singular value transformation (QSVT) to implement the polynomial transformation.
In the technique based on the DHWT, the coefficients of the Hadamard-Walsh series of a given function are loaded into a quantum state and the inverse discrete Hadamard-Walsh transform is applied to achieve the amplitude encoding of the target function. This idea constitutes a novel and promising approach for functions whose Hadamard-Walsh series can be efficiently encoded, for instance when it is sparse [39] or efficiently truncated [83].
We would like to remark that even though our work has been focused on encoding real polynomials, it could be easily extended to load complex polynomials, i.e. \(P~{}:~{}\mathbb{C}\rightarrow\mathbb{C}\), multivariate polynomials, or even non-linear functions approximated with polynomials [63, 27]. As a future scope we will consider the feasibility of using our DHWT-based method to load highly discontinuous square-wave-like functions.
## Acknowledgements
We thank N. Guo and A. Rattew for the useful discussions regarding the amplitude transformations via the QSVT. We thank M. Cea-Fernandez for the discussions on the matrix product states. The authors acknowledge financial support from the project grant PID2021-125823NA-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe" and "ERDF Invest in your Future", Basque Government through Grant No. IT1470-22, and by the Spanish CDTI through Plan complementario Comunicacion cuantica (EXP. 2022/01341)(A/20220551), as well as from OpenSuperQ+100 (101113946) of the EU Flagship on Quantum Technologies, Spanish Ramon y Cajal Grant RYC-2020-030503-I, UPV/EHU PhD Grant PIF 20/276, as well as from the EU FET-Open project EPIQUS (899368).
|
2302.13418 | Hybrid completely positive Markovian quantum-classical dynamics | A concise and self-contained derivation of hybrid quantum-classical dynamics
is given in terms of Markovian master equations. Many previously known results
are re-derived, revised, some of them completed or corrected. Using as simple
method as possible, our goal is a brief introduction to state-of-the-art of
hybrid dynamics, with a limited discussion of the implications for foundations,
and without discussion of further relevance in quantum-gravity, or chemistry,
numeric methods, etc. Hybrid dynamics is defined as special case of composite
quantum dynamics where the observables of one of the two subsystems are
restricted for the commuting set of diagonal operators in a fixed basis. With
this restriction, the derivation of hybrid dynamical equations is clear
conceptually and simple technically. Jump and diffusive dynamics follow in the
form of hybrid master equations. Their stochastic interpretation (called
unravellings) is derived. We discuss gauge-type ambiguities, problems of
uniqueness, and covariance of the diffusive master equation. Also conditions of
minimum noise and of monitoring the quantum trajectory are derived. We conclude
that hybrid formalism is equivalent with standard Markovian theory of
time-continuous quantum measurement (monitoring) on one hand, and is a
motivating alternative formalism on the other hand. | Lajos Diósi | 2023-02-26T22:10:38Z | http://arxiv.org/abs/2302.13418v2 | # Hybrid completely positive Markovian quantum-classical dynamics
###### Abstract
A concise and self-contained derivation of hybrid quantum-classical dynamics is given in terms of Markovian master equations. Many previously known results are re-derived and revised, some of them completed or corrected. Using as simple a method as possible, our goal is a brief introduction to the state-of-the-art of hybrid dynamics, with a limited discussion of the implications for foundations, and without discussion of further relevance in the measurement problem, quantum gravity, chemistry, numerical methods, etc. Hybrid dynamics is defined as a special case of composite quantum dynamics where the observables of one of the two subsystems are restricted to the commuting set of diagonal operators in a fixed basis. With this restriction, the derivation of hybrid dynamical equations is clear conceptually and simple technically. Jump and diffusive dynamics follow in the form of hybrid master equations. Their stochastic interpretation (called unravellings) is derived. We discuss gauge-type ambiguities, problems of uniqueness, and covariance of the diffusive master equation. Also conditions of minimum noise and of monitoring the quantum trajectory are derived. We conclude that the hybrid formalism is equivalent to the standard Markovian theory of time-continuous quantum measurement (monitoring) on one hand, and is a motivating alternative formalism on the other hand.
## I Introduction
In the real world, quantum and classical phenomena coexist and evolve according to their own rules. They do interact, of course, and we know very well the mathematical models of some particular interactions. The action of a classical system on a quantum one is modeled by making the Hamiltonian \(\hat{H}\) depend on the classical system's variables \(x\). The _backaction_, i.e. the quantum system's impact on a classical one, is also known well from quantum measurement: the quantum system rules the pointer position \(x\) of a classical meter. This backaction is extremely specific. The central problem is to understand and to describe mathematically more general backaction when quantum and classical systems are interacting.
The central object of composite quantum-classical systems is the hybrid state, represented by the hybrid density [1]:
\[\hat{\rho}(x)=\hat{\rho}_{x}\rho(x), \tag{1}\]
where \(\rho(x)\) is the normalized probability density of the classical variables \(x\) and \(\hat{\rho}_{x}\) is the density operator of the quantum system conditioned on the value \(x\) of the classical variable.
The efforts and results concern the possible evolution equations for \(d\hat{\rho}(x)/dt\). The bottleneck is the backaction, although its elementary pattern is known from all quantum theory textbooks. The von Neumann measurement of a complete orthogonal set \(\{\hat{P}_{x}\}\) of Hermitian projectors imposes the following change of the hybrid state (1):
\[\Big{\{}\hat{\rho}\rightarrow\frac{\hat{P}_{x}\hat{\rho}\hat{P}_{x}}{\text{tr }(\hat{P}_{x}\hat{\rho})}\text{ with prob }\rho(x)=\text{tr}(\hat{P}_{x}\hat{\rho})\Big{\}} \Longleftrightarrow\Big{\{}\hat{\rho}(x)\longrightarrow\hat{P}_{x}\hat{\rho} _{x}\hat{P}_{x}\Big{\}}, \tag{2}\]
As we see, the textbook stochastic jumps are _equivalent_ to a single deterministic map of the hybrid state. If we construct a stochastic dynamics underlying the process on the l.h.s., we have an equivalent deterministic hybrid dynamics for \(\hat{\rho}(x)\). Von Neumann's statistical interpretation of quantum states (also called the Born rule) follows from the statistical interpretation of the hybrid state.
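The equivalence (2) is easy to verify numerically; below is a minimal numpy illustration for a single qubit measured in the computational basis, with a two-valued classical pointer (the state and projectors are arbitrary toy choices).

```python
import numpy as np

rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)     # pre-measurement quantum state
P = [np.diag([1.0, 0.0]).astype(complex),                   # projectors P_x, x = 0, 1
     np.diag([0.0, 1.0]).astype(complex)]

# l.h.s. of (2): stochastic jumps with Born probabilities and conditional post-states
probs = [np.trace(Px @ rho).real for Px in P]
post = [Px @ rho @ Px / p for Px, p in zip(P, probs)]

# r.h.s. of (2): one deterministic map onto the hybrid state rho_hat(x) = P_x rho P_x
hybrid = [Px @ rho @ Px for Px in P]

for x in range(2):
    assert np.isclose(np.trace(hybrid[x]).real, probs[x])   # trace = outcome probability
    assert np.allclose(hybrid[x] / probs[x], post[x])       # normalized block = conditional state
```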
For a long time, the dynamics of the von Neumann measurement, l.h.s. of (2), was missing from the textbooks, as irrelevant. Lately, driven by very different motivations, it was constructed in the continuous limit of discrete von Neumann measurements. The winning formalism has been Markovian stochastic equations, not the hybrid formalism. Refs. [2; 3; 4] were milestones, reviews are in [5; 6; 7; 8; 9].
The question is whether hybrid dynamics, developed on its own, yields more than time-continuous measurement in the hybrid formalism, i.e., an extension of the r.h.s. of (2). Before answering, we introduce the calculus of hybrid dynamics.
A large body of investigations of quite different concepts, motivations, methods, levels of mathematical rigor, etc., has emerged over forty years, from the earliest attempts to couple quantum and canonical classical systems [1; 10; 11] through the present author's [12; 13; 14; 15; 16; 17; 18; 19] and others' results [20; 21; 22; 23; 24; 25; 26; 27; 28] directly related to the present work, and many other contributions, e.g., in refs. [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41].
Here we use a unique approach, elementary derivations, and an economic presentation, in order to give a complete but concise account of Markovian hybrid quantum-classical dynamics modeled by hybrid master equations (HMEs). Sec. II derives the canonical jump HME and its stochastic interpretation, called _unraveling_. From this HME, sec. III derives the limit of the continuous, diffusive HME which, in sec. IV, is put into the general covariant form. Conditions of minimum irreversibility, covariant and non-covariant unravelings, and monitoring are contained in the subsections. Comparison with and comments on previous results are concentrated in sec. V. At last, sec. VI contains the conclusions of the author.
## II Hybrid master equation
Quantum theory is universal in our approach and classical systems are a special case of quantum ones. Accordingly, consider the Hilbert space \(\mathcal{H}_{QC}=\mathcal{H}_{Q}\otimes\mathcal{H}_{C}\) where \(\mathcal{H}_{C}\) will host our classical system in a fixed basis \(\{|x\rangle\}\). The composite quantum state \(\widehat{\rho}\) and the composite Hamiltonian \(\widehat{H}\) (as well as the composite observables) are diagonal in the fixed basis:
\[\widehat{\rho} = \sum_{x}\hat{\rho}(x)\otimes|x\rangle\langle x|, \tag{3}\] \[\widehat{H} = \sum_{x}\hat{H}(x)\otimes|x\rangle\langle x|. \tag{4}\]
The block-diagonal objects \(\hat{\rho}(x),\hat{H}(x)\) will be called respectively the hybrid state and hybrid Hamiltonian. We are looking for a Markovian evolution equation for \(\widehat{\rho}\), which is completely positive (CP) and preserves the block-diagonal form of \(\widehat{\rho}\).
We start from the CP map \(\Lambda\) of the composite state \(\widehat{\rho}\) and request that it preserve block-diagonality. Conveniently, we can write the general form of the map for the hybrid representation \(\hat{\rho}(x)\) of \(\widehat{\rho}\). With Einstein convention of summation, it reads
\[\hat{\rho}_{\Lambda}(x)=\sum_{y}D_{\beta\alpha}(x,y)\hat{L}_{\alpha}\hat{\rho }(y)\hat{L}_{\beta}^{\dagger}, \tag{5}\]
where \(\{\hat{L}_{\alpha}\}\) is an operator basis in \(\mathcal{H}_{Q}\). The Hermitian matrix \(D\) satisfies
\[\sum_{x}D_{\beta\alpha}(x,y)\hat{L}_{\beta}^{\dagger}\hat{L}_{\alpha}=\hat{I} \tag{6}\]
for all \(x\). If we diagonalize \(D\) at each point \((x,y)\), the form of the map becomes
\[\hat{\rho}_{\Lambda}(x)=\sum_{y,\alpha}\lambda_{\alpha}(x,y)\hat{L}_{\alpha}( x,y)\hat{\rho}(y)\hat{L}_{\alpha}^{\dagger}(x,y), \tag{7}\]
where \(\{\hat{L}_{\alpha}(x,y)\}\) is an operator basis depending on \((x,y)\). Hence all \(\lambda_{\alpha}(x,y)\geq 0\) for all \(\alpha\) and \((x,y)\), otherwise the map \(\Lambda\) cannot be CP. Now we can absorb each factor \(\lambda_{\alpha}\) into \(\hat{L}_{\alpha}\) and return to the full operator formalism. We can write the CP map preserving the block-diagonal form of \(\widehat{\rho}\) in the standard quantum mechanical form
\[\widehat{\rho}_{\Lambda}=\widehat{L}_{\alpha}\widehat{\rho}\widehat{L}_{ \alpha}^{\dagger}, \tag{8}\]
if we define the following Kraus-operators:
\[\widehat{L}_{\alpha}=\sum_{x}\sum_{y}\hat{L}_{\alpha}(x,y)\otimes|x\rangle\langle y|, \tag{9}\]
where \(\{\hat{L}_{\alpha}(x,y)\}\) is operator basis for each pair \((x,y)\). If \(\Lambda\) is a semigroup and generates time evolution of \(\widehat{\rho}\) then, according to the GKLS theorem [42; 43], the evolution is governed by the quantum master equation of the following from:
\[\frac{d\widehat{\rho}}{dt}=-i[\widehat{H},\widehat{\rho}]+\widehat{L}_{\alpha} \widehat{\rho}\widehat{L}_{\alpha}^{\dagger}-\mathbb{H}\widehat{L}_{\alpha}^{ \dagger}\widehat{L}_{\alpha}\widehat{\rho}, \tag{10}\]
where the \(\widehat{L}_{\alpha}\)'s are called the Lindblad generators. As expected, this quantum master equation preserves block-diagonality by construction, hence one obtains a closed equation in the hybrid formalism [19; 20; 21; 22; 25; 27; 28]:
\[\frac{d\hat{\rho}(x)}{dt}=-i[\hat{H}(x),\hat{\rho}(x)]+\sum_{y}\left(\hat{L}_{ \alpha}(x,y)\hat{\rho}(y)\hat{L}_{\alpha}^{\dagger}(x,y)-\mathbb{H}\hat{L}_{ \alpha}^{\dagger}(y,x)\hat{L}_{\alpha}(y,x)\hat{\rho}(x)\right), \tag{11}\]
where \(\mathbb{H}\) means the Hermitian part of the subsequent expression. This is the canonical form of the CP Markovian HME where the classical system is discrete. The hybrid generators \(\hat{L}_{\alpha}(x,y)\) form an operator basis at each \((x,y)\); their dependence on \((x,y)\) can otherwise be arbitrary. The backaction is encapsulated by the generators \(\hat{L}_{\alpha}(x,y)\). Note the basic lesson: when the quantum-classical interaction is a mutual influence between the subsystems, the hybrid system can never be reversible; the evolution is governed by hybrid master (kinetic) equations. Quantitative lower bounds on their irreversibility (noise) will be derived in sec. III.
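As a simple illustration, the right-hand side of the jump HME (11) is sketched below for a toy qubit coupled to a two-valued classical variable (Hamiltonians and generators are arbitrary illustrative choices, with \(\hat L_\alpha(x,x)=0\)); a crude Euler integration preserves the total probability \(\sum_x{\sf tr}\hat\rho(x)\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = {0: 0.5 * sx, 1: -0.5 * sx}                                # hybrid Hamiltonian H(x)
L = {(0, 1): 0.3 * np.diag([1.0, -1.0]).astype(complex),       # single generator L(x, y),
     (1, 0): 0.3 * np.array([[0, 1], [0, 0]], dtype=complex)}  # taken zero for x == y

def hme_rhs(rho):
    """Right-hand side of the jump HME (11) for the hybrid state rho = {x: rho(x)}."""
    out = {}
    for x in (0, 1):
        d = -1j * (H[x] @ rho[x] - rho[x] @ H[x])
        for y in (0, 1):
            if (x, y) in L:                                    # gain term
                d += L[(x, y)] @ rho[y] @ L[(x, y)].conj().T
            if (y, x) in L:                                    # loss term (Hermitian part)
                A = L[(y, x)].conj().T @ L[(y, x)] @ rho[x]
                d -= 0.5 * (A + A.conj().T)
        out[x] = d
    return out

rho = {0: 0.5 * np.eye(2, dtype=complex), 1: np.zeros((2, 2), dtype=complex)}
dt = 1e-3
for _ in range(5000):                                          # crude Euler integration
    drho = hme_rhs(rho)
    rho = {x: rho[x] + dt * drho[x] for x in (0, 1)}
print(sum(np.trace(rho[x]).real for x in (0, 1)))              # stays 1
```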
The choice of the hybrid Hamiltonian and the hybrid generators is unique up to arbitrary complex functions \(\ell_{\alpha}(x)\), since the HME (11) is invariant under the following shifts:
\[\hat{L}_{\alpha}(x,y) \rightarrow \hat{L}_{\alpha}(x,y)+\ell_{\alpha}(x)\delta(x,y),\] \[\hat{H}(x) \rightarrow \hat{H}(x)-\frac{i}{2}\big{(}\ell_{\alpha}^{*}(x)\hat{L}_{\alpha}(x,x)-\mathsf{h.c.}\big{)}, \tag{12}\]
where \(\delta(x,y)\) will be our alternative notation for the usual discrete delta-function. Special cases may allow much larger groups of such 'gauge' freedom when the dependence of the generators on \((x,y)\) is degenerate, like in secs. III, IV of continuous HMEs.
### Stochastic unravelling
Master equations are deterministic. Like any quantum or classical master equation, our HME (11) also possesses a statistical interpretation. The solution \(\rho(x)\) of a classical master (kinetic) equation can be decomposed into a unique stochastic process \(x_{t}\) of random trajectories. Quantum master equations can be decomposed into stochastic processes \(\psi_{t}\) of random quantum trajectories; the decomposition is called unraveling. Quantum unravellings are not unique. Here we define the hybrid of the classical and quantum stochastic decompositions (unravellings).
Let us formulate the mathematical condition of hybrid unraveling. If \(\hat{\rho}(z,t)\) is the solution of the HME (11) then it is the stochastic mean \(\mathbb{M}\) over the contribution of hybrid trajectories \((x_{t},\psi_{t})\):
\[\hat{\rho}(z,t)=\mathbb{M}\psi_{t}\psi_{t}^{\dagger}\delta(z,x_{t}), \tag{13}\]
where the \(x_{t}\) and \(\psi_{t}\) are correlated jump stochastic processes. Consider first the unique unraveling of the distribution \(\rho(x)=\mathsf{tr}\hat{\rho}(x)\) of the classical subsystem. The trace of the HME (11) yields the following classical master (kinetic) equation:
\[\frac{d\rho(x)}{dt}=\sum_{\alpha}\sum_{y}\Bigl{(}T_{\alpha}(x,y)\rho(y)-T_{ \alpha}(y,x)\rho(x)\Bigr{)}, \tag{14}\]
with the \(\psi\)-dependent transition (jump) rate from \(x\) to \(y\) for each \(\alpha\):
\[T_{\alpha}(y,x)=\mathsf{tr}\left(\hat{L}_{\alpha}(y,x)\hat{\rho}_{x}\hat{L}_{ \alpha}^{\dagger}(y,x)\right)=\bigl{\langle}\hat{L}_{\alpha}^{\dagger}(y,x) \hat{L}_{\alpha}(y,x)\bigr{\rangle}_{x}. \tag{15}\]
We introduce the total transition rate from \(x\):
\[T(x)=\sum_{\alpha}\sum_{y}T_{\alpha}(y,x). \tag{16}\]
To unravel the quantum subsystem, we need the anti-Hermitian (frictional) hybrid Hamiltonian defined by
\[-i\hat{H}_{\rm fr}(x)=-\frac{1}{2}\sum_{y}\hat{L}_{\alpha}^{\dagger}(y,x)\hat{L}_ {\alpha}(y,x). \tag{17}\]
The corresponding frictional Schrodinger equation is
\[\frac{d\psi}{dt}=\left(-i\hat{H}(x)-i\hat{H}_{\rm fr}(x)+\frac{1}{2}T(x)\right)\psi, \tag{18}\]
where \(\frac{1}{2}T(x)\) restores the norm \(\psi^{\dagger}\psi\) since \(\frac{1}{2}T(x)=-\mathbb{I}(\hat{H}_{\rm fr})\), cf. eqs. (15-17); symbol \(\mathbb{I}\) means the imaginary part.
The hybrid unraveling consists of the following two correlated jump stochastic (piecewise deterministic) processes, one for \(x_{t}\), another for \(\psi_{t}\) (cf. [28]):
\[x = {\rm const.}, \tag{19}\] \[\frac{d\psi}{dt} = \left(-i\hat{H}(x)-i\hat{H}_{\rm fr}(x)+\frac{1}{2}T(x)\right)\psi, \tag{20}\] \[{\rm jumps}\left\{\begin{array}{lcl}x&\to&x^{\prime}\;,\\ \psi&\to&\frac{\hat{L}_{\alpha}(x^{\prime},x)\psi}{\sqrt{T_{\alpha}(x^{\prime},x)}}\;\;{\rm at\ rate}\;T_{\alpha}(x^{\prime},x).\end{array}\right. \tag{21}\]
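For illustration, a minimal Monte Carlo sketch of one hybrid trajectory of the piecewise-deterministic process (19)-(21), first order in \(dt\), reusing the toy qubit/two-state system of the previous snippet (a single generator per transition, so the jump of \(\psi\) is unambiguous; all values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = {0: 0.5 * sx, 1: -0.5 * sx}
L = {(0, 1): 0.3 * np.diag([1.0, -1.0]).astype(complex),
     (1, 0): 0.3 * np.array([[0, 1], [0, 0]], dtype=complex)}

def trajectory(psi, x, dt=1e-3, steps=5000):
    """One hybrid jump trajectory (x_t, psi_t) of eqs. (19)-(21)."""
    for _ in range(steps):
        xp = 1 - x                                      # the only possible destination here
        Lpsi = L[(xp, x)] @ psi
        rate = np.vdot(Lpsi, Lpsi).real                 # T(x', x) = <L^dag L>
        if rng.random() < rate * dt:                    # joint jump of x and psi
            x, psi = xp, Lpsi / np.sqrt(rate)
        else:                                           # frictional evolution (20)
            dpsi = (-1j * H[x] @ psi
                    - 0.5 * L[(xp, x)].conj().T @ Lpsi
                    + 0.5 * rate * psi)
            psi = psi + dt * dpsi
            psi /= np.linalg.norm(psi)                  # remove O(dt^2) numerical norm drift
    return x, psi

x, psi = trajectory(np.array([1.0, 0.0], dtype=complex), x=0)
print(x, np.abs(psi) ** 2)
```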
The proof that the unraveling satisfies the condition (13) is the following. With the notation \(\hat{P}=\psi\psi^{\dagger}\), the condition reads
\[d\hat{\rho}(z,t)=d\mathbb{M}\hat{P}_{t}\delta(z,x_{t}), \tag{22}\]
where \(d\hat{\rho}(z,t)\) is determined by the HME (11). The unraveling yields two terms for the change of \(\hat{P}\delta(z,x)\) in time \(dt\):
\[\hat{P}\delta(z,x)\to \left(1-T(x)dt\right)\times\left(\hat{P}-i[\hat{H}(x),\hat{P}] dt-i[\hat{H}_{\rm fr}(x),\hat{P}]_{+}dt+T(x)\hat{P}dt\right)\delta(z,x) \tag{23}\] \[+\sum_{\alpha}T_{\alpha}(x^{\prime},x)dt\times\frac{\hat{L}_{\alpha}(x^{\prime},x)\hat{P}\hat{L}_{\alpha}^{\dagger}(x^{\prime},x)}{T_{\alpha}(x^{\prime},x)}\delta(z,x^{\prime}).\]
The first term comes from the deterministic eqs. (19,20), the second term represents the average of jumps (21). The \(\psi\)-dependent transition rates \(T_{\alpha}(x^{\prime},x)\) and \(T(x)\) cancel, we are left with an expression linear in \(\hat{P}\delta(z,x)\). Taking its mean obtains this:
\[\frac{d}{dt}\mathbb{M}\hat{P}\delta(z,x)=\mathbb{M}\left(-i[\hat{H}(x),\hat{P}]-i[\hat{H}_{\rm fr}(x),\hat{P}]_{+}\right)\delta(z,x)+\mathbb{M}\hat{L}_{\alpha}(x^{\prime},x)\hat{P}\hat{L}_{\alpha}^{\dagger}(x^{\prime},x)\delta(z,x^{\prime}) \tag{24}\]
If we recall the expression (17) of \(\hat{H}_{\rm fr}\) then we can recognize that \(\mathbb{M}\hat{P}\delta(z,x)\) satisfies the HME (11).
The unraveling (19-21) is ambiguous. The shifts (12) leave the HME (11) invariant but we get different unravelings. Nevertheless, the unraveling becomes invariant and unique with the replacement (cf. [44]):
\[\hat{L}_{\alpha}(x,y)\to\hat{L}_{\alpha}(x,y)-\langle\hat{L}_{\alpha}(x,y) \rangle\delta(x,y). \tag{25}\]
One would think that in eq. (21) the state \(\psi_{t}\) and \(x_{t}\) jump together. This is not true if \(\hat{L}_{\alpha}(x,x)\neq 0\), since \(\hat{L}_{\alpha}(x,x)\) generates a nontrivial jump of \(\psi_{t}\) and no jump of \(x_{t}\). So, to synchronize the jumps of \(\psi\) and \(x\), either we consider the invariant modification (25) of the unraveling or we just request that \(\hat{L}_{\alpha}(x,x)\) vanish for all \(\alpha\) and \(x\).
### Monitoring the quantum trajectory
The quantum trajectory \(\psi_{t}\) is not observable in general since its detection inevitably perturbs it. That is the problem of _monitoring_ the quantum system. Can we, just by monitoring the classical subsystem's \(x_{t}\), monitor the evolution of \(\psi_{t}\)? Generally we cannot. If the number of generators \(\hat{L}_{\alpha}(x,y)\) is more than one, detecting a jump of \(x_{t}\) does not tell us which generator made \(\psi_{t}\) jump. The jump of \(x_{t}\) leaves the jump of \(\psi_{t}\) in eq. (21) undetermined. This ambiguity can be fixed in a natural class of special hybrid systems.
Let us add a vectorial structure \(\{x^{\alpha}\}\) to the classical discrete space and assume the following specific form of the hybrid generators \(\hat{L}_{\alpha}(x,y)\):
\[\hat{L}_{\alpha}(x^{\alpha},y^{\alpha})\prod_{\beta\neq\alpha} \delta(x^{\beta},y^{\beta}), \tag{26}\]
also with \(\hat{L}_{\alpha}(x^{\alpha},x^{\alpha})=0\). The jump rates \(T_{\alpha}(x^{\alpha},y^{\beta})\) vanish if \(\alpha\neq\beta\), cf. eq. (15). Now, if we observe a jump of \(x^{\alpha}_{t}\) we can uniquely determine the jump of \(\psi_{t}\). With the vectorized classical variables \(x^{\alpha}\), the unraveling (19-21) of the HME (11) becomes unique and the quantum trajectory can be monitored.
## III From discrete to diffusive hybrid master equation
Although discrete classical systems are important, most classical systems of interest, like the Hamiltonian ones, are continuous. Therefore we are going to construct the continuous limit of the obtained discrete HME (11). As is known, the only continuous classical Markovian process is diffusion with an additional deterministic drift. We start with the generic HME (11) on a discrete grid of spacing \(\epsilon\) in the multidimensional continuum of points \(x=\{x^{n}\}\) and generate a diffusion process in the continuous limit \(\epsilon\to 0\).
The set of hybrid generators \(\hat{L}_{\alpha}(x,y)\) will be completed by linearly independent classical ones \(L_{n}(x,y)\). These additional terms are intended to generate diffusion terms for the classical subsystem in the continuous limit. The doubled set of generators will be the following:
\[\hat{L}_{\alpha}(x,y) = \hat{L}_{\alpha}(y)\prod_{n}\frac{\delta(x^{n}-y^{n},\epsilon)+\delta(y^{n}-x^{n},\epsilon)}{\sqrt{2}}, \tag{27}\] \[L_{n}(x,y) = \frac{\delta(x^{n}-y^{n},\epsilon)-\delta(y^{n}-x^{n},\epsilon)}{\sqrt{2}\epsilon}\prod_{m\neq n}\frac{\delta(x^{m}-y^{m},\epsilon)+\delta(y^{m}-x^{m},\epsilon)}{\sqrt{2}}. \tag{28}\]
We introduce the positive semi-definite complex decoherence matrix \(D_{\rm Q}^{\alpha\beta}\), the positive semi-definite real diffusion matrix \(D_{\rm C}^{nm}\) and the arbitrary complex matrix \(G_{\rm CQ}^{n\alpha}\) of backaction. Consider the following Hermitian block-matrix and let it be positive semi-definite:
\[{\cal D}=\begin{bmatrix}D_{\rm Q}&G_{\rm CQ}^{\dagger}\\ G_{\rm CQ}&D_{\rm C}\end{bmatrix}\geq 0. \tag{29}\]
We can see that nonzero backaction will require both nonzero decoherence and diffusion. In addition to the constraints \(D_{\rm Q}\geq 0,D_{\rm C}=D_{\rm C}^{\dagger}\geq 0\) that we always take for granted, there are further constraints on matrices \(D_{\rm Q},D_{\rm C},G_{\rm CQ}\), equivalent to (29), to be shown in sec. IV.1.
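In practice the constraint (29) is easy to test numerically; a minimal numpy check is sketched below (the scalar example values are illustrative and show that a backaction of strength \(|g|\) requires \(D_{\rm Q}D_{\rm C}\geq|g|^{2}\)).

```python
import numpy as np

def hybrid_noise_ok(D_Q, D_C, G_CQ, tol=1e-12):
    """Positive semi-definiteness of the block matrix (29): [[D_Q, G^dag], [G, D_C]] >= 0."""
    D = np.block([[D_Q, G_CQ.conj().T], [G_CQ, D_C]])
    return bool(np.all(np.linalg.eigvalsh(D) >= -tol))

D_Q = np.array([[1.0 + 0j]])                                # decoherence strength
D_C = np.array([[0.5 + 0j]])                                # diffusion strength
print(hybrid_noise_ok(D_Q, D_C, np.array([[0.7 + 0j]])))    # True:  0.49 <= 0.5
print(hybrid_noise_ok(D_Q, D_C, np.array([[0.8 + 0j]])))    # False: 0.64 >  0.5
```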
The following HME generates CP maps (since it reduces to the HME (11) if we diagonalize \({\cal D}\)):
\[\frac{d\hat{\rho}(x)}{dt}=-i[\hat{H}(x),\hat{\rho}(x)] +D_{\rm Q}^{\beta\alpha}\sum_{y}\Bigl{(}\hat{L}_{\alpha}(x,y)\hat {\rho}(y)\hat{L}_{\beta}^{\dagger}(x,y)-\mathbb{H}\hat{L}_{\beta}^{\dagger}(x, y)\hat{L}_{\alpha}(y,x)\hat{\rho}(x)\Bigr{)} \tag{30}\] \[+D_{\rm C}^{nm}\sum_{y}\Bigl{(}L_{n}(x,y)L_{m}(x,y)\hat{\rho}(y)- L_{n}(y,x)L_{m}(y,x)\hat{\rho}(x)\Bigr{)}\] \[+\overline{G}_{\rm CQ}^{n\alpha}\sum_{y}\left(L_{n}(x,y)\hat{L}_{ \alpha}(x,y)\hat{\rho}(y)-L_{n}(y,x)\mathbb{H}\hat{L}_{\alpha}(x,y)\hat{\rho} (x)\right).\]
In the continuous limit \(\epsilon\to 0\), the terms with \(\hat{L}_{\alpha},\hat{L}_{\beta}\) contribute to standard GKLS structures:
\[\hat{L}_{\alpha}(x)\hat{\rho}(x)\hat{L}_{\beta}^{\dagger}(x)- \mathbb{H}\hat{L}_{\beta}^{\dagger}(x)\hat{L}_{\alpha}(x)\hat{\rho}(x). \tag{31}\]
The terms with \(L_{n},L_{m}\) yield diffusion of the classical variable \(x\). With the notation \(\epsilon^{n}\) meaning a vector whose only nonzero component is the \(n^{\prime}th\) one, and it is \(\epsilon\), the yield at \(n\neq m\) reads:
\[\frac{1}{2\epsilon^{2}}\Bigl{(}\hat{\rho}(x+\epsilon^{n}+\epsilon^{m})+\hat{ \rho}(x-\epsilon^{n}-\epsilon^{m})-\hat{\rho}(x-\epsilon^{n}+\epsilon^{m})- \hat{\rho}(x+\epsilon^{n}-\epsilon^{m})\Bigr{)}\underset{\epsilon\to 0}{\longrightarrow}\frac{1}{2} \partial_{n}\partial_{m}\hat{\rho}(x). \tag{32}\]
We get the similar yield for \(n=m\):
\[\frac{1}{2\epsilon^{2}}\Big{(}\hat{\rho}(x+\epsilon^{n})+\hat{\rho}(x-\epsilon^{n })-2\hat{\rho}(x)\Big{)}\underset{\epsilon\to 0}{\longrightarrow}\frac{1}{2} \partial_{n}\partial_{n}\hat{\rho}(x). \tag{33}\]
The cross-terms with \(\hat{L}_{\alpha},L_{n}\) generate the non-trivial backaction of the quantum system on the classical part:
\[\frac{1}{2\epsilon}\Big{(}\hat{L}_{\alpha}(x+\epsilon^{n})\hat{\rho}(x+ \epsilon^{n})-\hat{L}_{\alpha}(x-\epsilon^{n})\hat{\rho}(x-\epsilon^{n})\Big{)} +\mathsf{h.c.}\underset{\epsilon\to 0}{\longrightarrow}\partial_{n}\left(\hat{L}_{ \alpha}(x)\hat{\rho}(x)\right)+\mathsf{h.c.} \tag{34}\]
Using these limits in eq. (30) and adding a deliberate classical drift of velocity \(V(x)\), we obtain the continuous limit of the discrete HME (11):
\[\frac{d\hat{\rho}(x)}{dt}= -i[\hat{H}(x),\hat{\rho}(x)]+D_{\rm Q}^{\beta\alpha}\Big{(}\hat{ L}_{\alpha}(x)\hat{\rho}(x)\hat{L}_{\beta}^{\dagger}(x)-\mathbb{H}\hat{L}_{ \beta}^{\dagger}(x)\hat{L}_{\alpha}(x)\hat{\rho}(x)\Big{)} \tag{35}\] \[+\tfrac{1}{2}D_{\rm C}^{nm}\partial_{n}\partial_{m}\hat{\rho}(x) +\left(\overline{G}_{\rm CQ}^{n\alpha}\partial_{n}\left(\hat{L}_{\alpha}(x) \hat{\rho}(x)\right)+\mathsf{h.c.}\right)-\partial_{n}\left(V^{n}(x)\hat{\rho} (x)\right).\]
The three _constant_ parameter matrices \(D_{\rm Q},D_{\rm C},G_{\rm CQ}\) are constrained by the semi-definiteness \(\mathcal{D}\geq 0\) of the block-matrix (29) formed by them.
The above HME is not yet the most general diffusive one. Obviously we can add an extra \(x\)-dependent decoherence \(\Delta D_{\rm Q}(x)\) as well as an extra diffusion \(\Delta D_{\rm C}(x)\) as long as \(\mathcal{D}\geq 0\) holds true after the replacements \(D_{\rm Q}\to D_{\rm Q}+\Delta D_{\rm Q}(x)\) and \(D_{\rm C}\to D_{\rm C}+\Delta D_{\rm C}(x)\). That is, the validity of the diffusive HME (35) extends to \(x\)-dependent parameters \(D_{\rm Q}(x)\) and \(D_{\rm C}(x)\) provided \(\mathcal{D}(x)\geq 0\). In the next section we show that also the backaction matrix \(G_{\rm CQ}\) can depend on \(x\).
## IV Covariant hybrid master equation
The form (35) of the HME is explicitly covariant under global linear transformations of the operator basis \(\hat{L}_{\alpha}(x)\) and of the classical variables \(x\). We are interested in an explicitly covariant form under local, i.e. \(x\)-dependent, complex linear transformations of the operator basis and under general coordinate transformations of \(x\). The covariant form of the HME (35) should be this (cf. [25]):
\[\frac{d\hat{\rho}}{dt}=-i[\hat{H},\hat{\rho}]+D_{\rm Q}^{\beta\alpha}\Big{(} \hat{L}_{\alpha}\hat{\rho}\hat{L}_{\beta}^{\dagger}-\mathbb{H}\hat{L}_{\beta} ^{\dagger}\hat{L}_{\alpha}\hat{\rho}\Big{)}+\tfrac{1}{2}\partial_{n}\partial_{ m}(D_{\rm C}^{nm}\hat{\rho})+\partial_{n}(\overline{G}_{\rm CQ}^{n \alpha}\hat{L}_{\alpha}\hat{\rho}+\mathsf{h.c.})-\partial_{n}\left(V^{n}\hat{ \rho}\right) \tag{36}\]
where every object is a function of \(x\), a fact our notation hides for the sake of compactness. The coefficients \(D_{\rm Q}(x),D_{\rm C}(x),G_{\rm CQ}(x)\) satisfy the same constraint (29) \(\mathcal{D}(x)\geq 0\) as before, now understood for all \(x\):
\[\mathcal{D}(x)=\begin{bmatrix}D_{\rm Q}^{\alpha\beta}(x)&\overline{G}_{\rm CQ}^{m\beta}(x)\\ G_{\rm CQ}^{n\alpha}(x)&D_{\rm C}^{nm}(x)\end{bmatrix}\geq 0. \tag{37}\]
We prove the equivalence of the covariant HME (36) with (35). By a suitable choice of the operator basis \(\hat{L}_{\alpha}(x)\) and of the classical coordinates \(x^{n}\), one can always transform \(G_{\rm CQ}(x)\) into a constant matrix. This results in the HME (35) which, as we argued there, is valid for \(x\)-dependent \(D_{\rm Q},D_{\rm C}\).
The covariant diffusive HME (36) is the most general continuous HME generating CP dynamics. Every object in it is \(x\)-dependent. The generators \(\hat{L}_{\alpha}\) form an operator basis for each \(x\), not necessarily an orthogonal one. The hybrid Hamiltonian \(\hat{H}\) and the classical drift \(V\) are arbitrary. The Hermitian matrix \(D_{\rm Q}\geq 0\) of decoherence and the real matrix \(D_{\rm C}\geq 0\) of diffusion must form the positive semi-definite block matrix (37) in the corners with the matrix \(G_{\rm CQ}\) (and \(G_{\rm CQ}^{\dagger}\)) of backaction. We list three useful alternatives that can always be achieved by a transformation of the reference frames: a fixed operator basis \(\{\hat{L}_{\alpha}\}\), or simultaneously diagonal \(D_{\rm Q}\) and \(D_{\rm C}\) with zeros and ones, or \(G_{\rm CQ}\) with zeros and ones in the main diagonal and zeros elsewhere. (These coordinate transformations may require embedding \(x\) in a space of higher dimension than it originally has.)
In addition to the explicit covariance, there is a further 'gauge' freedom, the descendant of the shifts (12) in the discrete HME:
\[\hat{L}_{\alpha} \rightarrow \hat{L}_{\alpha}+\ell_{\alpha},\] \[\hat{H} \rightarrow \hat{H}-\frac{i}{2}\big{(}\ell_{\alpha}^{*}\hat{L}_{\alpha}-\mathsf{h.c.}\big{)},\] \[V^{n} \rightarrow V^{n}-(\overline{G}_{\rm CQ}^{n\alpha}\ell_{\alpha}+\mathsf{h.c.}), \tag{38}\]
where \(\ell_{\alpha}(x)\) is an arbitrary complex function.
### Minimum-noise threshold
We are going to break down the condition \(\mathcal{D}\geq 0\) (37) into constraints between \(\mathcal{D}\)'s building blocks. Necessary conditions are the following:
\[\mathsf{range}D_{\rm Q} \geq \mathsf{range}G_{\rm CQ}^{\dagger}G_{\rm CQ}, \tag{39}\] \[\mathsf{range}D_{\rm C} \geq \mathsf{range}G_{\rm CQ}G_{\rm CQ}^{\dagger}. \tag{40}\]
They express that the range of decoherence \(D_{\rm Q}\) cannot be narrower than the range of \(\hat{L}_{\alpha}\)'s that are coupled to the classical \(x^{n}\)'s by backaction \(G_{\rm CQ}\). And similarly, the range of classical diffusion \(D_{\rm C}\) cannot be smaller than the range of \(x^{n}\)'s that are coupled to the \(\hat{L}_{\alpha}\)'s. Nonzero backaction means mandatory noise: both decoherence and diffusion.
Suppose that we have a given nonzero matrix \(G_{\rm CQ}\) of backaction and we are interested in a certain minimum of the total irreversibility, i.e., a certain minimum of the block-matrix \(\mathcal{D}\), implying a certain minimum of the decoherence \(D_{\rm Q}\) and diffusion \(D_{\rm C}\), as we see below. Obviously, the strict positivity \(\mathcal{D}>0\) means more noise than the minimum. We are interested in the maximally degenerate \(\mathcal{D}\), meaning the lowest \(\mathsf{rank}\mathcal{D}\), which is bounded from below by the rank \(r_{\rm CQ}=\mathsf{rank}G_{\rm CQ}\), otherwise \(\mathcal{D}\geq 0\) cannot be true. Therefore we define the minimum-noise threshold by this:
\[\mathsf{rank}\mathcal{D}=\mathsf{rank}G_{\rm CQ}. \tag{41}\]
Then the inequalities (39,40) saturate, \(\mathsf{range}D_{\rm Q}=\mathsf{range}G_{\rm CQ}^{\dagger}G_{\rm CQ}\) and \(\mathsf{range}D_{\rm C}=\mathsf{range}G_{\rm CQ}G_{\rm CQ}^{\dagger}\), also meaning that \(\mathsf{rank}D_{\rm Q}=\mathsf{rank}D_{\rm C}=r_{\rm CQ}\). Both the number of coupled independent generators \(\hat{L}_{\alpha}\) and the number of coupled coordinates \(x^{n}\) coincide with \(r_{\rm CQ}\).
For a given \(x\), transform \(G_{\rm CQ}\) into the frame where its elements are zero except for an \(r_{\rm CQ}\times r_{\rm CQ}\) unit matrix in the upper-left corner. Then, because of (39,40), both \(D_{\rm Q}\) and \(D_{\rm C}\) must have an \(r_{\rm CQ}\times r_{\rm CQ}\) strictly positive matrix in their upper-left corners and zeros elsewhere. Recall the general condition \(\mathcal{D}\geq 0\). If we drop the rows and columns of zeros then the said non-zero \(r_{\rm CQ}\times r_{\rm CQ}\) sub-matrices, denoted by the same symbols, must be each other's inverses [12]:
\[D_{\rm C}(x)D_{\rm Q}(x)=I. \tag{42}\]
This is the condition of the minimum-noise threshold (41) in the special frame fitted to the backaction matrix.
We can now identify quantitatively and directly the lower bound of irreversibility that hybrid systems must undergo even if the quantum and classical subsystems were reversible in themselves. At a given backaction strength \(G_{\rm CQ}\) (scaled now to be the unity) and at the minimum noise threshold, the quantum decoherence strength \(D_{\rm Q}\) and the classical diffusion strength \(D_{\rm C}\) are inverses of each other; lower decoherence requires higher diffusion and vice versa.
It is possible to decompose the constraint \(\mathcal{D}\geq 0\) as well as the minimum noise condition (41) into covariant relationships (i.e.: valid in any reference frame) for the three parameter matrices. Two equivalent forms of \(\mathcal{D}\geq 0\) are the following [25]:
\[G_{\rm CQ}\frac{1}{D_{\rm Q}}G_{\rm CQ}^{\dagger} \leq D_{\rm C}, \tag{43}\] \[G_{\rm CQ}\frac{1}{D_{\rm Q}}D_{\rm Q}G_{\rm CQ}^{\dagger} = G_{\rm CQ}G_{\rm CQ}^{\dagger}; \tag{44}\]
and
\[G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}G_{\rm CQ} \leq D_{\rm Q}, \tag{45}\] \[G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}D_{\rm C}G_{\rm CQ} = G_{\rm CQ}^{\dagger}G_{\rm CQ}, \tag{46}\]
where \(1/D_{\rm Q}\) and \(1/D_{\rm C}\) are generalized inverses. The threshold condition \({\sf rank}\mathcal{D}={\sf rank}G_{\rm CQ}\) of minimum noise corresponds to the saturation of the inequalities into equalities. Note that \((1/D_{\rm Q})D_{\rm Q}\) and \((1/D_{\rm C})D_{\rm C}\) are the projectors onto \({\sf range}D_{\rm Q}\) and \({\sf range}D_{\rm C}\); hence the second lines in eqs. (44,46) correspond to the mentioned identities \({\sf range}D_{\rm Q}={\sf range}G_{\rm CQ}^{\dagger}G_{\rm CQ}\) and \({\sf range}D_{\rm C}={\sf range}G_{\rm CQ}G_{\rm CQ}^{\dagger}\), respectively. Under the above covariant parametric constraints the covariant HME (36) and its unravellings (49,50) include all possible dynamics that contain noise on and above the threshold of consistency. (Appendix A shows the sharpness of the above conditions on the simplest HME.)
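The covariant conditions (43)-(46) can be verified directly with generalized (Moore-Penrose) inverses; a minimal numpy sketch follows, with the diagonal example parameters chosen to sit exactly at, and then below, the threshold (42).

```python
import numpy as np

def noise_consistent(D_Q, D_C, G_CQ, tol=1e-10):
    """Check eqs. (43)-(46): G (1/D_Q) G^dag <= D_C, G^dag (1/D_C) G <= D_Q, plus range conditions."""
    iD_Q, iD_C = np.linalg.pinv(D_Q), np.linalg.pinv(D_C)
    ineq1 = np.all(np.linalg.eigvalsh(D_C - G_CQ @ iD_Q @ G_CQ.conj().T) >= -tol)
    ineq2 = np.all(np.linalg.eigvalsh(D_Q - G_CQ.conj().T @ iD_C @ G_CQ) >= -tol)
    rng1 = np.allclose(G_CQ @ iD_Q @ D_Q, G_CQ)      # range(D_Q) covers the coupled generators
    rng2 = np.allclose(G_CQ.conj().T @ iD_C @ D_C, G_CQ.conj().T)
    return bool(ineq1 and ineq2 and rng1 and rng2)

D_Q = np.diag([2.0 + 0j, 0.5])
G = np.eye(2, dtype=complex)
print(noise_consistent(D_Q, np.linalg.inv(D_Q), G))          # True: minimum-noise threshold D_C = 1/D_Q
print(noise_consistent(D_Q, 0.4 * np.linalg.inv(D_Q), G))    # False: diffusion below the threshold
```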
### Stochastic unravellings
When we construct unravellings of the HME (36), we follow the steps of sec. II.1 and construct two correlated stochastic processes for \(x_{t}\) and \(\psi_{t}\) satisfying the condition of unraveling (13). The two processes will be diffusive ones this time. First, take the trace of the diffusive HME (35) and obtain the classical Fokker-Planck equation of the classical subsystem:
\[\frac{d\rho(x)}{dt}=\frac{1}{2}D_{\rm C}^{nm}\partial_{n}\partial_{m}\rho(x)- \partial_{n}\left(V^{n}(x)\rho(x)-2\mathbb{R}\overline{G}_{\rm CQ}^{n\alpha}( x)\langle\hat{L}_{\alpha}(x)\rangle\rho(x)\right), \tag{47}\]
where \(\mathbb{R}\) denotes the real part of the subsequent expression. The unraveling of this equation will be a unique Brownian motion with a unique \(\psi\)-dependent drift. To unravel the quantum subsystem, we need the anti-Hermitian (frictional) hybrid Hamiltonian defined by
\[-i\hat{H}_{\rm fr}(x)=-\frac{1}{2}D_{\rm Q}^{\beta\alpha}\Big{(}\Big{(}\hat{L }_{\alpha}^{\dagger}(x)-\langle\hat{L}_{\alpha}^{\dagger}(x)\rangle\Big{)} \left(\hat{L}_{\beta}(x)-\langle\hat{L}_{\beta}(x)\rangle\right)+\Big{(} \langle\hat{L}_{\alpha}^{\dagger}(x)\rangle\hat{L}_{\beta}(x)-{\sf h.c.}\Big{)} \Big{)}. \tag{48}\]
The hybrid unraveling consists of two correlated diffusive stochastic processes, one for \(x_{t}\), another for \(\psi_{t}\):
\[dx^{n} = V^{n}(x)dt-2\mathbb{R}\overline{G}_{\rm CQ}^{n\alpha}(x)\langle \hat{L}_{\alpha}(x)\rangle dt+dW^{n}(x), \tag{49}\] \[d\psi = -i\left(\hat{H}(x)+\hat{H}_{\rm fr}(x)\right)\psi dt+\big{(}\hat{ L}_{\alpha}(x)-\langle\hat{L}_{\alpha}(x)\rangle\big{)}\psi d\overline{ \xi}^{\alpha}(x), \tag{50}\]
where \(dW^{n}(x)\) is real, and \(d\xi^{\alpha}(x)\) is complex zero-mean Ito-differential of auxiliary stochastic processes, correlated as follows:
\[dW^{n}dW^{m} = D_{\rm C}^{nm}dt,\] \[d\xi^{\alpha}d\overline{\xi}^{\beta} = D_{\rm Q}^{\alpha\beta}dt, \tag{51}\] \[dW^{n}d\xi^{\alpha} = G_{\rm CQ}^{n\alpha}dt.\]
Let us use the vector symbols \(dW,d\xi\), then we get the equivalent compact form of correlations:
\[\begin{bmatrix}d\xi d\xi^{\dagger}&d\xi dW^{T}\\ dWd\xi^{\dagger}&dWdW^{T}\end{bmatrix}=\mathcal{D}dt. \tag{52}\]
We prove that the unraveling (49,50) satisfies the condition (13). Like in sec. II.1, we use the notation \(\hat{P}=\psi\psi^{\dagger}\) and the same form (22) of the condition to be proved:
\[d\hat{\rho}(z,t)=d\mathbb{M}\hat{P}\delta(z-x). \tag{53}\]
The Ito-differential on the rhs contains three terms and we are going to express them by the equations of \(d\psi\) and \(dx\) of the unraveling (see also [27]). First, to calculate \(d\hat{P}=d\psi\,\psi^{\dagger}+\psi\,d\psi^{\dagger}+d\psi\,d\psi^{\dagger}\) we use the stochastic equation (50) of \(d\psi\):
\[d\hat{P}=-i[\hat{H}(x),\hat{P}]dt+D_{\rm Q}^{\beta\alpha}\left(\hat{L}_{\alpha}(x)\hat{P}\hat{L}_{\beta}^{\dagger}(x)-\mathbb{H}\hat{L}_{\beta}^{\dagger}(x)\hat{L}_{\alpha}(x)\hat{P}\right)dt+\Big{(}\big{(}\hat{L}_{\alpha}(x)-\langle\hat{L}_{\alpha}(x)\rangle\big{)}\hat{P}d\overline{\xi}^{\alpha}(x)+{\sf h.c.}\Big{)}. \tag{54}\]
Second, we calculate \(d\delta(z-x)\) using stochastic equation (49) of \(dx\):
\[d\delta(z-x)=\tfrac{1}{2}D_{\rm C}^{nm}\partial_{n}\partial_{m}\delta(z-x)dt+ \Big{(}V^{n}(x)-2\mathbb{R}\overline{G}_{\rm CQ}^{n\alpha}\langle\hat{L}_{ \alpha}(x)\rangle\Big{)}\,\partial_{n}\delta(z-x)dt+\left(\partial_{n} \delta(z-x)\right)dW^{n}(x), \tag{55}\]
where the partial derivations refer to \(x\) obviously. From here, we get the three terms of \(d\mathbb{M}\hat{P}\delta(z-x)\) after using \(\partial\delta(z-x)/\partial x^{n}=-\partial\delta(z-x)/\partial z^{n}\), then taking the stochastic mean over \(dW,d\xi\) and \(x\):
\[\mathbb{M}d\hat{P}\delta(z-x) = -i[\hat{H}(z),\hat{\rho}(z)]dt+D_{\rm Q}^{\beta\alpha}\left(\hat{L}_{\alpha}(z)\hat{\rho}(z)\hat{L}_{\beta}^{\dagger}(z)-\mathbb{H}\hat{L}_{\beta}^{\dagger}(z)\hat{L}_{\alpha}(z)\hat{\rho}(z)\right)dt,\] \[\mathbb{M}\hat{P}d\delta(z-x) = \tfrac{1}{2}D_{\rm C}^{nm}\partial_{n}\partial_{m}\hat{\rho}(z)dt-\partial_{n}\Big{(}\big{(}V^{n}(z)-2\mathbb{R}\overline{G}_{\rm CQ}^{n\alpha}\langle\hat{L}_{\alpha}(z)\rangle\big{)}\hat{\rho}(z)\Big{)}dt, \tag{56}\] \[\mathbb{M}d\hat{P}d\delta(z-x) = -\overline{G}_{\rm CQ}^{n\alpha}\partial_{n}\Big{(}\big{(}\hat{L}_{\alpha}-\langle\hat{L}_{\alpha}\rangle\big{)}\,\hat{\rho}(z)\Big{)}dt+{\sf h.c.}\]
By taking the sum of these three equations we recognize that \(\mathbb{M}\hat{P}\delta(z-x)\) satisfies the HME (35).
The hybrid unravelling corresponds to the time-continuous measurement of the observables \(\left(\overline{G}_{\rm CQ}^{n\alpha}\hat{L}_{\alpha}(x)+{\sf h.c.}\right)\). The hybrid unravelling contains an autonomous drift of the classical variables and general feedbacks: the Hamiltonian, the decoherence matrix, the measured observable, and the measurement noise can depend on the measured signal \(x\). These could be part of time-continuous measurement but usually they are not. (Except for typical feedback Hamiltonians, linear in \(dx/dt\).)
As we can easily inspect, the unraveling (49,50) is invariant under the shifts (38). But in general, it is not covariant under the linear transformations of the operator basis \(\{\hat{L}_{\alpha}(x)\}\) of the HME! Therefore the unraveling of the HME is not unique; it inherits the ambiguity of unravellings [5] of quantum mechanical GKLS dynamics. Observe that we left the non-Hermitian correlation \(d\xi^{\alpha}(x)d\xi^{\beta}(x)\) unspecified whereas the stochastic processes \((x_{t},\psi_{t})\) depend on it. At deliberate choices of \(d\xi^{\alpha}d\xi^{\beta}\) we get different unravellings of the same HME (36).
Covariance of the unravelling can be achieved if the complex noise \(d\xi^{\alpha}\) is covariant. The simplest way is if we set
\[d\xi^{\alpha}d\xi^{\beta}=0, \tag{57}\]
see [45; 46], also [44]. Another option of a covariant (and real) \(d\xi^{\alpha}\) will be shown in sec. IV.3.
Just like the unravellings of standard GKLS master equations, the hybrid unravellings need not be in terms of pure states \(\psi_{t}\). We shall discuss generic mixed state unravellings later in this section. Before that, a specific case is considered which is the closest generalization of the pure state unravelings (49,50). The extension from pure states \(\psi_{t}\) to mixed states \(\hat{\sigma}_{t}\) is straightforward since the formalism is very similar. The pure state density operator \(\hat{P}\) gives way to \(\hat{\sigma}\) and the equation of \(d\psi\) is replaced by an equation of \(d\hat{\sigma}\). Accordingly, the definition of unraveling reads
\[\hat{\rho}(z,t)=\mathbb{M}\hat{\sigma}_{t}\delta(z-x_{t}), \tag{58}\]
The eq. (49) of \(dx^{n}\) is the same as before; the eq. (50) gives way to the equation of \(d\hat{\sigma}\):
\[d\hat{\sigma}=-i[\hat{H}(x),\hat{\sigma}]dt+D_{\rm Q}^{\beta\alpha}\left(\hat{ L}_{\alpha}(x)\hat{\sigma}\hat{L}_{\beta}^{\dagger}(x)-\mathbb{H}\hat{L}_{\beta}^{ \dagger}(x)\hat{L}_{\alpha}(x)\hat{\sigma}\right)+\left((\hat{L}_{\alpha}(x)- \langle\hat{L}_{\alpha}(x)\rangle)\hat{\sigma}d\overline{\xi}^{\alpha}(x)+{ \sf h.c.}\right). \tag{59}\]
With the replacement \(\hat{P}\rightarrow\hat{\sigma}\), the equations (47-56) of the previous proof apply in exactly the same form and show that the process \((x_{t},\hat{\sigma}_{t})\) unravels the HME (36). Notice the purification feature of (59), known from standard unravellings of GKLS master equations. If the state is not pure, i.e. \({\sf tr}\hat{\sigma}^{2}<1\), then
\[\frac{d}{dt}\mathbb{M}{\sf tr}\hat{\sigma}^{2} = \frac{2}{dt}\mathbb{M}{\sf tr}(\hat{\sigma}d\hat{\sigma})+\frac{1} {dt}{\sf tr}(d\hat{\sigma})^{2}= \tag{60}\] \[= 2D_{\rm Q}^{\beta\alpha}{\sf tr}\left(\hat{\sigma}\hat{L}_{\alpha }\hat{\sigma}\hat{L}_{\beta}^{\dagger}-\mathbb{H}\hat{\sigma}\hat{L}_{\beta}^{ \dagger}\hat{L}_{\alpha}\hat{\sigma}\right)+{\sf tr}\left((\hat{L}_{\alpha}- \langle\hat{L}_{\alpha}\rangle)\hat{\sigma}^{2}(\hat{L}_{\beta}^{\dagger}- \langle\hat{L}_{\beta}^{\dagger}\rangle)\right)\] \[= 2D_{\rm Q}^{\beta\alpha}{\sf tr}\left(\left(\sqrt{\hat{\sigma}}( \hat{L}_{\beta}-\langle\hat{L}_{\beta}\rangle)\sqrt{\hat{\sigma}}\right)^{ \dagger}\left(\sqrt{\hat{\sigma}}(\hat{L}_{\alpha}-\langle\hat{L}_{\alpha} \rangle)\sqrt{\hat{\sigma}}\right)\right)>0.\]
An arbitrary initial mixed state \(\hat{\sigma}_{t}\) will be purified asymptotically until \({\sf tr}\hat{\sigma}^{2}=1\) and then the mixed state eq. (59) becomes equivalent with the pure state eq. (50).
Given a covariant HME (36), the family of unravellings is larger than the above family of perfect purifying ones. There are partially purifying unravellings if the HME is above the threshold of minimum noise, cf. (45):
\[D_{\rm Q}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}G_{\rm CQ}+\Delta D_{\rm Q} \equiv D_{\rm Qmin}+\Delta D_{\rm Q} \tag{61}\]
where \(\Delta D_{\rm Q}\geq 0\). Then the eq. (59) of mixed state unravelling remains the same but the correlation \(d\xi d\xi^{\dagger}=D_{\rm Q}dt\) will be reduced to the minimum noise threshold
\[d\xi d\xi^{\dagger}=D_{\rm Qmin}dt=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}G_{ \rm CQ}dt. \tag{62}\]
The other correlations \(dWd\xi=G_{\rm CQ}dt\) and \(dWdW^{T}=D_{\rm C}dt\) are unchanged. The price we pay for the reduced noise is illustrated if we group the terms of (59) as follows:
\[d\hat{\sigma}=-i[\hat{H}(x),\hat{\sigma}]dt + \tfrac{1}{2}D_{\rm Qmin}^{\beta\alpha}\left(\hat{L}_{\alpha}(x) \hat{\sigma}\hat{L}_{\beta}^{\dagger}(x)-\mathbb{H}\hat{L}_{\beta}^{\dagger}(x )\hat{L}_{\alpha}(x)\hat{\sigma}\right)+\left((\hat{L}_{\alpha}(x)-\langle\hat {L}_{\alpha}(x)\rangle)\hat{\sigma}d\overline{\xi}^{\alpha}(x)+{\sf h.c.}\right) \tag{63}\] \[+ \tfrac{1}{2}\Delta D_{\rm Q}^{\beta\alpha}\left(\hat{L}_{\alpha}( x)\hat{\sigma}\hat{L}_{\beta}^{\dagger}(x)-\mathbb{H}\hat{L}_{\beta}^{\dagger}(x )\hat{L}_{\alpha}(x)\hat{\sigma}\right).\]
The first line corresponds to the perfect purifying unravelling at the minimum noise (62), whereas the decoherence term in the second line counters the purification. A highly mixed state becomes purer, while a weakly mixed state becomes more mixed. Mixedness may have a stationary value for certain HMEs and certain unravellings. We notice that the family of mixed state unravellings is even larger since we can always set
\[d\xi d\xi^{\dagger}=\eta D_{\rm Qmin},\ \ \ \ (0\leq\eta\leq 1). \tag{64}\]
The values \(0<\eta<1\) correspond to partially purifying mixed state unravellings, while \(\eta=1\) corresponds to perfect purification.
### Monitoring the diffusive quantum trajectory
We are going to show that, similarly to the jump trajectories in sec. II.2, diffusive quantum trajectories \(\psi_{t}\) can also be monitored if the classical trajectories \(x_{t}\) are observed. As we shall see, this option of monitoring constrains the parameters of the HME (36) and singles out a unique unravelling among the infinitely many.
Monitoring the quantum trajectory \(\psi_{t}\) via monitoring the classical \(x_{t}\) is possible if and only if \(d\psi\) (50) uniquely depends on \(dx\) (49). Hence, the vector \(d\xi\) must be a linear function of the vector \(dW\):
\[d\xi=F_{\rm QC}dW. \tag{65}\]
This deterministic relationship removes the ambiguity of the unravelling because it also specifies \(d\xi d\xi\), which was a free correlation in general unravellings, see eqs. (52). The above relationship should be consistent with the correlations (52). They imply two equations: \(D_{\rm Q}=F_{\rm QC}D_{\rm C}F_{\rm QC}^{\dagger}\) and \(G_{\rm CQ}^{\dagger}=F_{\rm QC}D_{\rm C}\). These equations possess the solution:
\[F_{\rm QC}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}, \tag{66}\]
and a constraint on the HME's parameters:
\[D_{\rm Q}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}G_{\rm CQ}. \tag{67}\]
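As a quick consistency check (a step not spelled out in the text), one can substitute (66) back into the two correlation conditions; assuming \(D_{\rm C}\) is symmetric, as befits a real diffusion matrix,
\[F_{\rm QC}D_{\rm C}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}D_{\rm C}=G_{\rm CQ}^{\dagger},\qquad F_{\rm QC}D_{\rm C}F_{\rm QC}^{\dagger}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}D_{\rm C}\frac{1}{D_{\rm C}}G_{\rm CQ}=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}G_{\rm CQ},\]
so the first condition is satisfied identically, while the second turns \(D_{\rm Q}=F_{\rm QC}D_{\rm C}F_{\rm QC}^{\dagger}\) into the constraint (67) on the HME's parameters rather than an equation for \(F_{\rm QC}\).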
This constraint is the condition that monitoring \(\psi_{t}\) be possible. Namely, we insert the solution (66) into (65) to obtain the desired map of \(dW\) into \(d\xi\):
\[d\xi=G_{\rm CQ}^{\dagger}\frac{1}{D_{\rm C}}dW. \tag{68}\]
It is remarkable that this equation defines a covariant \(d\xi\) and removes the ambiguity of \(d\xi d\xi\).
The condition (67) of monitoring coincides with the saturated condition (45) but, importantly, the other condition (46) of minimum noise is not necessary for monitoring. Accordingly, the option of monitoring is still guaranteed above the noise threshold: \({\sf range}D_{\rm C}\) can be larger than \({\sf range}G_{\rm CQ}G_{\rm CQ}^{\dagger}\). At some \(x\)'s, certain components of \(x\) may be decoupled from the quantum subsystem; they represent above-threshold noise which is redundant for, but does not prevent, the monitoring of \(\psi_{t}\).
Using (68), we can eliminate \(d\xi\) from the equations of unraveling. Then both \(dx\) and \(d\psi\) (or \(d\hat{\sigma}\)) are driven by the same real noise \(dW\). Accordingly, the eq. (50) becomes the following [27]:
\[d\psi=-i\left(\hat{H}(x)+\hat{H}_{\rm fr}(x)\right)\psi dt+dW^{n}(x)[D_{\rm C} ^{-1}(x)]_{nm}G_{\rm CQ}^{m\alpha}(x)\left(\hat{L}_{\alpha}(x)-\langle\hat{L}_ {\alpha}(x)\rangle\right)\psi. \tag{69}\]
In the special reference frame where \(G_{\rm CQ}^{m\alpha}=\delta^{m\alpha}\), and at the minimum noise threshold where the non-degenerate \(D_{\rm Q},D_{\rm C}\) are real and inverses of each other, we have \(d\xi=D_{\rm C}^{-1}dW=D_{\rm Q}dW\), and the eqs. (49,50) of unravelling reduce to the following ones:
\[dx^{\alpha} = V^{\alpha}(x)dt+2\mathbb{R}\langle\hat{L}_{\alpha}(x)\rangle dt+dW_{ \alpha}(x), \tag{70}\] \[d\psi = -i(\hat{H}(x)-i\hat{H}_{\rm fr}(x))\psi dt+\big{(}\hat{L}_{\alpha }(x)-\langle\hat{L}_{\alpha}(x)\rangle\big{)}\psi D_{\rm Q}^{\alpha\beta}dW_{ \beta}(x).\]
Notice the new notation \(dW_{\alpha}\equiv dW^{n}|_{n=\alpha}\). We recognize the equations of correlated time-continuous measurements (monitoring) of the observables \(\hat{L}_{\alpha}+\mathsf{h.c.}\), where \(x_{t}\) is the measured signal. Here the model is a bit more general because the signal can have an autonomous drift, and the Hamiltonian and the monitored observables can depend on the measured signal.
## V Discussion
In this work we revisited, clarified and completed earlier results on the Markovian master and stochastic equations of hybrid quantum-classical dynamics, paying attention to simplicity and brevity.
The starting concept and the derivation of the HME in sec. II is most similar to the rigorous formulations of Blanchard and Jadczyk [20; 21]. The shift-invariance (12) of the HME and the related nonuniqueness of the unravelling were not recognized before. Using vectorized classical variables is a useful alternative to the sophisticated conditions of monitoring the jump quantum trajectory in [28].
The derivation in sec. III of the diffusive HME (35) from the discrete one (11) is completely new, in an attempt to replace the complicated though perhaps more rigorous procedure of Oppenheim et al. [25]. Our derivation is based on the discrete forerunners of \(\delta(x-y)\) and \(\partial/\partial x^{n}\). These are nontrivial while allowing for an elementary derivation. Importantly, the naive choice \(\hat{L}_{\alpha}(x,y)=\hat{L}_{\alpha}(x)\delta(x-y)\) and \(L_{n}(x,y)=|x\rangle\partial_{n}\langle x|\) instead of (27,28) turns out to be incorrect because off-block-diagonal terms play a role [47]. The naive choice gives the correct structure of the HME but a threshold of minimum noise below the correct one by a factor of \(1/2\).
Sec. IV re-derives the HME (36) which was derived already by Oppenheim et al. [25] and Layton et al. [27]. These works did not mention the covariance of their result, nor did they put it in the usual form of co- and contravariant indices. Our work emphasizes and exploits that the HME is explicitly covariant under local linear (i.e. not necessarily unitary) transformations of the Lindblad generators and under general transformations of the classical coordinates.
Sec. IV.1 presents a pretty compact condition (41) of the threshold for minimum noise. The phenomenon and equation (42) of trade-off between decoherence and diffusion was recognized in [12], and has been extended recently to the general diffusive HME in [25], see eqs. (43-46) using general matrix inverses to cover degenerate matrices of decoherence, diffusion, and backaction; their degeneracies are not exceptions but typical.
Our important new contribution in sec. IV.2 is that the pure state diffusive unravellings of a diffusive HME are always possible and are exactly as ambiguous as the standard unravellings of GKLS master equations. Since the ambiguity coincides with that of the unravellings of pure quantum GKLS master equations, we can fix them in the same way. The choice \(d\xi d\xi=0\) is well-known in the theory of quantum state diffusion, used for covariance in [44] and developed by Gisin and Percival [45; 46] in the GKLS and Ito formalisms. The full multitude of unravellings, applicable to the hybrid dynamics as well, is discussed by Wiseman and the author in [5].
Sec. IV.3 postulates the covariant condition (65) of quantum trajectory monitoring, not used before. The resulting eqs. of monitoring coincide with those in [27]. Their claim that these eqs. are in one-to-one correspondence with the HME is confirmed by covariance when the HME is at the threshold of minimum noise. The option of monitoring is not restricted to the HME of minimum noise; diffusion (not decoherence) can be higher than the threshold.
## VI Conclusion
Rarely stated explicitly, the interaction between quantum and classical systems has no other consistent mathematical model than time-continuous quantum measurement and feedback, where measurement outcomes form the variables of the classical system. This echoes von Neumann's visionary postulate. To obtain a classical variable correlated with an unknown quantum state, the only consistent mathematical model is the von Neumann quantum measurement. Not too surprisingly, the equations of hybrid stochastic unravellings, both discrete and continuous, coincide with the respective equations of time-continuous measurement, provided the classical system is identified with the measurement outcomes, as in the elementary case (2).
The unravellings (statistical interpretation) of hybrid master equations are mathematical equivalents of time-continuous quantum measurements, as mentioned e.g. in [19]. The advantage of hybrid master equations and unravellings over time-continuous quantum measurement is not yet conceptual. The hybrid formalism may be fairly convenient in many applications, be they e.g. foundations or improved semi-classical gravity. No doubt, it may develop its own metaphysics as well.
###### Acknowledgements.
The author is obliged for the extended valuable discussions with Jonathan Oppenheim and Isaac Layton. This research was funded by the Foundational Questions Institute and Fetzer Franklin Fund, a donor-advised fund of the Silicon Valley Community Foundation (Grant No's. FQXi-RFPCPW-2008, FQXi-MGA-2103), the National Research, Development and Innovation Office (Hungary) "Frontline" Research Excellence Program (Grant No. KKP133827), and the John Templeton Foundation (Grant 62099).
## Appendix A Diffusive HME of two-level quantum system
Consider a two-level quantum system coupled to a classical system of a single variable \(x\). In Pauli's formalism, we can write the hybrid density (1) in the general form:
\[\hat{\rho}(x)=\tfrac{1}{2}(1+\hat{s}(x))\rho(x), \tag{10}\]
where \(\hat{s}(x)=s_{1}(x)\hat{\sigma}_{1}+s_{2}(x)\hat{\sigma}_{2}+s_{3}(x)\hat{\sigma}_{3}\) and the length \(s=|\mathbf{s}|\) of the Bloch-vector \(\mathbf{s}=(s_{1},s_{2},s_{3})\) must satisfy \(s\leq 1\). Let the diffusive HME be the following simple one:
\[\frac{d\hat{\rho}(x)}{dt}=\hat{\sigma}_{3}\hat{\rho}(x)\hat{\sigma}_{3}-\hat{ \rho}(x)+G\{\hat{\sigma}_{3},\hat{\rho}^{\prime}(x)\}+\tfrac{1}{2}\hat{\rho}^ {\prime\prime}(x). \tag{11}\]
This corresponds to \(D_{\mathrm{Q}}=D_{\mathrm{C}}=1\), and \(G\) is real. We prove that \(|G|\) cannot be larger than 1.
Substitute eq. (10) and multiply both sides by 2, yielding:
\[\frac{d}{dt}\left((1+\hat{s})\rho\right)=[\hat{\sigma}_{3}(1+\hat{s})\hat{ \sigma}_{3}-(1+\hat{s})]\rho+G\{\hat{\sigma}_{3},[(1+\hat{s})\rho]^{\prime}\}+ \tfrac{1}{2}[(1+\hat{s})\rho]^{{}^{\prime\prime}}. \tag{12}\]
An equivalent form reads:
\[\frac{d\hat{s}}{dt}\rho+\hat{s}\frac{d\rho}{dt}=-2\hat{s}_{\perp}\rho+2G\hat{ \sigma}_{3}\rho^{\prime}+\tfrac{1}{2}\hat{s}\rho^{{}^{\prime\prime}}+\hat{s}^{ \prime}\rho^{\prime}+\tfrac{1}{2}\hat{s}^{{}^{\prime\prime}}\rho. \tag{13}\]
We take the trace of both sides, yielding the equation \(d\rho/dt=+2G(s_{3}\rho)^{\prime}+\tfrac{1}{2}\rho^{{}^{\prime\prime}}\). If we substitute it back, we get
\[\frac{d\hat{s}}{dt}\rho=-2\hat{s}_{\perp}\rho+2G\hat{\sigma}_{3}\rho^{\prime}+ \hat{s}^{\prime}\rho^{\prime}+\tfrac{1}{2}\hat{s}^{{}^{\prime\prime}}\rho-2G \hat{s}(s_{3}\rho)^{\prime}, \tag{14}\]
where \(\hat{s}_{\perp}=s_{1}\hat{\sigma}_{1}+s_{2}\hat{\sigma}_{2}\). We multiply both sides by \(\hat{s}\) and take \(\tfrac{1}{2}\) times their trace again:
\[\frac{1}{2}\frac{ds^{2}}{dt}=-2s_{\perp}^{2}\rho+2Gs_{3}(1-s^{2})\rho^{\prime}+ (s^{2})^{\prime}\rho^{\prime}+\tfrac{1}{2}\mathbf{s}\mathbf{s}^{{}^{\prime \prime}}\rho-2Gs^{2}s_{3}^{\prime}\rho. \tag{15}\]
Suppose \(s^{2}=1\); then \(ds^{2}/dt\) cannot be positive and \((s^{2})^{\prime}\) must vanish. Thus we have the following inequality:
\[0\geq-2s_{\perp}^{2}+\tfrac{1}{2}\mathbf{s}\mathbf{s}^{{}^{\prime\prime}}-2Gs^{2}s_{3}^{\prime}. \tag{16}\]
Now we insert the ansatz \(\mathbf{s}(x)=(\cos(x),0,\sin(x))\) into the inequality, leading to
\[0\geq\frac{1}{2}\frac{ds^{2}}{dt}=-2\cos^{2}(x)-2G\cos(x)-\tfrac{1}{2}, \tag{17}\]
which must be satisfied for all \(x\). This is equivalent to the following upper bound on the backaction coupling:
\[G^{2}\leq 1. \tag{18}\]
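To make the last step explicit (a detail omitted above), write \(u=\cos(x)\in[-1,1]\); the requirement that (17) hold for all \(x\) reads \(2u^{2}+2Gu+\tfrac{1}{2}\geq 0\) for all such \(u\). The quadratic attains its minimum at \(u_{*}=-G/2\), and if \(|G|\leq 2\) this point lies in \([-1,1]\), where
\[2u_{*}^{2}+2Gu_{*}+\tfrac{1}{2}=\tfrac{1}{2}\left(1-G^{2}\right),\]
which is non-negative exactly when \(G^{2}\leq 1\); for \(|G|>2\) the minimum over \([-1,1]\) is \(\tfrac{5}{2}-2|G|<0\), so such values are excluded as well, and the bound (18) follows.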
|
2308.06728 | Optically thick jet base and explanation of edge brightening in AGN jets | The jet cores in blazars are resolved and found to harbour an edge brightened
structure where the jet base appears extended at sides compared to its
propagation axis. This peculiar phenomenon invites various explanations. We
show that the photosphere of an optically thick jet base in Active Galactic
Nuclei (AGNs) is observed edge brightened if the jet Lorentz factor harbours an
angular dependence. The jet assumes a higher Lorentz factor along the jet axis
and decreases following a power law along its polar angle. For an observer near
the jet axis, the jet has a lower optical depth along its propagation axis
compared to off axis regions. Higher optical depths at the outer region makes
the jet photosphere appear to extend to larger radii compared to a deeper
photosphere along its propagation axis. We tackle the problem both analytically
and numerically, confirming the edge brightening through Monte Carlo
simulations. Other than the edge brightening, the outcomes are significant as
they provide a unique tool to determine the jet structure and associated
parameters by their resolved observed cores. The study paves way to explore the
spectral properties of optically thick cores with structured Lorentz factors in
the future. | Mukesh Kumar Vyas, Asaf Pe'er | 2023-08-13T09:07:22Z | http://arxiv.org/abs/2308.06728v1 | # Optically thick jet base and explanation of edge brightening in AGN jets
###### Abstract
The jet cores in blazars are resolved and found to harbour an edge brightened structure where the jet base appears extended at sides compared to its propagation axis. This peculiar phenomenon invites various explanations. We show that the photosphere of an optically thick jet base in Active Galactic Nuclei (AGNs) is observed edge brightened if the jet Lorentz factor harbours an angular dependence. The jet assumes a higher Lorentz factor along the jet axis and decreases following a power law along its polar angle. For an observer near the jet axis, the jet has a lower optical depth along its propagation axis compared to off axis regions. Higher optical depths at the outer region makes the jet photosphere appear to extend to larger radii compared to a deeper photosphere along its propagation axis. We tackle the problem both analytically and numerically, confirming the edge brightening through Monte Carlo simulations. Other than the edge brightening, the outcomes are significant as they provide a unique tool to determine the jet structure and associated parameters by their resolved observed cores. The study paves way to explore the spectral properties of optically thick cores with structured Lorentz factors in the future.
High energy astrophysics; Active Galactic Nucleus; Relativistic jets; Theoretical models
## 1 Introduction
Blazars are a class of Active Galactic Nuclei (AGNs) where the observer is situated near the jet axis. Radio interferometry enables a deeper view of the jet base to reveal their structure at launching (Readhead et al., 1978; Cohen et al., 1979; Pearson et al., 1981; Pearson and Readhead, 1981). The cores of extragalactic AGN jets are launched from inner regions of accretion discs within around 100 Schwarzschild radii (Junor et al., 1999; Doeleman et al., 2012). The jet core appears to have an extended structure towards its edges compared to its centre, a phenomenon called limb brightening (Krichbaum et al., 2006, 2014; Kim et al., 2016; Gabuzda, 2021). Some examples include the Event Horizon Telescope (EHT) image of Centaurus A (Janssen et al., 2021), the M87 core with a wide opening angle (Walker et al., 2016), etc. Understanding this peculiar shape of the jet base is an intriguing problem.
The edge brightening in AGN jets has several explanations. Clausen-Brown et al. (2011) attributed this phenomenon to skewness in synchrotron emission. Due to helical magnetic fields in the jet, the synchrotron emission is anisotropic and more prominent from off-axis regions compared to the jet axis. Hence the outer boundary of the jet appears brighter when the observer is nearly along the axis. A similar argument applies to toroidal magnetic fields. In relativistic magnetohydrodynamic simulations, limb brightening is shown to be caused by the toroidal magnetic fields in jets launched by the Blandford-Znajek mechanism (Kramer and MacDonald, 2021; Takahashi et al., 2018). Additionally, the observed limb brightening at large distances from the jet base is reported to arise due to recollimation shocks, such as seen in the narrow-line Seyfert galaxy 1H 0323+342 (Doi et al., 2018).
The most common explanation of edge brightening is a spine-sheath structure of the jet. In this model, a fast moving jet at the centre is surrounded by a slow moving, wind-like flow. Such two-component jets were predicted theoretically (Sol et al., 1989; Henri and Pelletier, 1991; Laing, 1996; Meier, 2003). Later they were seen in several numerical investigations using general relativistic magnetohydrodynamic (GRMHD) simulations of extragalactic jets (Hawley and Krolik, 2006; McKinney, 2006; Hardee, 2007). It was shown that a jet spine is formed along its propagation axis due to magnetic fields threading the ergosphere. This beamed spine is surrounded by a wider sheath generated and driven by the magnetic fields anchored in the accretion disc. The spine-sheath structure is used to explain limb brightening in AGN jets (Komissarov, 1990a, b) such as
Markarian 501 (Giroletti et al., 2004), M87 (Kim et al., 2018) and some radio galaxies (Swain et al., 1998; Giovannini et al., 2001). The phenomenon is attributed to Doppler deboosting (Komissarov, 1990), where the inner region of the jet propagates faster, leading to relativistic beaming of the photons along the direction of propagation. This results in a smaller photon flux observed by an off-axis observer. The relativistic beaming is weaker in the outer regions of the jet, as they propagate slowly, leading to a brighter limb compared to the jet axis. Hence, in the spine-sheath model the limb brightening is due to relativistic beaming from optically thin plasma, and it predicts a higher observed intensity from the outer regions compared to the inner region.
The spine-sheath model is invoked to explain several other observed phenomena in AGNs including TeV emission from blazars (Ghisellini et al., 2005; Tavecchio and Ghisellini, 2008; Ghisellini and Tavecchio, 2008), efficient neutrino productions in AGNs (Tavecchio et al., 2014; Novikova et al., 2023) and a broadband emission from PKS 1127-145 (Siemiginowska et al., 2007).
However, the conventional explanation of limb brightening by a spine-sheath jet structure is sometimes debated and considered a strained explanation (Gabuzda, 2021). The explanation may not be sufficient to explain limb brightening in general, for several reasons. Jets with persistent direction change do show limb brightening (examples include Mrk 501). When the jet swings, limb brightening is likely to disappear according to the Doppler boosting explanation of the spine-sheath model (see Gabuzda, 2021, for a review). Additionally, sometimes the limb brightening is seen at large distances from the jet base (Doi et al., 2018), which requires an explanation beyond the Doppler beaming of the spine-sheath model.
In recent years, in the images of resolved jet cores, bright surfaces indicating an optically thick emission region are visible near the jet base (Krichbaum et al., 2006, 2014; Kim et al., 2016). Such a surface indicates the transition from the optically thick to the optically thin region, called a photosphere. The location of this photosphere is close to the central black hole (BH) when viewed close to the jet axis and extends to larger radii at larger polar angles, thereby having a concave shape (Boccardi et al., 2017; Janssen et al., 2021). Motivated by these observations of the photospheric surface, here we provide an alternative explanation of the limb brightening with an angle dependent Lorentz factor profile for the jet. In our explanation, it is caused by the emission from the optically thick region of an angle dependent jet.
The optically thick region in AGN jets is due to synchrotron self absorption (Blandford and Konigl, 1979; Ma et al., 2008; Gabuzda et al., 2018; Banasinski and Bednarek, 2022), and, as we show here, due to a high particle density for Compton scattering as well. At parsec scales, the electron density in AGN jets is around a few hundred to a few thousand particles per cm\({}^{3}\). This value is inferred by various observational techniques such as core shift analysis (Lobanov, 1998), Faraday rotation (Lisakov et al., 2021) and spectral analysis of AGN jets (Lee et al., 2016). To estimate the density near the jet base, we can assume an inverse square variation of the electron density with distance (\(n\propto r^{-2}\)). It is not only a natural representation of the density with distance, it is an observational requirement as well (Konigl, 1981; Lobanov, 1998; Lisakov et al., 2021). Thus, calculating the particle density close to the jet base (a few Schwarzschild radii) we show that the jet base should be optically thick for Compton scattering. Furthermore, the evidence of high mass loading in jets (Qiu et al., 2021) suggests the possibility of treating the jet core as optically thick. As we show below, for plausible measured densities, the photospheric radius in many AGN jets extends much beyond the Schwarzschild radius.
For an on-axis observer, a jet having a decreasing Lorentz factor with polar angle leads to an angle dependent optical depth along its angular extent. Due to the higher Lorentz factor along the jet axis, the jet stem is optically thin and the photosphere appears deeper compared to the optically thicker region off axis (Abramowicz et al., 1991; Pe'er, 2008; Vyas and Pe'er, 2023). This makes the photosphere extended at larger polar angles compared to the jet axis, thus creating limb brightening in jets.
We develop a semi-analytic model to estimate the photosphere in this framework. Additionally, we perform Monte Carlo simulations of photon scattering at the jet base and conclude that a jet with a structured Lorentz factor profile produces an edge brightened photosphere for all observers at small observing angles. Alternatively, the observed images of these jets allow us to infer the jet structure and pave the way to investigating other spectral features of these jets in light of their photospheric emission. In section 2 below, we formulate the theoretical analysis for estimating the jet photosphere as a function of coordinates. In section 3 we discuss the results obtained and conclude the study in section 4.
## 2 Photosphere of a Relativistic Fluid with Angle Dependent Lorentz factor (\(\Gamma\))
### Existence of a Compton photosphere at AGN jet base
We consider the AGN jets to be optically thick for Compton scattering at the base. The optical depth for scattering of a photon propagating a distance \(dr\) through a fluid element with density \(n\) is \(d\tau=\sigma ndr\). Here, it is safe to take the Thomson cross section, namely \(\sigma=\sigma_{T}\). For an approximately constant outflow velocity and conical jets, one can assume an inverse square law for the particle density, \(n(r)=n_{0}(r_{0}/r)^{2}\). Integrating the optical depth from the jet base at \(\approx 2r_{s}\) (where \(r_{s}\) is the Schwarzschild radius of the central BH) to the observer at infinity, the required condition for optically thick plasma is
\[\tau_{b}=\int d\tau\sim 10^{3}\left(\frac{10^{6}}{m_{\rm BH}}\right)\times\left( \frac{n_{0}}{100}\right)\times\left(\frac{r_{0}}{1{\rm parsec}}\right)^{2}>1. \tag{1}\]
Here, \(m_{BH}\) is the black hole mass in units of solar mass, and \(n_{0}\) is given in units of particles/cm\({}^{3}\). Using observationally estimated values of \(n_{0}\), \(r_{0}\) and \(m_{\rm BH}\), one can assess the optical depth in AGN jet bases. For example, in the quasar 3C 273, having BH mass \(m_{\rm BH}=6.59\times 10^{9}\) (Paltani & Turler, 2005), the estimated particle density at a distance of \(7\times 10^{20}\) cm is \(125\) cm\({}^{-3}\) (Lisakov et al., 2021). The optical depth at the jet base is therefore \(\tau_{b}=10^{4}\). A similar analysis for the quasar 3C 207 gives \(\tau_{b}=7.87\) [density and distance are estimated by Sambruna et al. (2004), while the black hole mass is estimated in Tang et al. (2012)]. For 3C 345 it turns out to be 5.63 [density and distance are taken from Sambruna et al. (2004), while the black hole mass is estimated in Uchiyama et al. (2007)]. Core shift analysis gives particle densities at around parsec scales of \(1500\) cm\({}^{-3}\) (Lobanov, 1998) for various sources. This leads to a typical optical depth at the jet base in the range \(\tau_{b}\sim 3-3\times 10^{3}\) for black hole masses \(m_{\rm BH}=10^{6}-10^{9}\). However, it is not the case with all the AGN jets, and sometimes the core turns out to be optically thin. One such example is PKS \(1136-135\), for which the estimated optical depth at the jet base is \(0.21\) [\(n_{0}\) and \(r_{0}\) estimated by Sambruna et al. (2004) and the black hole mass taken from Uchiyama et al. (2007)].
It is important to mention that the density estimates in an AGN jet have a large uncertainty associated with the applied method. For example, the density estimate in the M87 jet using the rotation measure at a distance of \(r_{0}=156\) parsec is \(n_{0}=1.6\times 10^{-3}\) cm\({}^{-3}\), giving \(\tau_{b}=0.06\), while from X-ray energy spectral analysis the density estimates come out to be four orders of magnitude greater, leading to \(\tau_{b}\sim 600\) (Osone, 2023). In these estimates, the black hole mass of M87 is \(6.5\times 10^{9}\) solar mass, taken from Event Horizon Telescope Collaboration et al. (2019). In the estimates above, the typical calculated particle densities close to the jet base, at distances of a few Schwarzschild radii, reach up to \(10^{10-12}\) cm\({}^{-3}\). Such high densities in AGN atmospheres are reported by various authors. For example, the density inside the jet in 3C 84 is estimated at \(10^{3}-10^{5}\) cm\({}^{-3}\) at \(0.07-0.14\) parsec (Kino et al., 2021; Nagai et al., 2017; Kino et al., 2018). The intermediate line regions near AGN jets have densities \(10^{5}\) cm\({}^{-3}\) to \(10^{11.5}\) cm\({}^{-3}\) at characteristic distances of 0.1 parsec (Adhikari et al., 2017, 2016). Particle densities in the region above AGN accretion discs are shown to be \(\approx 10^{15}\) cm\({}^{-3}\) (Adhikari et al., 2016). Hence we conclude that optically thick cores in AGN jets exist in many sources, and therefore the photospheric properties of such cores need to be explored.
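As an illustration (not part of the original analysis), the order-of-magnitude estimates of Equation (1) are easy to reproduce numerically; the sketch below assumes a pure Thomson cross section and the \(n\propto r^{-2}\) density profile, and the example values are the ones quoted above.

```python
import numpy as np

SIGMA_T = 6.652e-25          # Thomson cross section [cm^2]
R_S_SUN = 2.95e5             # Schwarzschild radius of 1 M_sun [cm]
PC = 3.086e18                # parsec [cm]

def tau_base(m_bh, n0, r0_cm, r_base=None):
    """Optical depth from the jet base to infinity for n(r) = n0 (r0/r)^2.

    m_bh   : black-hole mass in solar masses
    n0     : electron density [cm^-3] measured at radius r0_cm
    r0_cm  : reference radius [cm]
    r_base : inner radius of the integration; defaults to 2 r_s
    """
    if r_base is None:
        r_base = 2.0 * R_S_SUN * m_bh
    # integral of sigma_T * n0 * (r0/r)^2 dr from r_base to infinity
    return SIGMA_T * n0 * r0_cm**2 / r_base

# 3C 273: m_BH = 6.59e9 M_sun, n0 = 125 cm^-3 at 7e20 cm (values quoted in the text)
print(f"tau_b(3C 273)  ~ {tau_base(6.59e9, 125.0, 7e20):.0f}")   # ~1e4
# generic case of Eq. (1): m_BH = 1e6, n0 = 100 cm^-3 at 1 pc
print(f"tau_b(generic) ~ {tau_base(1e6, 100.0, PC):.0f}")        # ~1e3
```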
### Jet structure
The existence of a spine-sheath structure in AGN jets has both theoretical and observational basis. Here we consider an angle-dependent jet velocity profile, \(\Gamma=\Gamma(\theta)\). In order to avoid abrupt change in Lorentz factor between spine and sheath, we consider a smooth transition following a power law decay with polar angle \(\theta\), given as
\[\Gamma(\theta)=\Gamma_{\rm min}+\frac{\Gamma_{0}-\Gamma_{\rm min}}{\sqrt{\left(\frac{\theta}{\theta_{\rm j}}\right)^{2p}+1}}. \tag{2}\]
Here \(\Gamma_{0}\) and \(\Gamma_{\rm min}\) are the maximum and minimum values of the jet Lorentz factor, assigned to the spine and sheath regions respectively. The angle \(\theta_{\rm j}\) is a constant, separating the inner, faster jet core from the outer, slower sheath. The parameter \(p\) is a jet profile index that determines the steepness of the decrease in \(\Gamma\) with \(\theta\). This profile implies an inner jet region (at angles \(\theta\ll\theta_{\rm j}\)) having constant Lorentz factor \(\Gamma=\Gamma_{0}\), while the outer jet region, at angles larger than \(\theta=\theta_{\rm e}=\theta_{\rm j}\Gamma_{0}^{1/p}\), has \(\Gamma=\Gamma_{\rm min}\). Within the region \(\theta_{\rm j}-\theta_{\rm e}\), the Lorentz factor decays following a power law \(\Gamma\propto\theta^{-p}\). This profile is analogous to the evolution of \(\Gamma\) obtained by McKinney (2006) in their simulations [see their Figure 9, first panel]. This continuous transition of the Lorentz factor between the spine and sheath arises from mutual interaction and mixing of particles in these regions. However, the conclusions of our study are independent of the form of the transition and are valid for a step function transition as well (obtained here for \(p\rightarrow\infty\)).
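For concreteness, a minimal sketch of the profile in Equation (2); \(\Gamma_{\rm min}\) is an assumed illustrative value here, since the text does not fix it, while the remaining parameters match those used in Section 3.

```python
import numpy as np

def gamma_profile(theta, gamma0=10.0, gamma_min=1.2, theta_j=0.1, p=1.2):
    """Angle-dependent Lorentz factor of Eq. (2): fast spine, power-law decay, slow sheath."""
    theta = np.asarray(theta, dtype=float)
    return gamma_min + (gamma0 - gamma_min) / np.sqrt((theta / theta_j) ** (2 * p) + 1.0)

angles = np.array([0.0, 0.05, 0.1, 0.2, 0.4])          # polar angles [rad]
print(np.round(gamma_profile(angles), 2))               # ~gamma0 on the axis, decreasing towards gamma_min off axis
```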
### Theoretical model : Estimation of optical depths in a structured jet
Consider photons that are emitted deep inside the jet. These photons escape once they reach the photospheric radius \(R_{\rm ph}\) where the optical depth for scattering between \(R_{\rm ph}\) and infinity equals unity. In such a case, the shape of \(R_{\rm ph}\) as a function of the polar coordinates \((\theta,\phi)\) determines the appearance of the jet base in the observations. The observed shape is sensitive to the given observer's location, specified here by the polar coordinates \(\theta_{\rm o}\) and \(\phi_{\rm o}\). We calculate here the angular dependence of \(R_{\rm ph}\) for a given observer's location. We calculate \(R_{\rm ph}\) both analytically and through numerical simulations, as follows.
Define a cylindrical coordinate system centered at the plasma expansion center (base of the jet) and assume that the observer is located at plus infinity on the \(z\)-axis. Since the observer is, in general, off the jet axis, the jet outflow is not symmetric in this coordinate system. Consider a photon that was emitted at point \(z_{\rm min}\) along the \(z\) axis and at radial distance \(r_{\rm min}\) from it, namely at distance \(r=\left(r_{\rm min}^{2}+z_{\rm min}^{2}\right)^{1/2}\) from the center. Assume that this photon propagates towards the observer (along the \(+z\) direction). The optical depth as measured
along the ray traveling in the \(+z\) direction and reaching the observer is (Abramowicz et al., 1991; Pe'er, 2008),
\[\tau(r_{\rm min},z_{\rm min})=\int_{z_{\rm min}}^{\infty}n^{\prime}\sigma_{\rm T }\Gamma[1-\beta\cos\theta_{\rm f}]dz. \tag{3}\]
Here, \(n^{\prime}\) is the electron number density in the local comoving frame, \(\sigma_{\rm T}\) is the Thomson scattering cross section, \(\beta\) is the local fluid velocity in units of the light speed \(c\), and \(\theta_{\rm f}\) is the angle between the velocity vector of the local fluid element and the direction towards the observer.
The comoving number density of the plasma moving with outflow rate \(\dot{M}_{\rm j}\) is defined as (Lundman et al., 2013),
\[n^{\prime}(r,\theta)=\frac{1}{m_{p}c\beta\Gamma r^{2}}\frac{d\dot{M}_{\rm j}}{ d\Omega}. \tag{4}\]
Here, \(m_{p}\) is proton mass and \(d\dot{M}_{\rm j}/d\Omega\) is differential mass outflow rate in the jet. This differential mass outflow rate is connected to the (differential) jet luminosity (\(dL_{\rm j}/d\Omega\)) as
\[\frac{d\dot{M}_{\rm j}}{d\Omega}=\frac{1}{c^{2}\Gamma}\frac{dL_{\rm j}}{d \Omega}. \tag{5}\]
It has been observed that the jet luminosity is not isotropic and has an angle dependence. In various observations of extragalactic jets, \(L_{j}\propto\theta^{-2}\) is found to be a typical behaviour of the jet luminosity (Lipunov et al., 2001; Zhang and Meszaros, 2002; Rossi et al., 2002; Salafia and Ghirlanda, 2022). In the context of our jet model, we assume that the jet luminosity is constant at small angles, \(\theta\leq\theta_{\rm j}\), and decreases following an inverse square law at larger angles. The angle-independent luminosity at small angles can be understood since the jet has a constant Lorentz factor there, and hence the flow is radial and steady. The luminosity can therefore be approximated by
\[\frac{dL_{\rm j}}{d\Omega}=\frac{L_{0}}{2\pi\left[\left(\frac{\theta}{\theta_{ \rm j}}\right)^{2}+1\right]}=\frac{L_{0}}{2\pi f(\theta)}. \tag{6}\]
Here, \(f(\theta)\) is
\[f(\theta)=\left(\frac{\theta}{\theta_{\rm j}}\right)^{2}+1. \tag{7}\]
Hence the differential outflow rate in the jet is
\[\frac{d\dot{M}_{\rm j}}{d\Omega}=\frac{L_{0}}{2\pi c^{2}\Gamma f(\theta)}. \tag{8}\]
The total mass outflow rate can be obtained by integrating over the solid angle \(d\Omega\). One can express it in terms of the accretion efficiency in the disk, \(\eta=L_{\rm a}/\dot{M}_{\rm a}c^{2}\), where \(\dot{M}_{\rm a}\) is the disk accretion rate and \(L_{\rm a}\) is the accretion luminosity, using
\[\dot{M}_{\rm j}=\frac{L_{0}}{2\pi c^{2}}\int\frac{d\Omega}{\Gamma f(\theta)}= \frac{m_{0}L_{\rm a}}{\eta c^{2}}. \tag{9}\]
Here, \(m_{0}=\dot{M}_{\rm j}/\dot{M}_{\rm a}\) is the jet mass loading parameter, typically found between 200-500 (Qiu et al., 2021). As for the accretion efficiency \(\eta\), it is found in the range 0.016 - 0.14 (Bian and Zhao, 2003). Theoretically, it is estimated to be 0.056 for a non-rotating black hole, while it can be as high as 0.32 in the case of a maximally rotating black hole (Laor and Netzer, 1989).
The luminosity along the jet axis \(L_{0}\) can be expressed in terms of the disk luminosity, \(L_{\rm a}\) as
\[L_{0}=\frac{2\pi m_{0}L_{\rm a}}{\eta\mathcal{I}}, \tag{10}\]
with \(\mathcal{I}\) defined as \(\mathcal{I}\equiv\int\frac{d\Omega}{\Gamma f(\theta)}\). Using the expression of \(L_{0}\), Equation 8 becomes
\[\frac{d\dot{M}_{\rm j}}{d\Omega}=\frac{m_{0}L_{\rm a}}{\eta c^{2}\Gamma}\frac{ 1}{f(\theta)\mathcal{I}}, \tag{11}\]
From Equation 4, the comoving density is
\[n^{\prime}=\frac{m_{0}L_{\rm a}}{\eta m_{p}c^{3}\beta\Gamma^{2}r^{2}f(\theta)\mathcal{I}}. \tag{12}\]
Using this expression in Equation 3, the optical depth can be expressed as
\[\tau=\int_{z_{\rm min}}^{\infty}\frac{m_{0}L_{\rm a}\sigma_{T}[1-\beta\cos \theta_{\rm f}]}{\eta m_{p}c^{3}\beta\Gamma r^{2}f(\theta)\mathcal{I}}dz. \tag{13}\]
If the density is \(n^{\prime}=n_{0}^{\prime}=n_{0}/\Gamma_{0}\) given at distance \(r=r_{0}\) at \(\theta=\theta_{\rm j}\), then from Equation 12,
\[\frac{m_{0}L_{\rm a}}{\eta m_{p}c^{3}\mathcal{I}}=2n_{0}r_{0}^{2}\beta_{0} \Gamma_{0}. \tag{14}\]
Here, \(\beta_{0}\) is bulk jet velocity along the jet axis. Using Equation 14 in 13,
\[\tau=\int_{z_{\rm min}}^{\infty}\frac{2n_{0}r_{0}^{2}\beta_{0}\Gamma_{0} \sigma_{T}[1-\beta\cos\theta_{\rm f}]}{\beta\Gamma r^{2}f(\theta)}dz. \tag{15}\]
The observer, situated at polar angle \(\theta_{\rm o}\), will see the surface of the plasma when the optical depth along the observer's direction equals unity. Considering the fact that \(r\sin\theta_{\rm f}\) is constant along the ray, one can convert the integration from \(dz\) to \(d\theta_{\rm f}\) using \(dz=-r^{2}d\theta_{\rm f}/(r\sin\theta_{\rm f})\), where \(\theta_{\rm f}\) is the angle between the local propagation direction of the flow and the direction towards the observer at \(\theta_{\rm o}\). This gives
\[\tau=\int_{0}^{\theta_{\rm o}}\frac{2n_{0}r_{0}^{2}\beta_{0}\Gamma_{0}\sigma_{ T}[1-\beta\cos\theta_{\rm f}]}{\beta\Gamma r\sin\theta_{\rm f}f(\theta)}d\theta_{ \rm f}. \tag{16}\]
The photons escape to infinity from the surface at which \(\tau=1\), and that surface is visible to the observer. In general, \(\theta_{\rm f}\) varies along the photon path and is thus \(\theta_{\rm f}\neq\theta_{\rm o}+\theta_{\rm j}\). The surface of the photosphere therefore depends on both the polar and azimuthal location of the observer (_i.e.,_ \(\theta_{\rm o}\) and \(\phi_{\rm o}\)). Equation 16 can be solved to determine the photospheric radius of unit optical depth along the direction of the observer \(\theta_{\rm o}\). In general, it is a function of the jet's angular coordinates [_i.e.,_ \(R_{\rm ph}=R_{\rm ph}(\theta,\phi)\)]. The unique photospheric appearance is determined by specifying the jet parameters according to Equation 2 and the given particle density \(n^{\prime}=n_{0}/\Gamma_{0}\) at radius \(r=r_{0}\). From various observational estimates, the latter is found to be in the range of one to a few \(\times 100\) cm\({}^{-3}\) at around 1 parsec (Konigl, 1981; Lobanov, 1998; Lee et al., 2016; Lisakov et al., 2021).
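The following minimal sketch illustrates this calculation numerically for the simplest case of an on-axis observer (\(\theta_{\rm o}=0\)), for which the angle \(\theta_{\rm f}\) between the local radial flow and the line of sight equals the polar angle \(\theta\); it integrates Equation (15) along rays of fixed impact parameter and solves \(\tau=1\) for the last-scattering height. The value of \(\Gamma_{\rm min}\) is an assumption, and the code is only an illustration of the procedure, not the actual code used in this work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

SIGMA_T, PC = 6.652e-25, 3.086e18
N0, R0 = 80.0, PC                                   # n0 at r0 = 1 pc (Sec. 3)
G0, GMIN, THETA_J, P = 10.0, 1.2, 0.1, 1.2          # jet profile; GMIN is assumed

def gamma(th):
    return GMIN + (G0 - GMIN) / np.sqrt((th / THETA_J) ** (2 * P) + 1.0)

def beta(th):
    return np.sqrt(1.0 - 1.0 / gamma(th) ** 2)

def dtau_dz(z, b):
    """Integrand of Eq. (15) along a ray parallel to the jet axis at impact parameter b."""
    r = np.hypot(b, z)
    th = np.arccos(np.clip(z / r, -1.0, 1.0))       # polar angle of the fluid element
    f = (th / THETA_J) ** 2 + 1.0
    # on-axis observer: the angle between the radial flow and the +z line of sight is th
    return (2 * N0 * R0**2 * beta(0.0) * gamma(0.0) * SIGMA_T
            * (1.0 - beta(th) * np.cos(th)) / (beta(th) * gamma(th) * r**2 * f))

def z_photosphere(b):
    """Height at which the optical depth to infinity, Eq. (15), drops to unity."""
    tau = lambda z_min: quad(dtau_dz, z_min, np.inf, args=(b,), limit=200)[0]
    return brentq(lambda z: tau(z) - 1.0, 1e9, 1e18)

for b in (1e11, 1e12, 1e13):                         # impact parameters [cm]
    print(f"b = {b:.0e} cm  ->  z_ph ~ {z_photosphere(b):.2e} cm")
```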
### Numerical simulations
We carry out Monte Carlo simulations to confirm the predictions of the above theoretical model. In the numerical code, approximately 6 million monoenergetic photons with energy \(10^{-10}\) (in units of the electron rest mass energy) are injected deep inside the jet where the matter is optically thick. Initially the photons have random directions, and they propagate while Compton scattering off the electrons in the jet.
Given a photon's four vector, the code computes the location of the next scattering event in the following way. Initially, the photon's location is given in a Cartesian coordinate system. As a first step, the photon location is transformed into a cylindrical coordinate system (\(r_{c},\theta_{c},z_{c}\)). This coordinate system is chosen such that the centre of the black hole is at \(r_{c}=z_{c}=0\), and the \(z_{c}\) axis is along the propagation direction of the photon. In these coordinates, the photon's initial location is described by \((r_{\rm min},z_{\rm min})\). 1 The photon's initial radial distance from the center of the black hole is \(r=\sqrt{r_{\rm min}^{2}+z_{\rm min}^{2}}\) (see Figure 1).
Footnote 1: As the photon propagates along the \(z_{c}\) direction, \(r_{\rm min}\) and \(\theta_{c}\) are not changed, and we omit discussing \(\theta_{c}\) for clarity.
The photon propagates along the \(z_{c}\) axis (dashed line in Figure 1) until it scatters with an electron \(e_{2}\) located at \(z_{\rm max}\). The photon travel distance \(\Delta z=z_{\rm max}-z_{\rm min}\) is determined as follows. The probability of a photon to travel a distance \(\Delta z\), along which the optical depth is \(\tau\), without being scattered is \(P_{sc.}(\tau)=e^{-\tau}\). Therefore, the optical depth \(\tau\) is drawn from a logarithmic distribution. Then the code calculates \(\delta\tau\) along the photon trajectory until reaching the randomly selected optical depth. If the randomly selected optical depth is larger than the optical depth for escaping to infinity, the photon is assumed to escape; otherwise, the scattering position along the \(z_{c}\) axis is chosen such that the optical depth along the travel distance \(\Delta z\) equals \(\tau\). Finally, this scattering location is transformed back to the original Cartesian coordinate system.
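A minimal sketch of this sampling step (illustrative only, not the actual simulation code), assuming the cumulative optical depth along the \(z_{c}\) ray has been tabulated on a grid:

```python
import numpy as np

rng = np.random.default_rng(42)

def next_scattering_z(z_grid, tau_of_z):
    """Draw the next scattering height along the +z_c ray, or None if the photon escapes.

    z_grid starts at the photon's current position; tau_of_z[i] is the optical depth
    accumulated from z_grid[0] up to z_grid[i] (monotonically increasing, with
    tau_of_z[-1] approximating the depth to infinity).
    """
    tau_draw = -np.log(rng.random())        # P(no scattering over depth tau) = exp(-tau)
    if tau_draw >= tau_of_z[-1]:
        return None                          # drawn depth exceeds the depth to infinity: escape
    # invert the accumulated depth by interpolation to get the scattering height
    return float(np.interp(tau_draw, tau_of_z, z_grid))

# toy example: homogeneous medium with d(tau)/dz = 1e-13 cm^-1 (illustrative numbers)
z_grid = np.linspace(0.0, 1e14, 10_000)
print(next_scattering_z(z_grid, 1e-13 * z_grid))
```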
For the scattering process, a cold electron that moves with the bulk Lorentz factor \(\Gamma=\Gamma(\theta)\), as given by Equation 2, is considered. The scattering occurs at a random angle in the electron's rest frame, considering the full, angle-dependent Klein-Nishina cross section for the process. For a complete description of the scattering calculations, see Pe'er (2008).
A photon scatters multiple times until it escapes the system, once it reaches the location where the local optical depth fulfills the escape criterion. The escape direction of each photon is stored and contributes to the observed flux as seen by an observer along the photon escape direction. The escape location marks the position of the photospheric radius perceptible to that observer. This implies that for a photon at the photosphere, \(z_{\rm max}=\infty\), as the photon has its last scattering there before it escapes to infinity.
Thus, a large number of photons map the entire photospheric surface for all observers. The photospheric surface as seen by observers located in different directions depends on the viewing angle, and is calculated by binning the data into certain observer's locations.
Further details of the simulation code are found in Pe'er (2008); Lundman et al. (2013); Vyas et al. (2021).
## 3 Results
As an example, we consider an environment characterized by a particle density \(n_{0}=80\) cm\({}^{-3}\) at \(r_{0}=1\) parsec. In Figure 2, we plot the photospheric radius \(R_{\rm ph}\) versus the horizontal distance from the jet axis, \(r_{c}=\sqrt{x^{2}+y^{2}}\). The observer is assumed to be along the jet axis (polar angle \(\theta_{\rm o}=0\)). The estimated values of \(R_{\rm ph}\) from the analytic calculations (Equation 16) are shown by the blue curve, while the inferred values from the Monte Carlo simulations are overplotted with black dots. The edge brightening of the jet due to relativistic effects on the apparent optical depth is perceptible, as the photospheric radius extends up to larger distances for off-axis regions compared to the jet axis. Considering a black hole mass equivalent to 10 million solar masses (with Schwarzschild radius \(\sim 3\times 10^{12}\) cm), the photosphere is well above the horizon, extending up to one order of magnitude above it. However, the extent of this magnitude primarily depends upon the considered value of \(n_{0}\), and the photosphere could extend much further for larger densities.
To show the appearance of the jet to an observer, analytic results are shown in Figure 3 (upper panel) where we project the photospheric radius \(R_{\rm ph}\) as measured in Cartesian coordinates on the \(x-y\) plane. Here, the observer is assumed on axis, namely situated along the \(z\) axis (at polar angle \(\theta_{\rm o}=0\)). This makes the surface symmetric around the \(x\) and \(y\) coordinates. In producing this plot, we have assumed a jet profile
as given in Equation 2, with parameters \(\Gamma_{0}=10\), \(\theta_{\rm j}=0.1\) rad and profile index \(p=1.2\).
The important outcome is the increase of \(R_{\rm ph}\) at the jet's edges up to several times compared to the jet axis (at \(x=y=0\)). The blue region at the centre of the jet compared to yellow region at outskirts signifies a deeper photosphere at the centre. In the lower panel, we have plotted the photospheric radius \(R_{\rm ph}\) as a function of the polar angle \(\theta\) (black solid curve). In the inner regions of the jet, the photospheric radius \(R_{\rm ph}\) is well approximated as a parabolic function of the polar angle, obeying the relation \(R_{\rm ph}=a\theta^{2}+b\) where \(a=5\times 10^{13}\) cm and \(b=2.55\times 10^{12}\) cm for the jet profile presented here (blue dashed curve).
In the upper panel of Figure 4, we show the photospheric radius as seen by an off-axis observer. The photospheric radius \(R_{\rm ph}\) is plotted on the \(x-y\) plane for jet parameters \(\Gamma_{0}=15\), \(\theta_{\rm j}=0.1\) rad, \(p=1.2\). The observer is located at \(\theta_{\rm o}=0.30\) rad
Figure 1: Geometry of a photon scattering (dashed line) between the two electrons \(e_{1}\) and \(e_{2}\). The first electron is situated at location \(z_{\rm min}\) and the second electron at \(z_{\rm max}\), in cylindrical coordinates chosen such that the z axis is along the photon path. At the photospheric radius, \(z_{\rm max}=\infty\).
Figure 3: Upper Panel: \(R_{\rm ph}\) as a function of \(x\) and \(y\) coordinates (in cm). Lower panel: \(R_{\rm ph}\) as a function of the polar angle \(\theta\). The color gradient shows the luminosity of the photosphere decreasing from blue to yellow shade. Chosen parameters \(\Gamma_{0}=10\), \(\theta_{\rm j}=0.1\) rad, \(p=1.2\) for the observerβs location at \(\theta_{\rm o}=0.0\) rad. \(n_{0}=80\) cm\({}^{-3}\) at \(r_{0}=1\) parsec
and at an azimuthal angle \(\phi_{\rm o}=\pi/2\), making the surface symmetric around the \(x\) axis, while asymmetry appears along the \(y\) axis. To indicate the observed asymmetry of the photospheric radius and observed flux for such an off-axis observer, we plot the photospheric radius \(R_{\rm ph}\) along the \(y\) axis in the middle panel of Figure 4 for \(x=0\) (black dots). The color bar shows the simulated photon flux \(dN/dA\), where \(dA(y)=dxdy\) is the differential cross section shown along the \(y\) axis. The base of the photospheric surface (the minimum of the last scattering surface) shifts towards the positive \(y\) axis. The bright spot peaks near the dip (the smallest photospheric radius, at \(5\times 10^{12}\) cm). It asymmetrically extends towards both sides. The arm on the left side (further away from the observer) appears twice as long compared to the right side, which is purely an effect of the observer's location. Such asymmetry of the limb brightened arms is widely observed in off-axis jets (Walker et al., 2016). The shorter arm appears brighter compared to the longer one. In the bottom panel, we show the variation of the simulated photon flux along the \(y\) coordinate at \(x=0\) for different choices of the observer's location, at \(\theta_{\rm o}=0.1,0.2\) and \(0.3\) rad. The asymmetry in the flux decay on both sides increases as the observer moves further away from the jet axis. Subsequently, the peak in the flux also shifts further along the \(y\) axis. Further, the implication of this asymmetry in the flux is that by looking at the ratio of the observed signal at both sides, one can infer the viewing angle of the jet.
## 4 Conclusions
In this letter, we have studied the photospheric appearance of a relativistic AGN jet with a polar angle dependent Lorentz factor profile. Using density estimates of AGN jets, we argue that the jet cores near their launching sites, up to a few to a few tens of Schwarzschild radii, can generally be optically thick for Compton scattering. Photons undergo multiple scatterings with electrons in this region before they escape from it to infinity. The last scattering surface represents the innermost region that can be observed.
Here we considered an angle dependent Lorentz factor profile, with the jet propagating faster in the middle compared to larger polar angles. We have further considered a cold plasma. However, the last scattering surface, or the structure of the photosphere obtained in this work, remains unaffected if the plasma were hot and relativistic. Furthermore, as discussed in section 2.1, the optical depths at the jet base are around 10, and hence the photons decouple from the plasma after encountering only a few scatterings before escaping. Thus, complete thermalization does not have time to occur and most of the photon population remains at low energies, enabling the resolved cores to be observed at radio frequencies.
We show that for an observer situated near the jet axis, the jet core appears relatively empty in the middle with a
Figure 4: Top panel: Analytic estimation of \(R_{\rm ph}\) as a function of \(x\) and \(y\) coordinates for \(\theta_{\rm o}=0.3\) rad; Middle panel: for \(\theta_{\rm o}=0.3\) rad, we plot the variation of \(R_{\rm ph}\) along the \(y\) axis for \(x=0\) with black spheres. The colored plot is the normalized photon flux \(dN/dA\) calculated by simulations at a given location \(x=0,y\). Bottom panel: variation of the simulated photon flux along the \(y\) coordinate at \(x=0\) for different observing angles \(\theta_{\rm o}=0.1\) rad (black), \(\theta_{\rm o}=0.2\) rad (blue) and \(\theta_{\rm o}=0.3\) rad (red) curves. For larger viewing angles the asymmetry is larger. Chosen parameters in all the panels are \(\Gamma_{0}=10\), \(\theta_{\rm j}=0.1\) rad, \(p=1.2\). \(n_{0}=80\) cm\({}^{-3}\) at \(r_{0}=1\) parsec.
deeper photosphere, while it appears extended at the edges. The reason for this appearance is that the optical depth for scattering is smaller for photons emitted along the jet axis, compared to photons emitted from larger angles. As a result, the last scattering location of the photons is closer to the jet base along the jet axis.
Such an appearance is consistent with various observations of resolved jet cores. Hence, we provide a natural explanation of the observed edge brightening feature of AGN jet cores. As the appearance of a jet base depends upon the jet parameters, the analysis provides a tool to infer the jet structure in AGNs directly from their observations.
The conventional explanation of the limb brightening by the Doppler deboosting model is criticized based on the fact that it cannot explain persisting limb brightening in a swinging jet. Thus, when the jet points towards the observer, the limb brightening should vanish, which is in contrast with observations of Mrk 501 (Gabuzda, 2021). From Figure 3 we show that our model predicts limb brightening for an on-axis jet as well. Thus it is a viable mechanism that explains different aspects of the limb brightening phenomenon that are difficult to explain otherwise.
The relative flux variation on both sides of the jet depends upon the observer's location, as shown in the bottom panel of Figure 4. This enables the determination of the observer's location from the observed asymmetry in the flux variation. Additionally, the flux decays by two to three orders of magnitude between the centre of the jet and the off-axis regions, or the limb. This is consistent with the typically observed flux variation in the resolved cores of AGN jets (see Figure 1 of Hada et al., 2013).
The analysis presented here paves a way for considering the presence of a photospheric component in blazars. The existence of an observed photospheric component may have additional, exciting implications on the observed AGN properties, such as the spectra and polarization. These will be explored in future works.
## Acknowledgments
For this project, we acknowledge the support by European Union (EU) via ERC consolidator grant 773062 (O.M.J.).
|
2306.06284 | Everybody Compose: Deep Beats To Music | This project presents a deep learning approach to generate monophonic
melodies based on input beats, allowing even amateurs to create their own music
compositions. Three effective methods - LSTM with Full Attention, LSTM with
Local Attention, and Transformer with Relative Position Representation - are
proposed for this novel task, providing great variation, harmony, and structure
in the generated music. This project allows anyone to compose their own music
by tapping their keyboards or ``recoloring'' beat sequences from existing
works. | Conghao Shen, Violet Z. Yao, Yixin Liu | 2023-06-09T22:24:05Z | http://arxiv.org/abs/2306.06284v1 | # Everybody Compose: Deep Beats To Music
###### Abstract.
This project presents a deep learning approach to generate monophonic melodies based on input beats, allowing even amateurs to create their own music compositions. Three effective methods - LSTM with Full Attention, LSTM with Local Attention, and Transformer with Relative Position Representation - are proposed for this novel task, providing great variation, harmony, and structure in the generated music. This project allows anyone to compose their own music by tapping their keyboards or "recoloring" beat sequences from existing works.
neural networks, music generation
contains many musical styles, which is enough for our model to generalize well. Figure 1 shows the distribution of note pitches in the dataset. The note pitches follow a normal distribution with a mean around 78, which resembles the note pitch distribution of a typical classical piano performance.
**Data Preprocessing.** A MIDI file consists of a sequence of MIDI events, and in piano performance, each event can be one of four types: NOTE_START, NOTE_END, REST, VELOCITY_SHIFT. The first two control the position and relative length of notes, and the last two control the timing. In prior work such as (Kumar et al., 2017) and (Bianchi et al., 2017), the sequence model learns and generates those events directly, but our model uses a novel and simpler representation. We first use the note-seq library (Bianchi et al., 2017) to convert the MIDI events to overlapping intervals of notes. Since our current model supports only monophonic melody, we have applied the melody inference algorithm in (Bianchi et al., 2017) to make those intervals disjoint without losing the overall musical structure. This algorithm divides the continuous time space into frames, computes for each frame the possible melody event with the highest likelihood, and then uses the Viterbi algorithm to compute the most likely sequence of melody events using the likelihoods computed. In the case of chords, the algorithm favors the highest note in the chord.
**Feature Representation.** The next step is to convert the disjoint time intervals to a sequence. Our model represents the piano performance as a sequence of disjoint notes. The feature \(X\) is a sequence of "beats" where \(X^{(t)}\) is a tuple such that \(X^{(t)}_{0}\) is the rest time after the release of the previous note at timestep \(t-1\), and \(X^{(t)}_{1}\) is the duration of the current note at time \(t\). The label \(y\) is a sequence of note pitches, ranging from 0 to 127, where \(y^{(t)}\) corresponds to the note pitch at time \(t\) whose beat is \(X^{(t)}\). Since no performance uses the pitch 0, we use 0 to represent the start of a sequence, \(y^{(0)}\). Doing so, for each sample, the feature has shape (sequence length, 2), and the label has shape (sequence length,).
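A minimal sketch of this conversion (illustrative; the project's actual preprocessing relies on note-seq), taking disjoint, time-ordered notes given as (start, end, pitch) triples:

```python
import numpy as np

def notes_to_beats_and_pitches(notes):
    """notes: list of (start_sec, end_sec, midi_pitch) for disjoint, time-ordered notes.

    Returns X of shape (T, 2) holding (rest_before_note, note_duration) per note
    and y of shape (T,) holding the MIDI pitches.  During training a pitch of 0
    (never used by real performances) is prepended as the start-of-sequence token.
    """
    X, y = [], []
    prev_end = 0.0
    for start, end, pitch in notes:
        X.append((start - prev_end, end - start))
        y.append(pitch)
        prev_end = end
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.int64)

# tiny example: three notes
demo = [(0.0, 0.5, 60), (0.75, 1.25, 64), (1.25, 2.0, 67)]
X, y = notes_to_beats_and_pitches(demo)
print(X)   # [[0.   0.5 ] [0.25 0.5 ] [0.   0.75]]
print(y)   # [60 64 67]
```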
**Optimization.** A typical music performance can have more than 2000 disjoint notes, but many sequence models for NLP cannot handle such long sequences. For example, RNNs and even LSTMs can suffer from the vanishing gradient problem when the sequence length is greater than 128, and transformers take an extremely long time to train and infer on long sequences because their runtime is quadratic in the sequence length (Kumar et al., 2017). To overcome this issue, we implemented random slicing in our DataLoader, wherein for each epoch, we randomly take a slice of fixed length from each sample. This method helps our model converge faster without losing generality. In addition, we realized that preprocessing takes significant time during the training process - it takes around 3 hours on an AWS m2.xlarge instance. To alleviate this problem, we host the preprocessed data on Cloudflare, so now the download takes only about 5 seconds and training can start directly.
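A minimal sketch of the random-slicing idea as a PyTorch Dataset (names and the fixed slice length are assumptions):

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class RandomSliceDataset(Dataset):
    """Returns a random fixed-length window of each (beats, notes) pair every epoch."""

    def __init__(self, beats_list, notes_list, seq_len=64):
        self.beats_list, self.notes_list, self.seq_len = beats_list, notes_list, seq_len

    def __len__(self):
        return len(self.beats_list)

    def __getitem__(self, idx):
        beats, notes = self.beats_list[idx], self.notes_list[idx]
        if len(notes) <= self.seq_len:                       # short samples are returned whole
            start = 0
        else:
            start = np.random.randint(0, len(notes) - self.seq_len + 1)
        sl = slice(start, start + self.seq_len)
        return torch.as_tensor(beats[sl]), torch.as_tensor(notes[sl])
```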
**User Input and "Recoloring".** We have written a sampling utility that allows users to write beats by tapping their keyboard. The beats are then converted to a beats sequence and fed to our sampling algorithms for note inference. The user then gets a MIDI file to hear the generated melody. The sampling utility also supports "recoloring", where it takes a sample from the dataset, extracts its beats, and infers a new melody from the beats.
## 4. Methods
### Baseline: Decoder Only Vanilla RNN
We implemented an autoregressive decoder-only vanilla RNN model as our baseline. It takes an input beats sequence \(X\) and outputs a notes sequence \(Y\) of the same length. For every timestamp \(t\), we concatenate the input \(x^{(t)}\) with the note embedding of \(y^{(t-1)}\) to allow teacher forcing. The concatenation result is then fed through a dense layer before feeding into the two-layer single-directional recurrent neural network. The output of the recurrent neural network is then forwarded through another dense layer with softmax activation to output the note prediction. We also add a residual connection between the output of the concatenation and the input to the RNN layers, allowing an alternative information flow path. The baseline model suffers from the vanishing gradient problem. Also, the model cannot access future beat information, and it is too simple to capture the long-term dependencies in the music.
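A minimal PyTorch sketch of the baseline described above (hidden sizes and the exact residual placement are assumptions):

```python
import torch
import torch.nn as nn

class DecoderOnlyRNN(nn.Module):
    def __init__(self, emb=256, n_pitches=128):
        super().__init__()
        self.note_emb = nn.Embedding(n_pitches, emb)
        self.dense = nn.Linear(emb + 2, emb + 2)          # dense layer after the concatenation
        self.rnn = nn.RNN(emb + 2, emb + 2, num_layers=2, batch_first=True)
        self.out = nn.Linear(emb + 2, n_pitches)

    def forward(self, beats, prev_notes):
        # beats: (B, T, 2); prev_notes: (B, T) right-shifted target pitches (teacher forcing)
        x = torch.cat([beats, self.note_emb(prev_notes)], dim=-1)   # (B, T, emb + 2)
        h = torch.relu(self.dense(x)) + x                 # residual around the dense layer (assumed placement)
        out, _ = self.rnn(h)
        return self.out(out)                              # logits; softmax is applied in the loss

model = DecoderOnlyRNN()
logits = model(torch.randn(4, 32, 2), torch.randint(0, 128, (4, 32)))
print(logits.shape)   # torch.Size([4, 32, 128])
```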
### LSTM with Full Attention
We implemented an LSTM model with Full Attention, which is similar to the architecture in (Bianchi et al., 2017). To mitigate the vanishing gradient problem of the baseline model, we replaced the vanilla RNN cell with an LSTM cell, which introduces a separate cell state to encode long-term memory and extra gates to decide how much past information is kept (Bianchi et al., 2017). The attention mechanism is added to allow the decoder to utilize the most relevant information in the encoder output.
The model uses a pre-attention bidirectional LSTM and a post-attention single-directional LSTM. The pre-attention bidirectional LSTM takes the beats sequence \(X\) as input, and outputs a sequence of annotations \((h^{(1)},h^{(2)},...\ h^{(T)})\). The context vector \(context^{(t)}\) is computed as a weighted sum of annotations: \(context^{(t)}=\sum_{j=1}^{T}\alpha_{tj}h^{(j)}\). The attention weight \(\alpha_{tj}\) is computed from the previous hidden state of the post-attention LSTM and the annotations through a one-hidden-layer neural network. Then the context vector at the current timestamp is concatenated with the note embedding at the previous timestamp and fed into the post-attention LSTM to obtain the output notes. One downside of the model is that the training cost is quadratic in the sequence length due to the introduction of the attention mechanism.
### LSTM with Local Attention
We have designed a model architecture called LSTM with Local Attention that performs better than the full attention model and is much easier to train. Our architecture contains a bidirectional LSTM as the first layer and a single-directional LSTM as the second layer. The bidirectional LSTM accepts the input sequence \(X\) and outputs the final hidden state and a context sequence \(h\), where \(h^{(t)}\) is a concatenation of the forward and backward hidden states at timestep \(t\). The second, single-directional LSTM is an autoregressive model where
Figure 1. Distribution of note pitches in MAESTRO Dataset
the input at timestep \(t\) is a concatenation of the previous note \(y^{(t-1)}\) and the hidden state \(h^{(t)}\). The critical difference is that the attention at timestep \(t\) is local: instead of taking a weighted average of the entire context sequence, \(\sum_{j=1}^{T}\alpha_{tj}h^{(j)}\), the second LSTM only takes \(h^{(t)}\). This idea works because of the unique properties of our problem setting - the input beats and output notes sequences have the same length, and unlike machine translation, where a target word may correspond to different positions in the source sentence, beats and notes have a strong one-to-one relationship, so using the context vector at the same timestep gives enough information to infer the next note. This model is also inspired by the encoder-decoder architecture, where the initial state of the second LSTM is set to the final hidden state of the first, bidirectional LSTM, allowing the decoder to have a better awareness of the overall beat structure.
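A compact PyTorch sketch of this architecture, assuming single-layer encoder and decoder LSTMs and concatenating the two directions' final states to initialise the decoder (hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

class LocalAttentionLSTM(nn.Module):
    """Encoder-decoder LSTM where the 'attention' at step t is simply the encoder
    output h_t at the same position (beats and notes are aligned one-to-one)."""

    def __init__(self, beat_dim, num_notes, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(beat_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.note_embed = nn.Embedding(num_notes, embed_dim)
        self.decoder = nn.LSTM(2 * hidden_dim + embed_dim, 2 * hidden_dim,
                               batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_notes)

    def forward(self, beats, prev_notes):
        # beats: (B, T, beat_dim); prev_notes: (B, T) previous note ids
        enc_out, (h_n, c_n) = self.encoder(beats)            # enc_out: (B, T, 2H)
        # merge forward/backward final states to initialise the decoder
        h0 = torch.cat([h_n[0], h_n[1]], dim=-1).unsqueeze(0)  # (1, B, 2H)
        c0 = torch.cat([c_n[0], c_n[1]], dim=-1).unsqueeze(0)
        dec_in = torch.cat([enc_out, self.note_embed(prev_notes)], dim=-1)
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out)                              # (B, T, num_notes) logits
```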
### Transformer with Relative Position Representation
Transformers avoid the dependence on the recurrence architecture and utilize self-attention to allow global dependencies between inputs and outputs. Our Transformer model utilizes an encoder-decoder architecture. The encoder takes in a sequence of beats as input, projects them into a dense vector, and feeds the vectors into a self-attention sub-layer and a feedforward sub-layer. Each sub-layer is followed by a residual connection and layer normalization to facilitate information propagation back into deeper layers. The decoder is autoregressive, taking in notes predicted so far, projecting them into embeddings, and then passing the vector to a self-attention sub-layer, an encoder-decoder attention sub-layer for reference to the encoded state, and a feedforward sub-layer. During training, an input mask is used by the decoder to prevent it from accessing future inputs. Additionally, each sub-layer is followed by a residual connection and layer normalization to improve gradient flow. Finally, a generator linear layer decodes the feedforward output to the space of possible notes. To increase the representation power of the network, multiple encoder and decoder layers are stacked.
\[RelativeAttention(Q,K,V)=Softmax(\frac{QK^{T}+QE^{T}}{\sqrt{D}})V \tag{1}\]
To represent the sequence order, Transformers add sinusoidal positional encodings to their inputs, aiding the models in learning the absolute position of each input element. While absolute position representation helps learn the global timing and pitch of a melody, relative distances are also valuable in capturing pairwise relationships between input elements. Music Transformer (Deng et al., 2017) applies the idea of relative self-attention (Han et al., 2017) in the music generation space, reducing the memory requirements from \(O(T^{2}D)\) to \(O(TD)\). Thus, inspired by the success of Music Transformer (Deng et al., 2017) and LSTM with Local Attention, we implement relative position representation to facilitate learning pairwise relationships. In this scheme, a separate relative position embedding \(E^{r}\) of shape \(num\_heads\times T\times embed\_dim\) is learned for each possible pairwise distance, separately for each attention head. In Equation 1, the Query, Key, and Value matrices are denoted as Q, K, and V, respectively. An additional term \(QE^{T}\) of shape \(T\times T\) is added when calculating the attention weights. We extend the implementation of Music Transformer, which models MIDI events and employs a decoder-only architecture, to an encoder-decoder architecture with relative attention in both modules to handle beats sequences as inputs and notes sequences as outputs.
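An illustrative single-head sketch of the \(QK^{T}+QE^{T}\) computation in Equation 1; for clarity it materialises the full \(T\times T\times D\) relative tensor rather than using the memory-saving skewing trick, so it is not the \(O(TD)\) variant described above:

```python
import torch
import torch.nn as nn

class RelativeSelfAttention(nn.Module):
    """Single-head self-attention with learned relative position embeddings:
    scores are (Q K^T + Q E_rel^T) / sqrt(D), where E_rel embeds the clipped
    pairwise distance j - i."""

    def __init__(self, dim, max_rel_dist=128):
        super().__init__()
        self.max_rel_dist = max_rel_dist
        self.qkv = nn.Linear(dim, 3 * dim)
        self.rel_embed = nn.Embedding(2 * max_rel_dist + 1, dim)

    def forward(self, x):
        # x: (B, T, dim)
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        idx = torch.arange(T, device=x.device)
        rel = (idx[None, :] - idx[:, None]).clamp(-self.max_rel_dist,
                                                  self.max_rel_dist)
        E = self.rel_embed(rel + self.max_rel_dist)            # (T, T, D)
        scores = q @ k.transpose(-2, -1)                       # Q K^T term, (B, T, T)
        scores = scores + torch.einsum('btd,tsd->bts', q, E)   # add the Q E^T term
        attn = torch.softmax(scores / D ** 0.5, dim=-1)
        return attn @ v
```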
### Sampling and Searching
**The State Machine Philosophy**. We have defined an interface for sampling, where it abstracts each model as a _state machine_ such that in each time step, it takes the current state and the previous sampled note, and outputs the next state and the probability distribution of the next note. It also has access to constants that do not change during the sampling or beam search. In LSTM, the state contains the hidden state of the LSTM cell and the current position, and constants contain the context sequence. In the transformer model, the state contains only the current position, and constants contain the encoder memory. This state machine model helps us to apply a search algorithm to different models without code duplication and allows us to keep track of states in beam search easily.
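A Python sketch of what such an interface could look like; the class and attribute names (`SamplerStateMachine`, `LSTMConstants`, the wrapped LSTM cell) are hypothetical and only illustrate the state/constants split described above:

```python
from dataclasses import dataclass
from typing import Any, Tuple
import torch

class SamplerStateMachine:
    """One step maps (state, previous note) -> (next state, distribution over the
    next note); read-only constants are shared across all steps."""

    def init_state(self, constants: Any) -> Any:
        raise NotImplementedError

    def step(self, state: Any, prev_note: int,
             constants: Any) -> Tuple[Any, torch.Tensor]:
        raise NotImplementedError


@dataclass
class LSTMConstants:
    context: torch.Tensor              # (T, enc_dim) encoder outputs for one piece


class LSTMStateMachine(SamplerStateMachine):
    """Hypothetical wrapper around a decoder LSTMCell: the state carries the
    (h, c) pair plus the current position into the context sequence."""

    def __init__(self, cell, note_embed, out_proj):
        self.cell, self.note_embed, self.out_proj = cell, note_embed, out_proj

    def init_state(self, constants):
        h = torch.zeros(1, self.cell.hidden_size)
        return (h, h.clone(), 0)                       # (h, c, position)

    def step(self, state, prev_note, constants):
        h, c, pos = state
        x = torch.cat([constants.context[pos:pos + 1],
                       self.note_embed(torch.tensor([prev_note]))], dim=-1)
        h, c = self.cell(x, (h, c))
        probs = torch.softmax(self.out_proj(h), dim=-1).squeeze(0)
        return (h, c, pos + 1), probs
```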
**Stochastic Search and Heuristics**. Randomness has played an important role in increasing the quality of generated melodies. In some prior work like (Han et al., 2017), (Han et al., 2017), randomness has helped boost the variability of the generated artifacts and balance exploration and exploitation during the search. In our work, we applied randomness in our search: in each timestep, we query the state machine, get the distribution of the next note, and randomly select a note according to the queried distribution. We call this process _stochastic search_. We have used several heuristics to ensure the quality of the generated notes while keeping the added creativity from randomness. We have used _top-p sampling_, where we only consider the notes covering the top \(p\) share of the probability mass, and _top-k sampling_, where we only consider the top \(k\) choices. We also used _temperature_\(T\) to adjust the probability such that \(\forall r:\tilde{P}(y^{(t+1)}=r|\mathbf{x},\mathbf{y})\propto P(y^{(t+1)}=r|\mathbf{x},\mathbf{y})^{T}\). Doing so, \(T>1\) gives more confidence to the notes with larger likelihood, reducing the variability, and \(T<1\) makes the distribution more uniform, increasing the variability. We also designed a heuristic called _repeat decay_\(\gamma\), where we reduce the likelihood of repeating the previous note by \(\gamma\). That is: \(\forall r:\tilde{P}(y^{(t+1)}=r|y^{(t)}=r,\mathbf{x},\mathbf{y})=(1-\gamma)P(y^{(t+1)}=r|\mathbf{x},\mathbf{y})\). Doing so, we upper bound the probability of repeating the same note \(N\) times by a constant \((1-\gamma)^{N-1}\), which decreases exponentially in \(N\), making the generated melodies less repetitive and more interesting. In addition, we allow users to fix a few notes at the beginning as a hint.
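These heuristics amount to one post-processing step on the predicted distribution before sampling. The sketch below follows the conventions stated above (in particular, larger \(T\) sharpens the distribution here); argument names and the order of the adjustments are illustrative assumptions:

```python
import torch

def adjust_distribution(probs, prev_note=None, temperature=1.0,
                        top_k=None, top_p=None, repeat_decay=0.0):
    """Apply repeat decay, temperature, top-k and top-p filtering to one
    predicted note distribution (1-D tensor of probabilities)."""
    p = probs.clone()
    if prev_note is not None and repeat_decay > 0:        # discourage repeats
        p[prev_note] *= (1.0 - repeat_decay)
    p = p.pow(temperature)             # following the text: T > 1 sharpens the distribution
    if top_k is not None:                                  # keep only the top-k notes
        kth = torch.topk(p, top_k).values[-1]
        p[p < kth] = 0.0
    if top_p is not None:                                  # nucleus (top-p) filtering
        sorted_p, idx = torch.sort(p, descending=True)
        keep = torch.cumsum(sorted_p, dim=0) / sorted_p.sum() <= top_p
        keep[0] = True                                     # always keep the best note
        mask = torch.zeros_like(p, dtype=torch.bool)
        mask[idx[keep]] = True
        p[~mask] = 0.0
    return p / p.sum()

def stochastic_step(probs, **kwargs):
    p = adjust_distribution(probs, **kwargs)
    return torch.multinomial(p, 1).item()
```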
**Hybrid Beam Search**. To better balance creativity and the objective of maximizing sequence likelihood, we have combined the ideas of stochastic search and beam search. In detail, suppose we have \(N\) beams. There are two modes - _beam mode_ and _stochastic mode_ - for selecting the next \(N\) beams. The _beam mode_ is the same as the original beam search: for each beam, we query the model state machine and get the state and the conditional distribution for the next beam, and among the \(N^{2}\) candidate beams, calculate the likelihood of each corresponding sequence by summing the log of the conditional likelihoods, and select the top \(N\). In _stochastic mode_, for each beam, we sample the next note as the next beam and take the adjusted conditional likelihood according to the sampling heuristics. For each timestep, the sampler chooses _beam mode_ or _stochastic mode_ randomly according to a hyperparameter \(p\), where \(p\) is the probability of choosing beam mode.
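A simplified, self-contained sketch of the hybrid search loop; `step_fn` stands in for the state-machine interface, the start token and the dummy model in the usage lines are placeholders, and the stochastic-mode scoring is reduced to the plain log-likelihood:

```python
import math
import random
import torch

def hybrid_beam_search(step_fn, init_state, seq_len, num_beams=4, p_beam=0.7):
    """step_fn(state, prev_note) -> (next_state, probs).  Each timestep is either a
    standard beam expansion or one stochastic sample per beam."""
    beams = [(0.0, [], init_state)]                 # (log-likelihood, notes, state)
    for _ in range(seq_len):
        use_beam_mode = random.random() < p_beam
        candidates = []
        for score, notes, state in beams:
            prev = notes[-1] if notes else 0        # 0 = hypothetical start token
            next_state, probs = step_fn(state, prev)
            if use_beam_mode:                       # expand the most likely successors
                top = torch.topk(probs, min(num_beams, probs.numel()))
                for lp, note in zip(top.values, top.indices):
                    candidates.append((score + math.log(float(lp) + 1e-12),
                                       notes + [int(note)], next_state))
            else:                                   # stochastic mode: sample once
                note = int(torch.multinomial(probs, 1))
                candidates.append((score + math.log(float(probs[note]) + 1e-12),
                                   notes + [int(note)], next_state))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:num_beams]
    return max(beams, key=lambda b: b[0])[1]

# toy usage with a dummy 12-note model that ignores its state
dummy_step = lambda state, prev: (state, torch.softmax(torch.randn(12), dim=0))
print(hybrid_beam_search(dummy_step, init_state=None, seq_len=8))
```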
## 5. Results and Discussion
We perform hyperparameter tuning on various parameters such as learning rate, embedding dimension, hidden state dimension, and the number of encoder/decoder layers. The train/validation accuracy for the best-performing configuration for each of our methods is reported in Table 1. We establish a competitive baseline with a 39.08% validation accuracy. LSTM with Full Attention achieves a 3.08% improvement over the baseline, while the local attention mechanism brings a significant 8.59% improvement. The vanilla Transformer achieves a modest 37.97% validation accuracy, while the addition of relative position representation yields an 8.92% improvement, illustrating the importance of learning relative pairwise relationships in note generation. We observe that Transformer models take longer to converge, and often require deeper stacked layers than those of LSTMs to reach comparable performance, resulting in more intensive computing costs.
For qualitative analysis, we use both beats from the dataset and user-inputted beats to generate notes, and we utilize SuperCollider to play the MIDI notes generated by the models. We then examine the music quality in terms of the variety of notes used, the harmony of their sequence, and their smoothness. Musical composition also heavily relies upon local and long-range context to construct periodicity and structures at different time scales. Our baseline model tends to generate simpler melodies and lacks long-range coherence due to the vanishing gradient problem and the inability to see future beats. The LSTM with Full Attention model is able to generate smoother and more plausible melodies compared to the baseline model; a piece of melody may recur in the generated sequence, showing an improvement in long-term coherence, although it may continuously generate sequences of descending or ascending notes when the input beat sequence is long. The LSTM with Local Attention model achieves the best overall musical quality, with rich musical elements and great harmony, while it lacks long-term patterns because the memory capacity of the LSTM decoder is bounded by the size of its hidden state, which can be limited. The Transformer with Relative Position Representation model better captures long-range coherence but tends to capture less local variation than the LSTM with Local Attention model, which suggests that the recurrence architecture empowers a better understanding of the local context. Sample generated melodies are available here 1.
Footnote 1: [https://tinyurl.com/everybodycompose](https://tinyurl.com/everybodycompose)
## 6. Conclusion and Future Work
This study proposes three effective methods - LSTM with Full Attention, LSTM with Local Attention, and Transformer with Relative Position Representation - for the novel task of translating simple beats to music with great variation, harmony, and structure. We enable everybody, including amateurs and musicians, to compose their own music by tapping their keyboards or "recoloring" beat sequences from existing works. Since music quality is subjective, for future work, we plan to conduct a larger-scale user study to gather feedback from both novices and professionals in order to iterate our models. To further increase the variation and diversity of generated music, we aim to extend our model output space from notes to chords.
|
2303.03522 | Expectiles In Risk Averse Stochastic Programming and Dynamic
Optimization | This paper features expectiles in dynamic and stochastic optimization.
Expectiles are a family of risk functionals characterized as minimizers of
optimization problems. For this reason, they enjoy various unique stability
properties, which can be exploited in risk averse management, in stochastic
optimization and in optimal control.
The paper provides tight relations of expectiles to other risk functionals and
addresses their properties in regression. Further, we extend expectiles to a
dynamic framework. As such, they allow incorporating a risk averse aspect in
continuous-time dynamic optimization and a risk averse variant of the
Hamilton-Jacobi-Bellman equations. | Rajmadan Lakshmanan, Alois Pichler | 2023-03-06T22:15:19Z | http://arxiv.org/abs/2303.03522v1 | # Expectiles In Risk Averse Stochastic Programming and Dynamic Optimization
###### Abstract
This paper features expectiles in dynamic and stochastic optimization. Expectiles are a family of risk functionals characterized as minimizers of optimization problems. For this reason, they enjoy various unique stability properties, which can be exploited in risk averse management, in stochastic optimization and in optimal control.
The paper provides tight relations of expectiles to other risk functionals and addresses their properties in regression. Further, we extend expectiles to a dynamic framework. As such, they allow incorporating a risk averse aspect in continuous-time dynamic optimization and a risk averse variant of the Hamilton-Jacobi-Bellman equations.
**Keywords:** Expectiles - multistage stochastic optimization - dynamic optimization - stochastic processes
**Classification:** 90C08, 90C15, 60G07
## 1 Introduction
Classical dynamic programming problems involve the expectation in the objective. The expectation is a risk neutral assessment of random outcomes. In many situations, specifically in economic environments, a risk averse assessment or risk management is much more favorable and desirable. For this reason there have been attempts to develop risk averse dynamic programming principles and risk averse Hamilton-Jacobi-Bellman equations.
Non-linear expectations (\(g\)-expectations, cf. Pardoux and Peng (1990); Coquet et al. (2002); Peng (1992, 2004, 2010)) have been considered, e.g., to incorporate the aspect of risk to dynamic equations. A seemingly simpler approach involves risk measures (or risk functionals) instead of non-linear expectations, as risk measures are able to assess the risk associated with a random outcome (cf. Ruszczynski and Yao (2015, 2020)). By construction, risk measures are defined on random variables. For dynamic programming, they need to be extended to stochastic processes. The increments of stochastic processes are random variables so that composing risk measures over
time and accumulating the corresponding risk is a promising approach to extend risk functionals from random variables to stochastic processes.
Specifically, this paper addresses expectiles in stochastic and dynamic optimization. Expectiles constitute a family of risk measure with unique properties. We demonstrate how they can be employed to incorporate risk aversion in dynamic programming and to develop risk averse Hamilton-Jacobi-Bellman equations.
Cont et al. (2008) point out the importance of estimating risk measures in a robust way. In this context, Gneiting (2011) proves that the Average Value-at-Risk, the most important risk measure in theory and practice, is not elicitable, that is, it is not possible to describe the risk measure as minimizer. More generally, Ziegel (2014) proves that the only elicitable spectral risk measure is the (trivial) expectation. Bellini et al. (2014) finally provide a proof that only expectiles constitute elicitable risk measures.
Expectiles have been introduced earlier in Newey and Powell (1987) as
\[e_{\alpha}(X)\coloneqq\operatorname*{arg\,min}_{x\in\mathbb{R}}\mathds{E}\, \ell_{\alpha}(X-x), \tag{1.1}\]
where \(X\) is a \(\mathbb{R}\)-valued random variable, \(\alpha\in(0,1)\) and the scoring function (loss function) is1
Footnote 1: \(x_{+}\coloneqq\max(0,x)\)
\[\ell_{\alpha}(x)\coloneqq\alpha\cdot x_{+}^{2}+(1-\alpha)(-x)_{+}^{2}=\begin{cases} \alpha\cdot x^{2}&\text{if $x\geq 0$,}\\ (1-\alpha)\,x^{2}&\text{if $x\leq 0$.}\end{cases} \tag{1.2}\]
The characterization as a minimizer in the definition (1.1) applies for \(X\in L^{2}\). The first order condition (cf. (1.3) below) is an equivalent characterization of the expectile, which applies - more generally - for \(X\in L^{1}\supset L^{2}\).
**Definition 1.1** (Expectiles, cf. (Newey and Powell, 1987)).: For \(X\in L^{1}\) and a risk level \(\alpha\in[0,1]\), the expectiles of a random variable \(X\) is the unique solution of the equation
\[\alpha\,\,\mathds{E}(X-x)_{+}=(1-\alpha)\,\mathds{E}(x-X)_{+}, \tag{1.3}\]
where \(x\in\mathbb{R}\).
_Remark 1.2_.: In an alternative way, replacing the objective in (1.1) by \(\mathds{E}\big{(}\ell_{\alpha}(X-x)-\ell_{\alpha}(X-x_{0})\big{)}\) for some fixed \(x_{0}\in\mathbb{R}\) extends the definition to \(X\in L^{1}\) as well, so that expectiles are well-defined for \(X\in L^{1}\), even as minimizers.
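Numerically, an expectile can be computed from a sample directly via the first order condition (1.3): the function \(t\mapsto\alpha\,\mathds{E}(X-t)_{+}-(1-\alpha)\,\mathds{E}(t-X)_{+}\) is strictly decreasing, so bisection applies. A small NumPy sketch (sample size and seed are arbitrary), checked against the closed-form uniform expectile quoted in Section 2:

```python
import numpy as np

def expectile(sample, alpha):
    """Empirical expectile: root of alpha*E(X-t)_+ - (1-alpha)*E(t-X)_+ = 0."""
    x = np.asarray(sample, dtype=float)

    def g(t):
        return alpha * np.mean(np.maximum(x - t, 0.0)) \
             - (1 - alpha) * np.mean(np.maximum(t - x, 0.0))

    lo, hi = x.min(), x.max()
    for _ in range(80):                     # bisection on the decreasing function g
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)
a = 0.8
print(expectile(u, a))                           # empirical value for U[0, 1]
print((a - np.sqrt(a * (1 - a))) / (2 * a - 1))  # closed form, see Section 2
print(expectile(u, 0.5), u.mean())               # alpha = 1/2 recovers the mean
```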
For \(\alpha=\nicefrac{{1}}{{2}}\), the expectile is the expectation, \(e_{\nicefrac{{1}}{{2}}}(X)=\mathds{E}\,X\). It follows from symmetry of the loss function \(\ell_{\alpha}\) (i.e., \(\ell_{\alpha}(x)=\ell_{1-\alpha}(-x)\)) that
\[e_{\alpha}(X)=-e_{1-\alpha}(-X), \tag{1.4}\]
so that the expectile involves both tails, the lower and the upper tail of the distribution of the random variable \(X\). For \(X\in L^{\infty}\), the expectile approaches the essential supremum for increasing risk level, \(e_{\alpha}(X)\to\operatorname*{ess\,sup}X\) as \(\alpha\to 1\).2 More generally, we have the monotone behavior
Footnote 2: The essential supremum of \(X\) is the smallest _number_\(c\in\mathbb{R}\) so that \(X\leq c\) a.s.
\[\mathds{E}\,X\leq e_{\alpha}(X)\leq e_{\alpha^{\prime}}(X)\leq\operatorname* {ess\,sup}X \tag{1.5}\]
for \(\nicefrac{{1}}{{2}}\leq\alpha\leq\alpha^{\prime}\leq 1\).
Outline of the paper. In the following Section 2 we elaborate that expectiles constitute a risk measure, and we provide tight relations to other risk measures. Next, we introduce conditional risk functionals in Section 3. These are important for risk management in discrete and in continuous time. In continuous time (Section 4), we consider the risk-averse generator, which turns out to be a non-linear differential operator. We finally employ expectiles for dynamic optimization problems in Section 5 and conclude in Section 6.
## 2 Elicitable risk measures
The expectile \(e_{\alpha}(\cdot)\) is a risk measure as introduced in Artzner et al. (1999). That is, the mapping \(X\mapsto e_{\alpha}(X)\), provided that \(\alpha\geq\nicefrac{{1}}{{2}}\), satisfies the following four axioms formulated for (convex) risk measures \(\mathcal{R}\colon\mathcal{Y}\to\mathbb{R}\), where \(\mathcal{Y}\) is an appropriate linear space of \(\mathbb{R}\)-valued random variables (for example \(\mathcal{Y}=L^{1}(P)\)) on the probability space \((\Omega,\mathcal{F},P)\):
1. \(\mathcal{R}(X)\leq\mathcal{R}(Y)\) for all \(X\leq Y\) almost everywhere,
2. \(\mathcal{R}(X+Y)\leq\mathcal{R}(X)+\mathcal{R}(Y)\) for all \(X\), \(Y\in\mathcal{Y}\),
3. \(\mathcal{R}(\lambda\,X)=\lambda\,\mathcal{R}(X)\) for all \(\lambda>0\), and
4. \(\mathcal{R}(c+X)=c+\mathcal{R}(X)\) for all \(c\in\mathbb{R}\).
The expectile is a risk functional satisfying the Axioms (i)-(iv) above (Appendix 7 presents a brief proof for the subadditivity (ii), while the other assertions are evident). Further, the expectile \(e_{\alpha}(\cdot)\) is, in addition, the only risk measure which can be expressed as a minimizer as in (1.1). We will elaborate below that the expectile is not a spectral risk measure. The natural space (cf. Pichler (2013)) of expectiles is \(\mathcal{Y}=L^{1}\), cf. also the discussion in Section 1 above. In what follows - unless stated differently - we will always assume that \(\mathcal{Y}=L^{1}\).
Explicit expressions for the expectiles are available only in exceptional cases. For the uniform distribution in the interval \([0,1]\), \(U\sim\mathcal{U}[0,1]\), e.g., the expectile is \(e_{\alpha}(U)=\frac{\alpha-\sqrt{\alpha(1-\alpha)}}{2\alpha-1}\).
To extend expectiles to a risk measure in continuous time employing the Wiener process (Brownian motion), we shall frequently need the expectile of the normal distribution, for which at least the following series expansion is available.
**Example 2.1**.: An explicit expression for the expectile of normally distributed random variables, \(X\sim\mathcal{N}(\mu,\sigma^{2})\), is not available. It holds that
\[e_{\alpha}(X)=\mu+\sigma\sqrt{\frac{8}{\pi}}\Big{(}\alpha-\frac{1}{2}\Big{)}+ \sigma\frac{8\sqrt{2}}{\sqrt{\pi}^{3}}\Big{(}\alpha-\frac{1}{2}\Big{)}^{3}+ \mathcal{O}\Big{(}\alpha-\frac{1}{2}\Big{)}^{5}. \tag{2.1}\]
Proof.: The general assertion derives from the standard normal distribution. Denoting the density of the standard normal distribution by \(\varphi(t)=\frac{1}{\sqrt{2\pi}}e^{-t^{2}/2}\) and by \(\Phi(x)=\int_{-\infty}^{x}\varphi(t)\;dt\) its antiderivative, it holds that
\[\mathds{E}(X-t)_{+}=\int_{t}^{\infty}(x-t)\varphi(x)\;dx=\varphi(t)-t\big{(}1 -\Phi(t)\big{)}\]
\[\mathds{E}(t-X)_{+}=\int_{-\infty}^{t}(t-x)\varphi(x)\;dx=t\,\Phi(t)+\varphi(t),\]
which follows readily by employing the identity \(\varphi^{\prime}(x)=-x\,\varphi(x)\). Based on (1.3) define now
\[f(\alpha,e) \coloneqq\alpha\cdot\big{(}\varphi(e)-e\big{(}1-\Phi(e)\big{)} \big{)}-(1-\alpha)\cdot\big{(}e\,\Phi(e)+\varphi(e)\big{)} \tag{2.2}\] \[=(2\alpha-1)\big{(}\varphi(e)+e\,\Phi(e)\big{)}-\alpha\;e\]
so that the expectile \(e_{\alpha}\) of the normally distributed random variable \(X\) satisfies \(f(\alpha,e_{\alpha})=0\) for every \(\alpha\in(0,1)\). We now apply the implicit function theorem.
As \(e_{\nicefrac{{1}}{{2}}}(X)=\mathds{E}\,X=0\) for the normal distribution it holds that \(f(\nicefrac{{1}}{{2}},e_{\nicefrac{{1}}{{2}}})=0\). Further, the partial derivatives of \(f\) at \((\nicefrac{{1}}{{2}},0)\) are \(f_{\alpha}(\nicefrac{{1}}{{2}},0)=\sqrt{\frac{2}{\pi}}\) and \(f_{e}(\nicefrac{{1}}{{2}},0)=-\frac{1}{2}\), so that the first term in assertion (2.1) follows with the implicit function theorem. The coefficient of the next term \(\big{(}\alpha-\nicefrac{{1}}{{2}}\big{)}^{2}\) is zero, because the function (2.2) is odd with respect to the center \(\nicefrac{{1}}{{2}}\), as the normal distribution is symmetric, cf. (1.4). The remaining coefficient is found by differentiating the function (2.2) further. We omit the rather technical computations here, as our further results build on the first two terms only.
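The expansion (2.1) can be checked numerically by solving \(f(\alpha,e)=0\) from (2.2) with a root finder; the sketch below assumes SciPy is available and uses the standard normal case \(\mu=0\), \(\sigma=1\):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def normal_expectile(alpha):
    """Expectile of the standard normal distribution: root of f(alpha, e) in (2.2)."""
    f = lambda e: (2 * alpha - 1) * (norm.pdf(e) + e * norm.cdf(e)) - alpha * e
    return brentq(f, -10.0, 10.0)

alpha = 0.6
series = np.sqrt(8 / np.pi) * (alpha - 0.5) \
       + 8 * np.sqrt(2) / np.pi ** 1.5 * (alpha - 0.5) ** 3
print(normal_expectile(alpha), series)   # exact root and the expansion (2.1) nearly agree
```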
**Example 2.2**.: For a log-normal random variable \(X\) with \(\log X\sim\mathcal{N}(\mu,\sigma^{2})\), the expectiles are
\[e_{\alpha}(X)=e^{\mu+\frac{\sigma^{2}}{2}}+\left(e^{\sigma^{2}}-1\right)e^{2 \mu+\sigma^{2}}\left(\alpha-\frac{1}{2}\right)4\sqrt{e}\big{(}2\Phi(\nicefrac{ {1}}{{2}})-1\big{)}+\mathcal{O}\Big{(}\alpha-\frac{1}{2}\Big{)}^{2}.\]
Proof.: As above, the proof again relies on explicitly available expressions
\[\mathds{E}(X-t)_{+}=\int_{\log t}^{\infty}\big{(}e^{x}-t\big{)}\varphi(x)\;dx =\sqrt{e}\,\Phi(1-\log t)-t\,\Phi(-\log t)\]
and
\[\mathds{E}(t-X)_{+}=\int_{-\infty}^{\log t}\big{(}t-e^{x}\big{)}\varphi(x)\; dx=\sqrt{e}\,\Phi(1-\log t)-\sqrt{e}+t\,\Phi(\log t).\]
The statement follows again by the implicit function theorem.
### Tight comparison with important risk measures
In what follows, we shall compare expectiles with important risk measures and give the tightest-possible estimates and the smallest spectral risk measure enveloping the expectiles.
The Average Value-at-Risk is the smallest convex envelope of the Value-at-Risk (cf. Follmer and Schied (2004)). The Average Value-at-Risk can be stated in the equivalent forms (cf. Pflug (2000))
\[\mathsf{AV@R}_{\alpha}(X) \coloneqq\frac{1}{1-\alpha}\int_{\alpha}^{1}F_{X}^{-1}(u)\,\mathrm{d}u\] \[=\min\left\{q+\frac{1}{1-\alpha}\,\mathds{E}(X-q)_{+}\colon q\in\mathbb{R}\right\}, \tag{2.3}\]
where
\[\mathsf{V@R}_{\alpha}(X)\coloneqq F_{X}^{-1}(\alpha)\coloneqq\inf\bigl{\{}x\colon P (X\leq x)\geq\alpha\bigr{\}} \tag{2.4}\]
is the Value-at-risk.
The Average Value-at-risk is the fundamental building block in the Kusoka representation (cf. Kusuoka (2001)) and the most important risk functional in actuarial practice. Notice as well that the Average Value-at-Risk is the _minimum objective_ of an optimization problem (problem (2.3)), while the expectile in (1.1) is the _minimizer_ of an optimization problem.
_Remark 2.3_ (Quantiles).: Similarly to the expectile, the Value-at-Risk defined in (2.4) is a minimizer of an optimization problem, specifically the problem
\[\min_{q\in\mathbb{R}}\operatorname{\mathds{E}}\,\tilde{\ell}_{\alpha}(X-q)\]
with scoring function
\[\tilde{\ell}_{\alpha}(x)\coloneqq\begin{cases}-(1-\alpha)\,x&\text{if}\,x \leq 0,\\ \quad\alpha\cdot x&\text{if}\,x\geq 0\end{cases}=\Bigl{(}\alpha-\frac{1}{2} \Bigr{)}x+\frac{1}{2}\left|x\right|,\]
well-known from quantile regression. Indeed, the first order condition is \(0=\frac{\partial}{\partial q}\operatorname{\mathds{E}}\,\tilde{\ell}_{\alpha}(X-q)=(1-\alpha)\operatorname{\mathds{E}}\mathds{1}_{\{X\leq q\}}-\alpha\operatorname{\mathds{E}}\mathds{1}_{\{X>q\}}=P(X\leq q)-\alpha\) and hence the assertion. However, by violating (ii) above, the Value-at-Risk is _not_ a convex risk functional.
**Definition 2.4** (Spectral risk measure, cf. Acerbi and Simonetti (2002); Acerbi (2002)).: Let \(\sigma\colon[0,1)\to\mathbb{R}_{\geq 0}\) be a non-negative, non-decreasing function with \(\int_{0}^{1}\sigma(u)\,\mathrm{d}u=1\). Then
\[\mathcal{R}_{\sigma}(X)=\int_{0}^{1}F_{X}^{-1}(\alpha)\sigma(\alpha)\,\mathrm{ d}\alpha,\qquad X\in\mathcal{Y},\]
is a risk measure. \(\mathcal{R}_{\sigma}\) is called the _spectral risk measure_ and the function \(\sigma\) is called the _spectrum_ of \(\mathcal{R}_{\sigma}\).
The expectiles are not a spectral risk measure themselves. But for every expectile, there is a smallest spectral risk measure.
**Proposition 2.5** (Enveloping risk measure).: _If \(\mathcal{R}_{\sigma}(X)\) is any spectral risk measure with_
\[e_{\alpha}(X)\leq\mathcal{R}_{\sigma}(X) \tag{2.5}\]
_for every random variable \(X\in\mathcal{Y}\), then \(e_{\alpha}(X)\leq s_{\alpha}(X)\leq\mathcal{R}_{\sigma}(X)\) for all \(X\), where_
\[s_{\alpha}(X)\coloneqq\int_{0}^{1}F_{X}^{-1}(u)\frac{\alpha(1-\alpha)}{\bigl{(} \alpha-u(2\alpha-1)\bigr{)}^{2}}\,\mathrm{d}u; \tag{2.6}\]
_that is, \(s_{\alpha}\) is the smallest spectral risk measure larger than \(e_{\alpha}\)._
Proof.: Above all, \(s_{\alpha}(\cdot)\) is a spectral risk functional, as \(u\mapsto\frac{\alpha\left(1-\alpha\right)}{\left(\alpha-u\left(2\alpha-1\right) \right)^{2}}\) is a non-negative, increasing function and \(\int_{0}^{1}\frac{\alpha\left(1-\alpha\right)}{\left(\alpha-u\left(2\alpha-1 \right)\right)^{2}}\,\mathrm{d}u=1\).
Bellini et al. (2014, Proposition 9) provide the Kusuoka representation
\[e_{\alpha}(X)=\max_{\gamma\in[1/\beta,\,1]}\gamma\,\mathds{E}\,X+(1-\gamma)\,\mathsf{AV@R}_{\frac{\beta-\frac{1}{\gamma}}{\beta-1}}(X) \tag{2.7}\]
for expectiles, where \(\beta=\frac{\alpha}{1-\alpha}\). Define the functions \(\Sigma_{\gamma}(u)\coloneqq\gamma(1-u)+(1-\gamma)\min\left(1,\frac{1-u}{1-\frac{\beta-\frac{1}{\gamma}}{\beta-1}}\right)\) and \(\Sigma(u)\coloneqq\frac{\alpha\left(1-u\right)}{\alpha-u\left(2\alpha-1\right)}\). Both functions coincide at \(u=0\), \(u=1\) and \(u=\frac{\alpha\left(1+\gamma\right)-1}{\left(2\alpha-1\right)\gamma}\); indeed \(\Sigma_{\gamma}(0)=\Sigma(0)=1\), \(\Sigma_{\gamma}(1)=\Sigma(1)=0\) and
\[\Sigma\left(\frac{\alpha\left(1+\gamma\right)-1}{\left(2\alpha-1\right) \gamma}\right)=\Sigma_{\gamma}\left(\frac{\alpha\left(1+\gamma\right)-1}{ \left(2\alpha-1\right)\gamma}\right)=\frac{\alpha\left(1-\gamma\right)}{2 \alpha-1}. \tag{2.8}\]
As \(\Sigma_{\gamma}\) is piecewise linear and \(\Sigma\) concave, it follows that \(\Sigma_{\gamma}(u)\leq\Sigma(u)\) for all \(u\in[0,1]\). With integration by parts it follows further that
\[\gamma\,\mathds{E}\,X+(1-\gamma)\,\mathsf{AV@R}_{\frac{\beta-\frac{1}{\gamma}}{\beta-1}}(X) =-\int_{0}^{1}F_{X}^{-1}(u)\,\mathrm{d}\Sigma_{\gamma}(u) \tag{2.9}\] \[=-\left.F_{X}^{-1}(u)\Sigma_{\gamma}(u)\right|_{u=0}^{1}+\int_{0}^{1}\Sigma_{\gamma}(u)\,\mathrm{d}F_{X}^{-1}(u)\] \[\leq-\left.F_{X}^{-1}(u)\Sigma(u)\right|_{u=0}^{1}+\int_{0}^{1}\Sigma(u)\,\mathrm{d}F_{X}^{-1}(u)\] \[=-\int_{0}^{1}F_{X}^{-1}(u)\,\mathrm{d}\Sigma(u)\] \[=s_{\alpha}(X)\]
and thus \(e_{\alpha}\leq s_{\alpha}\). The assertion follows, as for every \(u\in(0,1)\) there is \(\gamma\in\left(\frac{1-\alpha}{\alpha},1\right)\) (\(\gamma=\frac{1-\alpha}{u(1-2\alpha)+\alpha}\)) so that \(\Sigma_{\gamma}(u)=\Sigma(u)\) by (2.8) above (cf. Figure 1 for illustration).
We have the following comparison with the Average Value-at-Risk. The comparison is sharp in the sense that the risk rates cannot be improved.
**Corollary 2.6**.: _For every random variable \(X\in L^{1}\) it holds that_
\[e_{\frac{1}{2-\alpha}}\left(X\right)\leq\mathsf{AV@R}_{\alpha}(X),\qquad \alpha\in[0,1], \tag{2.10}\]
_and_
\[\frac{\alpha}{3\alpha-1}\,\mathds{E}\,X+\frac{2\alpha-1}{3\alpha-1}\,\mathsf{ AV@R}_{2-\frac{1}{\alpha}}\left(X\right)\leq e_{\alpha}(X)\leq\mathsf{AV@R}_{2- \frac{1}{\alpha}}\left(X\right) \tag{2.11}\]
_for every \(\alpha\in[\nicefrac{{1}}{{2}},1]\)._
_For non-negative random variables (\(X\geq 0\) a.s.) we further have_
\[\mathsf{AV@R}_{\alpha}(X)\leq\frac{1}{1-\alpha}e_{\frac{1}{2-\alpha}}\left(X\right) \tag{2.12}\]
_and_
\[e_{\alpha}(X)\leq\frac{\alpha}{1-\alpha}\,\mathds{E}\,X. \tag{2.13}\]
_The risk rates in the preceding equations (2.10)-(2.13) are optimal, they cannot be improved._
_Remark 2.7_.: The preceding corollary might give the impression that \(e_{\alpha}\) is 'weak' in the sense that it attains smaller values than the Average Value-at-Risk and is comparable to the risk neutral expectation. However, it holds that \(e_{\alpha}(X)\to\operatorname*{ess\,sup}X\) for \(\alpha\to 1\), as follows readily from (1.3). Further, we have that the Average Value-at-Risk is a lower bound for the expectiles in view of (2.12), so that expectiles are at least as 'strong' as the Average Value-at-Risk.
Proof of Corollary 2.6.: Employing the notation of the proof of Proposition 2.5 and \(\Sigma_{\alpha}(u)\coloneqq\min\left(1,\frac{1-u}{\frac{1}{\alpha}-1}\right)\), we have that \(\Sigma(u)\leq\Sigma_{\alpha}(u)\). As in the proof above we conclude that \(\mathcal{R}_{\alpha}(X)\leq\mathsf{AV@R}_{\alpha}(X)\) and with (2.5) that (2.10). The inequality (2.10) is tight, as \(\Sigma^{\prime}_{\gamma}(1)\xrightarrow[\gamma\to 1]{}\Sigma^{\prime}(1)\).
As for the remaining inequality choose \(\gamma=\frac{\alpha}{3\alpha-1}\) in (2.7), and replace \(\alpha\) by \(\frac{1}{2-\alpha}\) in (2.11) to obtain (2.10).
The inequality \(\min\left(1,\frac{1-u}{1-\alpha}\right)\leq\frac{1}{1-\alpha}\Sigma_{\gamma}(u)\) is evident for every \(u\in[0,1]\), and the remaining assertion (2.12) follows by the same reasoning as above. However, for inequality (2.9) to hold true it is essential that \(X\geq 0\) a.s.
Hölder's inequality, applied to (2.6), gives
\[\mathds{E}\,X\leq s_{\alpha}(X)\leq\int_{0}^{1}F_{X}^{-1}(u)\,\mathrm{d}u \cdot\max_{u\in[0,1]}\frac{\alpha(1-\alpha)}{\left(\alpha-u(2\alpha-1)\right) ^{2}}=\mathds{E}\,X\cdot\frac{\alpha}{1-\alpha}\]
and thus (2.13).
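The bounds (2.10) and (2.12) are straightforward to check empirically on a non-negative sample; the sketch below (SciPy assumed, log-normal test data chosen arbitrarily) evaluates the Average Value-at-Risk via the minimization formula (2.3) and the expectile via the first order condition (1.3):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
x = np.exp(rng.standard_normal(500_000))         # non-negative log-normal sample

def expectile(sample, a):
    g = lambda t: a * np.mean(np.maximum(sample - t, 0)) \
                - (1 - a) * np.mean(np.maximum(t - sample, 0))
    return brentq(g, sample.min(), sample.max())

def avar(sample, a):
    q = np.quantile(sample, a)                    # empirical V@R as the minimizing q
    return q + np.mean(np.maximum(sample - q, 0)) / (1 - a)

alpha = 0.9
e = expectile(x, 1 / (2 - alpha))
print(e, "<=", avar(x, alpha))                    # inequality (2.10)
print(avar(x, alpha), "<=", e / (1 - alpha))      # inequality (2.12)
```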
## 3 Conditional and Dynamic Risk Measure
Risk functionals - as discussed above - are employed to assess the risk of a random outcome. For this reason, they have the economic interpretation of an insurance premium, while the random outcome is the random insurance benefit (the random variable). While the premium is known beforehand, the insurance benefit (the random outcome) is not, it is revealed later.
Conditional risk measures are employed in risk management over time, they address stochastic processes instead of random variables. Nested risk measures, which are compositions of risk functionals over time, enjoy the economic interpretation of risk premiums for insurance on a rolling horizon basis. For a discussion of nested risk functionals we may refer to Cheridito and Kupper (2011); Riedel (2004); Shapiro (2012); Ruszczynski and Shapiro (2006) and Pichler and Schlotter (2019).
### The conditional expectile
Definition 1.1 allows extending the expectile to conditional expectiles, which are conditioned on some \(\sigma\)-algebra. This constitutes a major building block to extend the definition of expectiles from random variables to stochastic processes.
**Definition 3.1** (Conditional expectiles).: Let \(X\in L^{1}\) be a random variable and \(\mathcal{G}\) be a sub \(\sigma\)-algebra of \(\mathcal{F}\), \(\mathcal{G}\subset\mathcal{F}\), and \(\alpha\) a \(\mathcal{G}\)-measurable random variable with values in \([0,1]\). The \(\mathcal{G}\)-measurable random variable \(Z\) satisfying
\[\alpha\cdot\mathds{E}\big{(}\big{(}X-Z\big{)}_{+}\mid\mathcal{G}\big{)}=(1- \alpha)\cdot\mathds{E}\big{(}\big{(}Z-X\big{)}_{+}\mid\mathcal{G}\big{)}\qquad \text{a.s.} \tag{3.1}\]
is called the _conditional expectile_ (i.e., the conditional version of (1.3)) and denoted \(Z=e_{\alpha}(X\mid\mathcal{G})\). As usual for the conditional expectation, we shall also write \(e^{\mathcal{G}}(X)\coloneqq e(X\mid\mathcal{G})\) and \(e^{Y=y}(X)\coloneqq e(X\mid Y=y)\) for the conditional expectile and its versions.
The solution of the problem (3.1) exists and is unique for the same reasons as for the usual expectile, and \(e_{\alpha}(X\mid\mathcal{G})\in L^{1}\), as \(\big{(}e_{\alpha}(X\mid\mathcal{G})-X\big{)}_{+}\) and \(\mathds{E}\big{(}\big{(}X-e_{\alpha}(X\mid\mathcal{G})\big{)}_{+}\mid\mathcal{ G}\big{)}\) exist in (3.1).
_Remark 3.2_.: Based on the properties of the conditional expectation (cf. Section 2), we have the following properties of the conditional expectile.
1. \(e_{\alpha}^{\mathcal{G}}(X)\leq e_{\alpha}^{\mathcal{G}}(Y)\) a.e. for all \(X\leq Y\) almost everywhere,
2. \(e_{\alpha}^{\mathcal{G}}(X+Y)\leq e_{\alpha}^{\mathcal{G}}(X)+e_{\alpha}^{ \mathcal{G}}(Y)\) a.e.,
3. \(e_{\alpha}^{\mathcal{G}}(\lambda\,X)=\lambda\,e_{\alpha}^{\mathcal{G}}(X)\) for all \(\lambda>0\) and \(\lambda\) which is \(\mathcal{G}\)-measurable,
4. \(e_{\alpha}^{\mathcal{G}}(c+X)=c+e_{\alpha}^{\mathcal{G}}(X)\) for all \(\mathbb{R}\)-valued \(c\) measurable with respect to \(\mathcal{G}\).
In what follows, we shall consider the conditional expectile for a single \(\sigma\)-algebra first and discuss regression. Next, we consider filtrations \(\mathcal{F}=(\mathcal{F}_{t})_{t\in\mathcal{T}}\), typically generated by a stochastic process \(X=(X_{t})_{t\in\mathcal{T}}\).
### Conditional expectiles in stochastic optimization and regression
Stochastic optimization, most typical problems in machine learning (such as the training of neural networks), as well as specific problems in inverse problems (cf. Lu and Pereverzev (2013)) consider the problem
\[\begin{split}&\text{minimize }f_{0}(x)\coloneqq\mathds{E}\,f(x,\xi)\\ &\text{subject to }x\in\mathcal{X},\end{split} \tag{3.2}\]
where the objective is a risk neutral expectation, \(f\colon\mathcal{X}\times\mathbb{R}^{m}\to\mathbb{R}\) is a function, \(\mathcal{X}\subset\mathbb{R}^{d}\) is closed and \(\xi\) is a random variable with values in \(\mathbb{R}^{m}\). Sample average approximation builds on independent realizations \(\xi_{i}\), \(i=1,\dots,n\), of the identically distributed random variable \(\xi\) to solve (3.2) in real-world applications. To this end, the empirical version
\[\hat{f}_{n}(x)\coloneqq\frac{1}{n}\sum_{i=1}^{n}f(x,\xi_{i})\]
is considered instead of the expectation \(\mathds{E}\,f(x,\xi)\) in (3.2) for varying \(x\in\mathcal{X}\).
We consider the measurement points (observations) \(X\in\mathcal{X}\) to be random (with measure \(P\)) as well and intend to 'learn' the function \(f_{0}\) based on the observations
\[\big{(}X_{i},f(X_{i},\xi_{i})\big{)},\quad i=1,\dots,n, \tag{3.3}\]
where \((X_{i},\xi_{i})\) are revealed jointly (cf. Dentcheva and Lin (2021) for further motivation in stochastic optimization and an alternative approach); even more generally, we consider the iid observations
\[(X_{i},f_{i}),\quad i=1,\dots n, \tag{3.4}\]
which is (3.3) with \(f_{i}\coloneqq f(X_{i},\xi_{i})\).
To model (3.4), let \(\rho\) be the probability measure of the joint distribution \((X,f)\) and denote the marginal measure by \(P(A)\coloneqq\rho(A\times\mathbb{R})\). Then there exists a regular conditional probability kernel (cf. Kallenberg (2002)) so that
\[\rho(A\times B)=\int_{A}\rho(f\in B|\,x)\,P(dx). \tag{3.5}\]
The bivariate measure \(\rho\) in (3.5) is not an artifact. Indeed, denote the conditional measures of \(f\) given \(X\) by the Markov kernel \(\rho\colon\mathcal{X}\times\mathcal{B}(\mathbb{R})\to[0,1]\), that is, \(\rho(f\in A\mid X=x)=\rho(x,A)\), then \((X,f)\) jointly follow the composed measure (3.5),
\[(X,f)\sim\rho,\]
and hence both approaches are equivalent.
For a random vector \((X,f)\in\mathbb{R}^{d}\times\mathbb{R}\) with law \(\rho\) set
\[f_{0}(x)\coloneqq\mathds{E}(f\mid X=x); \tag{3.6}\]
this definition notably corresponds to
\[f_{0}(x)=\mathds{E}(f(X,\xi)\mid X=x)\]
in the setting (3.3) above. For this reason, the stochastic optimization problem (3.2) is equivalent to3
Footnote 3: The essential infimum \(\operatorname*{ess\,inf}(f\mid X)\) is the largest random variable \(g\), measurable with respect to \(\sigma(X)\) (the \(\sigma\)-algebra generated by \(X\)), so that \(g\leq f\), cf. Föllmer and Schied (2004, Definition A.34). Measurability is the crucial difference in comparison to the (unconditional) essential supremum in Footnote 2.
\[\operatorname*{ess\,inf}_{x\in\mathcal{X}}\;\mathds{E}(f\mid X=x), \tag{3.7}\]
where \((X,f)\) is a random variable with law \(\rho\), provided that \(\operatorname*{\mathrm{supp}}P=\mathcal{X}\), where
\[\operatorname*{\mathrm{supp}}P\coloneqq\bigcap\big{\{}A\colon A\text{ is closed and }P(A)=1\big{\}}\]
is the support.4
Footnote 4: Cf. Ruschendorf (2014) for the support of the marginal measure \(P\).
Note, however, that not every random vector \((X,f)\) can be recast as in (3.3) for a function \(f\) and a random \(\xi\). For this reason, the problem formulation (3.7) is more general than the genuine problem (3.2).
### Risk assessment with conditional expectiles
To incorporate risk in the assessment, consider the conditional expectation (3.6) and define
\[f_{\alpha}(x)\coloneqq e_{\alpha}(f\mid X=x),\]
where \(e_{\alpha}^{\sigma(X)}\) is the conditional expectile introduced in Section 3.1 above. Based on (1.5), we have that
\[f_{\alpha}(x)\geq f_{0}(x),\qquad\text{for }\alpha\geq\nicefrac{{1}}{{2}}, \;x\in\mathcal{X}.\]
The function \(f_{\alpha}\) intentionally _overestimates_ (overrates) the risk-free assessment \(f_{0}\) and the surplus \(f_{\alpha}-f_{0}\) is the amount attributed to risk aversion.
To solve the risk averse version of the stochastic optimization problem (3.7),
minimize
\[e_{\alpha}(f\mid X=x)\] \[\text{subject to }x\in\mathcal{X},\]
just find an estimator \(\hat{e}_{\alpha}\) for \(e_{\alpha}\) first and then solve
minimize
\[\hat{e}_{\alpha}(x)\] \[\text{subject to }x\in\mathcal{X}.\]
The substitute \(\hat{e}_{\alpha}(\cdot)\) is chosen in an adequate space of functions. Dentcheva and Lin (2021) consider the Nadaraya-Watson kernel estimator to solve the problem. Here, we approach the problem using reproducing kernel Hilbert spaces (RKHS) with kernel function \(k\); we refer to Berlinet and Thomas-Agnan (2004) for details.
**Definition 3.3**.: For a kernel function \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), the RKHS space \(\mathcal{H}_{k}\) is the completion of the functions \(f(x)=\sum_{i=1}^{\ell}w_{i}\,k(x,x_{i})\) with respect to the inner product
\[\big{\langle}k(\cdot,x_{i})\mid k(\cdot,x_{j})\big{\rangle}=k(x_{i},x_{j}), \qquad i,j=1,\ldots,\ell,\]
where \(x_{i}\) and \(x_{j}\in\mathcal{X}\).
The regularized problem is
\[\text{minimize }\frac{1}{n}\sum_{i=1}^{n}\ell_{\alpha}\big{(}\hat{e}_{\alpha} (X_{i})-f_{i}\big{)}+\lambda\|\hat{e}_{\alpha}\|_{k}^{2}, \tag{3.8}\]
where \(\hat{e}_{\alpha}(\cdot)\in\mathcal{H}_{k}\). It follows from the generalized representer theorem (cf. Schölkopf et al. (2001)) that the minimizing function \(\hat{e}_{\alpha}\) is given by \(\hat{e}_{\alpha}(\cdot)=\frac{1}{n}\sum_{i=1}^{n}w_{i}\,k(\cdot,X_{i})\), that is, the supporting points are exactly the points \(X_{i}\), \(i=1,\ldots,n\), where measurements \(f_{i}\), \(i=1,\ldots,n\), are available. It might be convenient in some situations to find the best approximation located at the points \(\tilde{x}_{j}\), \(j=1,\ldots,\tilde{n}\), that is, the function
\[\hat{e}_{\alpha}(\cdot)=\frac{1}{\tilde{n}}\sum_{j=1}^{\tilde{n}}w_{j}\,k( \cdot,\tilde{x}_{j}),\]
for fewer or special design points \(\tilde{x}_{j}\), \(j=1,\ldots,\tilde{n}\). We describe the equations for this generalized problem.
The first order conditions of problem (3.8) for the weights \(w_{j}\), \(j=1,\ldots,\tilde{n}\), are
\[0=\frac{1}{n}\sum_{i=1}^{n}2\cdot\left(\frac{1}{\tilde{n}}\sum_{ j^{\prime}=1}^{\tilde{n}}w_{j^{\prime}}k(X_{i},\tilde{x}_{j^{\prime}})-f_{i} \right)\cdot\left\{\begin{array}{cl}\alpha&\text{if }f_{i}\leq\hat{e}_{ \alpha}(X_{i})\\ 1-\alpha&\text{if }f_{i}\geq\hat{e}_{\alpha}(X_{i})\end{array}\right\}\cdot\frac{1} {\tilde{n}}k(\tilde{x}_{j},X_{i})+\] \[\qquad\qquad+2\frac{\lambda}{\tilde{n}^{2}}\sum_{j=1}^{\tilde{n} }w_{j}\,k(\tilde{x}_{i},\tilde{x}_{j}). \tag{3.11}\]
Define \(\tilde{K}\coloneqq\big{(}k(\tilde{x}_{\ell},\tilde{x}_{j})\big{)}_{\ell,j=1}^{\tilde{n}}\), \(K\coloneqq\big{(}k(X_{i},\tilde{x}_{j})\big{)}_{i=1,\,j=1}^{n,\tilde{n}}\) and
\[A(w)\coloneqq\text{diag}\big{(}a_{i}(w),\;i=1,\ldots,n\big{)}\]
with entries
\[a_{i}(w)=\begin{cases}\alpha&\text{if }f_{i}\leq\frac{1}{\tilde{n}}\sum_{j=1}^ {n}w_{j}\,k(X_{i},\tilde{x}_{j}),\\ 1-\alpha&\text{if }f_{i}\geq\frac{1}{\tilde{n}}\sum_{j=1}^{n}w_{j}\,k(X_{i}, \tilde{x}_{j})\end{cases}\]
on the diagonal. Then the equations (3.11) rewrite as
\[\left(\frac{\lambda}{\tilde{n}^{2}}\tilde{K}+\frac{1}{n^{2}\tilde{n}}K^{\top} A(w)K\right)w=\frac{1}{n\tilde{n}}K^{\top}A(w)f.\]
This equation is not linear in \(w\), as \(A(w)\) depends in a nonlinear way on \(w\). However, the problem can be solved by inverting the matrix to obtain a fixed point equation. With that, the equation can be iterated, and the algorithm converges after finitely many iterations, cf. (3.10) in Algorithm 1. Figure 2 displays a typical result of expectile regression. Farooq and Steinwart (2018) is a starting point in investigating convergence properties of the expectile regression problem.
**Input:** Measurements \((f_{i},X_{i})\), \(i=1,\ldots n\), and support points \(\tilde{x}_{j}\), \(j=1,\ldots,\tilde{n}\).
**Output:** The weights \(w_{j}\), \(j=1,\ldots,\tilde{n}\), of the function
\[\hat{e}_{\alpha}(\cdot)=\frac{1}{\tilde{n}}\sum_{j=1}^{\tilde{n}}w_{j}k(\cdot,\tilde{x}_{j}) \tag{3.9}\]
minimizing (3.8).
Set
\[K_{ij}\coloneqq k(X_{i},\tilde{x}_{j})\]
for \(i=1,\ldots n\) and \(j=1,\ldots\tilde{n}\), and
\[\tilde{K}_{ij}\coloneqq k(\tilde{x}_{i},\tilde{x}_{j})\]
for \(i\), \(j=1,\ldots,\tilde{n}\).
**while**_change of the weights \(w\) encountered_**do**
**for**\(i=1\)**to**\(n\)**do**
update
\[A_{ii}\leftarrow\begin{cases}\alpha&\text{if }f_{i}\leq\frac{1}{\tilde{n}}\sum_{j =1}^{\tilde{n}}w_{j}\,k(X_{i},\tilde{x}_{j}),\\ 1-\alpha&\text{else}\end{cases}\]
**end**
update
\[w\gets w-\left(\frac{\lambda}{\tilde{n}^{2}}\tilde{K}+\frac{1}{n^{2} \tilde{n}}K^{\top}AK\right)^{-1}\cdot\left(\frac{\lambda}{\tilde{n}^{2}} \tilde{K}w+\frac{1}{n^{2}\tilde{n}}K^{\top}AKw-\frac{1}{n\tilde{n}}K^{\top}Af\right) \tag{3.10}\]
**end**
**Result:** The best approximating function (3.9).
**Algorithm 1:** Newton-like iteration to solve (3.8)
_Remark 3.4_.: Note that the inverted matrix in (3.10) is the derivative of the right-hand side with respect to \(w\), as \(A\) is constant for small changes in \(w\). For this reason, the iteration in Algorithm 1 is a Newton iteration in essence, although the function (1.2) is not differentiable. As \(A(w)\) is constant for small variations of \(w\), the update in (3.10) vanishes locally.
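A compact NumPy transcription of Algorithm 1; the Gaussian kernel, its bandwidth, the regularization parameter and the toy data are arbitrary illustrative choices:

```python
import numpy as np

def gauss_kernel(a, b, h=0.5):
    """Gaussian kernel matrix k(a_i, b_j) for 1-D inputs (a toy kernel choice)."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * h ** 2))

def expectile_regression(X, f, x_tilde, alpha=0.9, lam=1e-3, max_iter=100):
    """Newton-like iteration of Algorithm 1; returns the weights w of the
    estimator e_hat(x) = (1/n_tilde) * sum_j w_j k(x, x_tilde_j)."""
    n, m = len(X), len(x_tilde)
    K, Kt = gauss_kernel(X, x_tilde), gauss_kernel(x_tilde, x_tilde)
    w = np.zeros(m)
    for _ in range(max_iter):
        fit = K @ w / m                                     # current e_hat(X_i)
        A = np.where(f <= fit, alpha, 1 - alpha)            # diagonal of A(w)
        M = lam / m**2 * Kt + (K.T * A) @ K / (n**2 * m)
        grad = lam / m**2 * Kt @ w + (K.T * A) @ (K @ w) / (n**2 * m) \
               - (K.T * A) @ f / (n * m)
        w_new = w - np.linalg.solve(M, grad)                # update step (3.10)
        if np.allclose(w_new, w):                           # A(w) no longer changes
            return w_new
        w = w_new
    return w

# toy data: noisy sine with increasing noise level
rng = np.random.default_rng(0)
X = rng.uniform(0, 4, size=400)
f = np.sin(X) + 0.3 * rng.standard_normal(400) * (1 + X)
x_tilde = np.linspace(0, 4, 40)
w = expectile_regression(X, f, x_tilde, alpha=0.9)
e_hat = gauss_kernel(np.linspace(0, 4, 200), x_tilde) @ w / len(x_tilde)
```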
## 4 Risk aversion in stochastic processes
The considerations on the expectile in the preceding sections are based on random variables. The conditional variant in the expectile regression is achieved with a single \(\sigma\)-algebra. In what follows, we generalize the expectile for stochastic processes - in a discrete time setting first, and then in continuous time.
### Nested expectile in discrete time
Consider a stochastic process \(X=(X_{t_{i}})_{i=0}^{n}\) in discrete time, where \(0=t_{0}<t_{1}<\cdots<t_{n}=T\). For a dissection in time consider the increments
\[X_{T}=X_{t_{0}}+(X_{t_{1}}-X_{t_{0}})+\cdots+(X_{t_{n}}-X_{t_{n-1}}).\]
The stochastic process \(X\) is adapted to the filtration \(\mathcal{F}\), that is, \(X_{t}\) is \(\mathcal{F}_{t}\)-measurable for every \(t\geq 0\), so most often we may just choose \(\mathcal{F}_{t_{i}}\coloneqq\sigma(X_{t_{j}}\colon j\leq i)\). As well, we shall denote the sequence of \(\sigma\)-algebras by \(\mathcal{F}_{t_{0}\colon t_{n}}\).
In what follows, we shall associate a certain risk for the time period \(\Delta t\coloneqq t_{i+1}-t_{i}\) to come. For convenience in the presentation in what follows, we introduce the _rescaled_ version of the expectile as
\[\tilde{e}_{\beta}(\cdot)\coloneqq e_{\frac{1+\sqrt{\beta}}{2}}(\cdot)\]
(i.e., \(\alpha-\frac{1}{2}=\frac{\sqrt{\beta}}{2}\)). The main reason for the rescaling is that \(e_{\nicefrac{{1}}{{2}}}(X)=\mathds{E}\,X\), while \(\mathsf{AV@R}_{0}(X)=\mathds{E}\,X\), e.g. To ensure consistent parametrizations with other risk measures, we rescale the risk level so that \(\tilde{e}_{0}(X)=\mathds{E}\,X\) is associated with the risk-free assessment, while \(\tilde{e}_{1}(X)=\operatorname*{ess\,sup}X\) is the totally risk averse assessment. The varying dynamic (\(\sqrt{\beta}\) instead of \(\beta\)) turns out to be the natural choice in the continuous-time situation addressed below.
**Definition 4.1** (Nested expectile).: Let \(\left(\Omega,\mathcal{F}=(\mathcal{F}_{t_{i}})_{i=1}^{n},P\right)\) be a filtered probability space and \(\beta\colon\{t_{0},\ldots,t_{n}\}\to[0,1]\) be a stochastic process adapted to the filtration \(\mathcal{F}=(\mathcal{F}_{t_{i}})_{i=1}^{n}\). The nested expectile of the process with respect to the filtration \(\mathcal{F}_{t_{0}\colon t_{n}}\), denoted \(\tilde{e}_{\beta(\cdot)}^{\mathcal{F}_{t_{0}\colon t_{n}}}\), is
\[\tilde{e}_{\beta(\cdot)}^{\mathcal{F}_{t_{0}\colon t_{n}}}(X)\coloneqq X_{0}+ \tilde{e}_{\beta(t_{0},X_{t_{0}})\cdot(t_{1}-t_{0})}^{\mathcal{F}_{t_{0}}} \left(X_{t_{1}}-X_{t_{0}}+\cdots+\tilde{e}_{\beta(t_{n-1},X_{t_{n-1}})\cdot(t _{n}-t_{n-1})}^{\mathcal{F}_{t_{n-1}}}\left(X_{T}-X_{t_{n-1}}\right)\right),\]
Figure 2: The expectile \(\tilde{e}_{90\,\%}(\cdot)\) based on \(n=1000\) observations overestimates the conditional expectation
or slightly more explicitly
\[\tilde{e}_{\beta(\cdot)}^{\mathcal{F}_{t_{0}\colon t_{n}}}\left(X\right)=X_{0}+\tilde{e}_{\beta(t_{0},X_{t_{0}})\cdot(t_{1}-t_{0})}^{\mathcal{F}_{t_{0}}}\left(\begin{array}{c}X_{t_{1}}-X_{t_{0}}+\\ \cdots+\tilde{e}_{\beta(t_{n-2},X_{t_{n-2}})\cdot(t_{n-1}-t_{n-2})}^{\mathcal{F}_{t_{n-2}}}\left(\begin{array}{c}X_{t_{n-1}}-X_{t_{n-2}}\\ +\tilde{e}_{\beta(t_{n-1},X_{t_{n-1}})\cdot(t_{n}-t_{n-1})}^{\mathcal{F}_{t_{n-1}}}\left(X_{T}-X_{t_{n-1}}\right)\end{array}\right)\end{array}\right).\]
Nested Risk measures have been considered by Philpott et al. (2013); Philpott and de Matos (2012), e.g. In discrete time, fundamental properties of the Average Value-at-Risk have been elaborated by Xin and Shapiro (2012), although for deterministic risk rates only and for random variables instead of stochastic processes. The definition above is dynamic, as the risk rate \(\beta\) is an adapted process itself. Note that the risk rate at time \(t\) may be chosen to reflect the history of observations up to \(t\), it may depend on \(\{t_{i}\leq t\colon i=1,\ldots,n\}\).
We consider the following example, which prepares for the Wiener process.
**Example 4.2** (Random walk, cf. Pichler and Schlotter (2022)).: Consider a random walk process starting at \(X_{0}\) with independent Markovian increments
\[X_{t_{i+1}}-X_{t_{i}}\sim\mathcal{N}(0,t_{i+1}-t_{i}) \tag{4.1}\]
and constant risk rate \(\beta(t,x)=\beta\). With (4.1) and the asymptotic formula (2.1) for the normal distribution, we have that
\[X_{t_{i}}+\tilde{e}_{\beta\cdot(t_{i+1}-t_{i})}^{\mathcal{F}_{t_{i}}}(X_{t_{i+1}}-X_{t_{i}}) =X_{t_{i}}+\sqrt{t_{i+1}-t_{i}}\,\sqrt{\frac{2}{\pi}}\,\sqrt{\beta(t_{i+1}-t_{i})}+o\big{(}t_{i+1}-t_{i}\big{)}\] \[=X_{t_{i}}+\sqrt{\frac{2\beta}{\pi}}(t_{i+1}-t_{i})+o\big{(}t_{i+1}-t_{i}\big{)}. \tag{4.2}\]
Nesting these expressions as in Definition 4.1 gives the explicit expression
\[\tilde{e}_{\beta(\cdot)}^{\mathcal{F}_{t_{0}\colon t_{n}}}\left(X\right)=X_{0}+\sqrt{\frac{2\beta}{\pi}}\,T+o(T), \tag{4.3}\]
where \(T\) is the terminal time, while
\[\tilde{e}_{0}^{\mathcal{F}_{t_{0}\colon t_{n}}}\left(X\right)=X_{0}\]
for the risk rate \(\beta=0\). The amount attributed to the risk averse assessment in (4.3) thus accumulates linearly with time.
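Since the increments are independent, the nested value telescopes into \(n\) identical one-step expectiles, so (4.3) is easy to check numerically (SciPy assumed; the values of \(\beta\), \(T\) and \(n\) are arbitrary):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def normal_expectile(alpha, sigma):
    """Expectile of N(0, sigma^2) from the first order condition (1.3)."""
    f = lambda e: alpha * (sigma * norm.pdf(e / sigma) - e * norm.sf(e / sigma)) \
                - (1 - alpha) * (e * norm.cdf(e / sigma) + sigma * norm.pdf(e / sigma))
    return brentq(f, -10 * sigma, 10 * sigma)

beta, T, n = 0.5, 1.0, 1000
dt = T / n
alpha = (1 + np.sqrt(beta * dt)) / 2          # rescaled risk level for one step
one_step = normal_expectile(alpha, np.sqrt(dt))
print(n * one_step)                            # nested risk premium over [0, T]
print(np.sqrt(2 * beta / np.pi) * T)           # drift predicted by (4.3)
```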
_Remark 4.3_ (Tower property).: We emphasize as well that Definition 4.1 explicitly involves time, the risk \(\beta(t_{i})\cdot(t_{i+1}-t_{i})\) is associated to the time interval starting at \(t_{i}\) and ending at \(t_{i+1}\). With a further point in between, \(t_{i+1/2}\), the components of the risk functionals above are
\[\tilde{e}_{\beta(t_{i})\cdot(t_{i+1/2}-t_{i})}^{\mathcal{F}_{t_{i}}}\left(X_{t_{i+1/2}}-X_{t_{i}}+\tilde{e}_{\beta(t_{i+1/2})\cdot(t_{i+1}-t_{i+1/2})}^{\mathcal{F}_{t_{i+1/2}}}\left(X_{t_{i+1}}-X_{t_{i+1/2}}\right)\right)\]
instead of
\[\tilde{e}_{\beta(t_{i})\cdot(t_{i+1}-t_{i})}^{\mathcal{F}_{t_{i}}}\left(X_{t_{i+1}}-X_{t_{i}}\right).\]
With that, the risk rates accumulate over time: the accumulated risk rates are \(\beta(t_{i})(t_{i+\nicefrac{{1}}{{2}}}-t_{i})+\beta(t_{i+\nicefrac{{1}}{{2}}})(t_{i+1}-t_{i+\nicefrac{{1}}{{2}}})\) in the first case. This amount indeed coincides with \(\beta(t_{i})(t_{i+1}-t_{i})\) (the risk rate in the second case), provided that \(\beta(t_{i})=\beta(t_{i+\nicefrac{{1}}{{2}}})\), i.e., the risk assessment does not vary over time.
For the expectation, the corresponding property is the tower property, that is, \(\operatorname{\mathds{E}}\bigl{(}\operatorname{\mathds{E}}(X\mid\mathcal{G})\bigr{)}=\operatorname{\mathds{E}}X\).
### The nested expectile in continuous time
In order to assign risk to a stochastic process in continuous time, we consider the nested formulation introduced above for decreasing time-steps.
**Definition 4.4** (Nested expectile).: Let \(X=(X_{t})_{t\leq T}\) be a stochastic process adapted to \(\mathcal{F}=(\mathcal{F}_{t})_{t\leq T}\) and \(\beta=(\beta_{t})_{t\leq T}\) be cadlag (i.e., right continuous, with left limits) and adapted. With the nested expectile defined in Definition 4.1, the nested expectile is
\[\tilde{e}_{\beta}^{\mathcal{F}}(X)=\lim_{\max\Delta t\to 0}\tilde{e}_{\beta(\cdot)}^{\mathcal{F}_{t_{0}\colon t_{n}}}(X), \tag{4.4}\]
provided that the limit with respect to decreasing mesh sizes \(\max\Delta t\coloneqq\max_{i=1}^{n}t_{i+1}-t_{i}\) exists.
**Example 4.5** (State independent risk rates).: Example 4.2 generalizes for a state independent, but time dependent Riemann integrable risk rate \(\beta(x,t)=\beta(t)\). As above, we obtain that
\[X_{t_{i}}+\tilde{e}_{\beta(t_{i})\cdot(t_{i+1}-t_{i})}^{\mathcal{F}_{t_{i}}}(X_{t_{i+1}}-X_{t_{i}})=X_{t_{i}}+\sqrt{\frac{2\beta(t_{i})}{\pi}}(t_{i+1}-t_{i})+o\big{(}t_{i+1}-t_{i}\big{)}\]
and thus
\[\tilde{e}_{\beta(\cdot)}^{\mathcal{F}}(X)=X_{0}+\sqrt{\frac{2}{\pi}}\int_{0}^{T}\sqrt{\beta(t)}\,\mathrm{d}t\]
for \(\Delta t\to 0\), as \(\beta\) is Riemann integrable. Again, this is an explicit expression for the total risk aversion of the entire random walk process with increments (4.1).
**Definition 4.6** (Risk generator).: Let \((X_{t})_{t\geq 0}\) be a stochastic process adapted to the filtration \(\sigma(X)\) and \(\beta(t,x)\) be a risk rate. The risk generator is
\[\mathcal{G}_{\beta}f(t,x)\coloneqq\lim_{h\to 0}\frac{\tilde{e}_{\beta(t,X_{t})\cdot h}^{\sigma(X)}\left(f(X_{t+h})\,|\,X_{t}=x\right)-f(x)}{h},\]
provided that the limit exists.
Note that \(\mathcal{G}_{\beta}\) is an operator which maps the (smooth) function \(f\) to \(\mathcal{G}_{\beta}f\), which is a function again. In contrast to the risk-neutral generator, the risk generator \(\mathcal{G}_{\beta}\) is possibly not linear, as we will see in what follows.
**Proposition 4.7**.: _Let \(X_{t}\) follow the stochastic differential equation_
\[\mathrm{d}X_{t}=\mu(t,X_{t})\,\mathrm{d}t+\sigma(t,X_{t})\,\mathrm{d}W_{t} \tag{4.5}\]
_with respect to the Wiener process (Brownian motion) \((W_{t})_{t\geq 0}\) and the functions \(\mu\) and \(\sigma\) be Lipschitz, i.e., \(|\mu(t,x)-\mu(t,y)|+|\sigma(t,x)-\sigma(t,y)|\leq K|x-y|\) so that strong solutions of (4.5) exist. For a smooth function \(f\), the risk generator is_
\[\mathcal{G}_{\beta}f(t,x) =\frac{\partial f(t,x)}{\partial t}+\mu(t,x)\cdot\frac{\partial f (t,x)}{\partial x}+\frac{1}{2}\sigma(t,x)^{2}\cdot\frac{\partial^{2}f(t,x)}{ \partial x^{2}}\] \[\qquad+\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\left|\sigma(x,t)\cdot \frac{\partial f(t,x)}{\partial x}\right|. \tag{4.6}\]
Proof.: The proof follows Oksendal (2003, Section 7.3) (another valuable reference is Karatzas and Shreve (1991)).
Consider the stochastic process \(Y_{t}\coloneqq f(t,X_{t})\). From Ito's rule we deduce that
\[Y_{t+\Delta t}=Y_{t} +\int_{t}^{t+\Delta t}\left(\frac{\partial f}{\partial t}(s,X_{s} )+\mu(s,X_{s})\frac{\partial f}{\partial x}(s,X_{s})+\frac{1}{2}\sigma(s,X_{s })^{2}\frac{\partial^{2}f}{\partial x^{2}}(s,X_{s})\right)\mathrm{d}s\] \[+\int_{t}^{t+\Delta t}\sigma(s,X_{s})\frac{\partial f}{\partial x }(s,X_{s})\mathrm{d}W_{s},\]
where the second part is a martingale with increments following the Wiener process. Following the proof of the Ito formula in Oksendal (2003, p. 46ff), the functions \(\mu\) and \(\sigma\) are approximated by the constants \(\mu(s,X_{s})\approx\mu(t,X_{t})\) and \(\sigma(s,X_{s})\approx\sigma(t,X_{t})\) for \(s\in[t,t+\Delta t)\) so that
\[Y_{t+\Delta t}-Y_{t}= \left(\frac{\partial f}{\partial t}(t,X_{t})+\mu(t,X_{t})\frac{ \partial f}{\partial x}(t,X_{t})+\frac{1}{2}\sigma(t,X_{t})^{2}\frac{ \partial^{2}f}{\partial x^{2}}(t,X_{t})\right)\Delta t\] \[\qquad+\sigma(t,X_{t})\frac{\partial f}{\partial x}(t,X_{t}) \cdot\big{(}W_{t+\Delta t}-W_{t}\big{)}.\]
\(Y_{t+\Delta t}\) is, conditionally on \(\mathcal{F}_{t}\), a normally distributed random variable with mean
\[Y_{t}+\left(\frac{\partial f}{\partial t}(t,X_{t})+\mu(t,X_{t})\frac{ \partial f}{\partial x}(t,X_{t})+\frac{1}{2}\sigma(t,X_{t})^{2}\frac{ \partial^{2}f}{\partial x^{2}}(t,X_{t})\right)\Delta t\]
and variance
\[\left(\sigma(t,X_{t})\frac{\partial f}{\partial x}(t,X_{t})\right)^{2}\Delta t.\]
We deduce from (2.1) that
\[\tilde{e}^{X_{t}}_{\beta\cdot\Delta t}(Y_{t+\Delta t})-Y_{t} =\left(\frac{\partial f}{\partial t}(t,X_{t})+\mu(t,X_{t})\frac{ \partial f}{\partial x}(t,X_{t})+\frac{1}{2}\sigma(t,X_{t})^{2}\frac{ \partial^{2}f}{\partial x^{2}}(t,X_{t})\right)\Delta t\] \[\qquad+\left|\sigma(t,X_{t})\frac{\partial f}{\partial x}(t,X_{t })\right|\sqrt{\Delta t}\cdot\sqrt{\frac{8}{\pi}}\left(\frac{1+\sqrt{\beta(t, x)\,\Delta t}}{2}-\frac{1}{2}\right).\]
Now, by the definition of the risk generator (4.4), we get the assertion.
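Formula (4.6) can also be checked by Monte Carlo on a single Euler step of the SDE: with \(f(x)=x\), the generator reduces to \(\mu(t,x)+\sqrt{\frac{2}{\pi}\beta}\,|\sigma(t,x)|\). The sketch below uses geometric Brownian motion coefficients and illustrative parameter values (SciPy assumed):

```python
import numpy as np
from scipy.optimize import brentq

def expectile(sample, alpha):
    g = lambda t: alpha * np.mean(np.maximum(sample - t, 0)) \
                - (1 - alpha) * np.mean(np.maximum(t - sample, 0))
    return brentq(g, sample.min(), sample.max())

# geometric Brownian motion dX = mu*X dt + sigma*X dW, test function f(x) = x
mu, sigma, beta, x0, h = 0.05, 0.2, 0.3, 1.0, 1e-3
rng = np.random.default_rng(2)
x_next = x0 + mu * x0 * h + sigma * x0 * np.sqrt(h) * rng.standard_normal(2_000_000)
alpha = (1 + np.sqrt(beta * h)) / 2                     # rescaled level for one step
print((expectile(x_next, alpha) - x0) / h)              # empirical generator value
print(mu * x0 + np.sqrt(2 * beta / np.pi) * abs(sigma * x0))   # formula (4.6)
```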
_Remark 4.8_.: The drift (4.2) in Example 4.2 now turns out to be a specific case of the general relation revealed by (4.6), both reveal the same pattern: any risk averse assessment adds the additional drift term
\[\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\left|\sigma(x,t)\cdot\frac{\partial f(t,x)} {\partial x}\right|.\]
For the absolute value \(|\cdot|\) in the expression, the additional drift term cannot be negative and always points in one direction, the direction of risk. This is in line with risk aversion, as deviations in the different directions are associated with profits and (for the other direction) losses. Further, the coefficient \(\beta\) models the amount of local risk aversion.
The behavior (4.6) has been found with other risk measures as well, for example for the Entropic Value-at-Risk, cf. Pichler and Schlotter (2022). For this reason, various results from the literature extend to the nested expectile.
## 5 The risk averse control problem
While the classical theory on dynamic optimization builds on the risk-neutral expectation (cf. Fleming and Soner (2006)), we take risk into consideration to the optimal control problem and derive a risk averse variant of the Hamilton-Jacobi-Bellman equation. In what follows we derive the governing equations formally by adapting the presentation from Pichler and Schlotter (2022) for expectiles.
Consider the stochastic differential equation
\[\mathrm{d}X_{t}^{u}=\mu\big{(}t,X_{t}^{u},u(t,X_{t}^{u})\big{)}\mathrm{d}t+ \sigma\big{(}t,X_{t}^{u},u(t,X_{t}^{u})\big{)}\mathrm{d}W_{t} \tag{5.1}\]
driven by an adapted control policy \(u(t,X_{t})\), where \(u\) is a measurable function. It is the objective to minimize the risk-averse expectation of the accumulated costs,
\[\int_{t}^{T}c\big{(}s,X_{s},u(s,X_{s})\big{)}\mathrm{d}s+\Psi\big{(}X_{T} \big{)},\]
where \(\Psi(\cdot)\) is a terminal cost. Recall that the nested expectiles accumulate costs and risk so that it is the objective to minimize the value function
\[V^{u}(t,x)\coloneqq\tilde{e}_{\beta(\cdot)}^{\sigma(X)}\left(\int_{t}^{T}c\big{(}s,X_{s}^{u},u(s,X_{s}^{u})\big{)}\,\mathrm{d}s+\Psi\big{(}X_{T}^{u}\big{)}\;\middle|\;X_{t}^{u}=x\right)\]
among all policies \(u\in\mathcal{U}\) chosen in a suitable set, where \(X_{t}^{u}\) solves the stochastic differential equation (5.1) for the policy \(u\).
**Proposition 5.1**.: _The value function_
\[V(t,x)\coloneqq\inf_{u(\cdot)\in\mathcal{U}}V^{u}(t,x),\]
_solves the differential equation_
\[\frac{\partial V}{\partial t}(t,x)=\mathcal{H}_{\beta}\left(t,x,\,\frac{ \partial V}{\partial x},\frac{\partial^{2}V}{\partial x^{2}}\right) \tag{5.2}\]
_with terminal condition \(V(T,x)=\Psi(x)\), where_
\[\mathcal{H}_{\beta}(t,x,g,A)\,\coloneqq\sup_{u\in U}\left\{-c(t,x,u)-g\cdot\mu(t,x,u)-\frac{1}{2}A\,\sigma(t,x,u)^{2}-\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\left|g \cdot\sigma(t,x,u)\right|\right\} \tag{5.3}\]
_is the Hamiltonian, cf. Fleming and Soner (2006, Section IV, (3.2))._
To accept the assertion recall that
\[\frac{1}{h}\,\tilde{e}_{\beta(\cdot)}^{\sigma(X)}\left(\left.\int_{t}^{t+h}c\big(s,X_{s}^{u},u(s,X_{s}^{u})\big)\,\mathrm{d}s+V(t+h,X_{t+h})-V(t,x)\ \right|\ X_{t}^{u}=x\right)\xrightarrow[h\to 0]{}c(t,x,u)+\mathcal{G}_{\beta}V(t,x)\]
by the definition of the risk generator. While the left-hand side vanishes by the dynamic programming principle for the optimal policy, it follows for the right-hand side that
\[0=\inf_{u\in U}\big\{c(t,x,u)+\mathcal{G}_{\beta}V(t,x)\big\}.\]
With Proposition 4.7, this leads to the equation (5.2) with Hamiltonian (5.3).
The fundamental equation (5.2) is the Hamilton-Jacobi-Bellman (HJB) partial differential equation. It is essential to observe that the HJB equation has the additional term
\[\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\left|\sigma(t,x,u)\frac{\partial V}{ \partial x}\right|\]
involving the gradient; the total gradient in the Hamiltonian (5.2) thus comes with the coefficient
\[\mu(t,x,u)+\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\sigma(t,x,u)\cdot\mathrm{sign }\left(\sigma(t,x,u)\frac{\partial V}{\partial x}\right).\]
That is, risk aversion increases the trend \(\mu\) by the amount \(+\sqrt{\frac{2}{\pi}\beta(t,x)}\cdot\sigma(t,x,u)\), while leaving the volatility \(\sigma\) of the process unaffected.
In typical situations, \(\frac{\partial V}{\partial x}\) does not change its sign. For this reason, the classical theory of viscosity solutions on the existence of solutions of (5.2) applies directly, without modifications. Likewise, explicit solutions of specific equations are known. In these situations, the explicit results can be adapted to the risk-averse situation, cf. Pichler and Schlotter (2021) for applications from financial mathematics.
## 6 Summary
This paper exploits the unique properties of expectiles in stochastic and in dynamic optimization. We start by giving tight comparisons with common risk measures first. Next, we define the conditional expectile. The conditional expectile can be nested to extend the scope of risk functionals (risk measures) to stochastic processes in discrete and in continuous time. For the
random walk process or stochastic processes driven by a stochastic differential equation, explicit evaluations of the nested risk functional are available.
The risk generator is defined in analogy to the generator of a stochastic process. The risk generator involves an additional term which is caused by risk; with that, the risk generator is a non-linear differential operator. The aspect of risk augments the Hamiltonian via an additional term, which is responsible for risk only, and the risk-averse Hamilton-Jacobi-Bellman equation thus follows accordingly.
|
2310.02704 | Extending Isabelle/HOL's Code Generator with support for the Go
programming language | The Isabelle proof assistant includes a small functional language, which
allows users to write and reason about programs. So far, these programs could
be extracted into a number of functional languages: Standard ML, OCaml, Scala,
and Haskell. This work adds support for Go as a fifth target language for the
Code Generator. Unlike the previous targets, Go is not a functional language
and encourages code in an imperative style, thus many of the features of
Isabelle's language (particularly data types, pattern matching, and type
classes) have to be emulated using imperative language constructs in Go. The
developed Code Generation is provided as an add-on library that can be simply
imported into existing theories. | Terru StΓΌbinger, Lars Hupel | 2023-10-04T10:17:17Z | http://arxiv.org/abs/2310.02704v3 | # Extending Isabelle/HOL's Code Generator with support for the Go programming language
###### Abstract
The Isabelle proof assistant includes a small functional language, which allows users to write and reason about programs. So far, these programs could be extracted into a number of functional languages: Standard ML, OCaml, Scala, and Haskell. This work adds support for Go as a fifth target language for the Code Generator. Unlike the previous targets, Go is not a functional language and encourages code in an imperative style, thus many of the features of Isabelle's language (particularly data types, pattern matching, and type classes) have to be emulated using imperative language constructs in Go. The developed Code Generation is provided as an add-on library that can be simply imported into existing theories.
Keywords:Theorem provers Code generation Go programming language.
## 1 Introduction
The interactive theorem prover _Isabelle_ of the LCF tradition [12] is based on a small, well-established and trusted mathematical inference kernel written in Standard ML. All higher-level tools and proofs, such as those included in the most commonly-used logic _Isabelle/HOL_, have to work through this kernel.
Some (but by far not all) of the tools available to users in _Isabelle/HOL_ feel immediately familiar to anyone with experience in functional programming languages: it is possible to define data types (via the **datatype** command), functions (via **fun** and **function**), as well as type classes and instances akin to Haskell (via **class** and **instance**).
In contrast to most other languages, Isabelle is a theorem prover, which makes it easy to formalise and prove propositions about the programs written in Isabelle/HOL. To allow use of such programs outside of the proof assistant's environment, Isabelle comes equipped with a _Code Generator_, allowing users to extract source code in Haskell, Standard ML, Scala, or OCaml, which can then be compiled and executed. This translation of code works by first translating into an intermediate language called _Thingol_, which is shared between all targets; code of the intermediate language is then transformed into code in the individual
target languages via the principle of _shallow embedding_, that is, by representing constructs of the source language using only a well-defined subset of the target language [6, 7].
_Go_ is a programming language introduced by Google in 2009. It is a general-purpose, garbage-collected, and statically typed language [4]. In contrast to the existing targets of Isabelle's Code Generator, it is not a functional language, and encourages programming in an imperative style. However, it is a very popular language, and many large existing code bases have been written in it.
ContributionsThis paper extends Isabelle's Code Generation facility with support for Go. We demonstrate a translation scheme from programs in Thingol to programs in Go, and thereby allow Isabelle users to also generate Go code from their theory files. This scheme requires some novel approaches to the encoding of functional programming constructs, such as pattern matching, in an imperative programming language (SS4).
This extension to the Code Generator is supplied as a stand-alone theory file that can easily be imported into existing developments,1 making it immediately usable in other contexts.
Footnote 1: Available at [https://doi.org/10.5281/zenodo.8401392](https://doi.org/10.5281/zenodo.8401392)
or
The motivation for this work stems from G+D's internal use of both ecosystems: Isabelle for formalization purposes, and Go for the real-world implementation. This naturally led to a formalization gap, which this project sought to close (§5).
Related workThis paper describes the first attempt at translating Isabelle formalizations into a non-functional programming language. Prior work in leveraging imperative features in the Code Generator [2] has targeted the existing, functional programming languages, and thereby could reuse much of the existing infrastructure. There is also unpublished work on adding support for F# to the Code Generator [1], another functional language.
## 2 The intermediate language Thingol
Isabelle's Code Generation pipeline works in multiple stages. Crucially, all definitions made in Isabelle are first translated into an abstract intermediate language called _Thingol_, which is the last step shared between all target languages. The final stage then uses a shallow embedding to translate the Thingol program into source code of the target language.
Consequently, Thingol's design naturally reflects the features common to previous target languages, and is based on a simply-typed \(\lambda\)-calculus with ML-style polymorphism. Perhaps surprisingly, Thingol also supports type classes, which can be mapped easily to Haskell and Scala, but less easily to the other targets.
Thingol distinguishes between terms and declarations (Figure 1). Terms are simple \(\lambda\)-expressions with the addition of case expression for pattern matching on datatypes. Declarations are top-level items that introduce datatypes, functions, as well as type classes and their instances to a program.
While there is no formal semantics of Thingol, it can be thought of as a _Higher-Order Rewrite System_ (HRS) [10, 11]. It provides a convenient abstraction over the target languages' semantics. Because a HRS does not have a specified evaluation order, the Code Generator cannot guarantee total, but only partial correctness. Therefore, users--except when generating Haskell code--should be careful to avoid relying on lazy evaluation.
## 3 A fragment of Go
Go is a high-level, statically typed language. However, it is not a functional language, and differs in many aspects from the already-existing target languages of Isabelle's Code Generator.
However, many of the features unique to Go are not needed by the generator; since the translation works as a shallow embedding into the target language, it suffices to use those features of Go which can be used to represent the various statements of Thingol. Only those will be presented in the following, along with--if necessary--discussion why we did not pursue alternative features or solutions.
In effect, this leaves many of Go's most interesting features (e.g., channels or methods) entirely unused. The fragment used by the Code Generator could even be understood as a "functional subset" of the Go language, meaning that it picks only those features that closely align with those of the (functional) pre-existing code generation targets available in Isabelle as well as those of Thingol.
We also exclude any treatment of Go's package system, which is typically used to structure modular programs. For the purposes of our implementation, the user can choose between only one flat package containing the entire program, or one package per Isabelle theory.4 We only take into account that top-level names must start with an upper-case letter if they are to be accessible to other Go packages, which requires occasional renaming.

Figure 1: Thingol syntax overview
Footnote 4: Though in this case we require the theories' dependencies to be acyclic.
### Syntax
The syntax fragment given in Figure 2 is inspired by that of Featherweight Generic Go [5], but differs in some important aspects:
1. Methods are not included; instead we use "ordinary" top-level functions.
2. Go distinguishes syntactically between expressions and statements, whereas Featherweight Go does not. We retain the distinction and discuss conversion between them in §3.4.
3. Type parameters can be declared with an interface constraint. However, in our fragment the only available constraint is the unconstrained any, as Go's other constraints are not useful for our translation (SS4.5).
4. We use Go 1.18's syntax for generics, which differs from the proposal put forward in Featherweight Generic Go.
### Declarations
A (top-level) declaration \(D\) can define either a new type or function. Within one package, the order of declarations does not matter; any item may reference any other.
Figure 2: A fragment of Goβs syntax
Structure typesA declaration of the form type\(t_{S}\big{[}\overline{\alpha\ c}\big{]}\)struct\(\{\overline{A\ \tau}\}\) introduces a new type constructor with fields \(\overline{A}\) of types \(\overline{\tau}\) to the program. It may be polymorphic and take type arguments \(\overline{\alpha}\) which can be freely referenced within the \(\overline{\tau}\) types. Since it is not possible to omit any constraint \(c\) for the type variables \(\alpha\), we use the (unrestricted) any constraint.
Note that there is no analogous construct to Thingol's sum types; that is, it is not possible to have a structure type which has more than one constructor.
Interface typesA declaration of the form type\(t_{I}\big{[}\overline{\alpha\ c}\big{]}\)interface\(\{\}\) introduces a new (empty) interface type to the program. While Go supports non-empty interfaces containing methods, we do not use this feature.
Unlike interfaces in typical object-oriented languages such as Java, Go's interfaces are structural in nature: at runtime, any struct value conforms to an interface if (and only if) the struct implements a superset of the declared methods of the interface.
This implies that empty interfaces correspond to a "top" type that can denote arbitrary values. This perhaps odd choice will become obvious when introducing the translation scheme of data types (SS4.2). Similarly, non-empty interfaces are not useful for the translation of type classes (SS4.5).
FunctionsA declaration func\(f\big{[}\overline{\alpha\ c}\big{]}\)(\(\overline{x\ \tau}\)) (\(\overline{\gamma}\))\(\{\ s\ \}\) introduces a new function \(f\) to the program. The type parameters \(\overline{\alpha}\) can be referenced within both argument types \(\overline{\tau}\) and the return types \(\overline{\gamma}\).
Unlike in Thingol, a function cannot have multiple equations, nor can it perform pattern matching on its arguments. Instead there is only one list of argument names \(\overline{x}\), which are in scope for the (unique) function body \(s\).
An unusual feature of Go is its concept of functions which return more than one value:

    func foo() (bool, int, string) { return false, 42, "bar" }
    func main() { x, y, z := foo() }

At first glance this might seem analogous to the tuples present in Standard ML, with foo() returning a single value of the tuple (bool, int, string). But this is not the case, as Go has no concept of tuples. Instead, the function itself returns multiple values, which must be immediately assigned names (or discarded) at the function's call site. Thus a call like no_tuples := foo() is not allowed.
### Expressions
Expressions \(e\) can have several forms: variables, function application, and function abstraction are familiar from the \(\lambda\)-calculus. The others may require a bit more explanation.
Structure literal
A literal of the form \(t_{S}\big[\overline{\alpha}\big]\{\overline{e}\}\) gives a value of the struct type with name \(t_{S}\) applied to type arguments \(\overline{\alpha}\), i.e., it produces a new value of the type \(t_{S}\big[\overline{\alpha}\big]\) in which the fields are set to the evaluated forms of the expressions \(\overline{e}\). Note that the field names present in the declaration of a struct type are absent: while they could be used, Go does not require them. We omit them in the interest of shorter code.
Field selectionAn expression \(e.A\) selects the field named \(A\) of an expression \(e\), which must have a fitting struct type \(\tau_{S}\) that was declared with a field name \(A\), and returns the value of that field. This is the only place outside a structure type's declaration that field names are used.
Type conversionAn expression \(\tau_{I}(e)\) evaluates to a value of the interface type \(\tau_{I}\) which contains the evaluated form of \(e\) as its inner value. The original type \(\sigma\) of \(e\) is kept and will not be erased at runtime; it can be recovered using a type assertion statement (see the next section). This expression can also be thought of as an "upcast".
### Statements
This section introduces the statements that are used by the Code Generator. All statements of the defined fragment will always end in a return, and thereby return from the current function. We utilize this convention so that it is always possible to embed a statement into an expression by wrapping it into an immediately-called function abstraction func () \(\tau\) {\(s\)}(). All forms except for the type assertion should appear familiar from similar languages.
ReturnThe return keyword evaluates one or more expressions \(\overline{e}\), then returns from the current function. The number of expressions given must match the number of return types given in the function's head.
If statementA statement of the form if (\(e\)) {\(s_{1}\)}; \(s_{2}\) will evaluate \(e\), which must have a boolean type. If it evaluates to the built-in value true, then \(s_{1}\) is evaluated. Since all statements must end in a return, it will then return from the current function. Otherwise, \(s_{2}\) is evaluated.
This statement could equivalently be written as if (\(e\)) {\(s_{1}\)} else {\(s_{2}\)}, which has the same semantics. The version without the explicit else branch is preferred to limit nesting of statements in the generated code.
Type assertionA statement of the form \(x\), \(y\) := \(e\).(\(\sigma\)) can be thought of as the inverse operation of type conversions, i.e., a "downcast". For an expression \(e\) of an interface type \(\tau_{I}\), the assertion checks whether the inner value contained within the interface value has type \(\sigma\). The result of the check is assigned to the boolean variable \(y\). If it was successful, \(x\) will be bound to that inner value; if not, it will be nil, Go's null pointer. Note that the type of \(x\) is \(\sigma\).
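To illustrate the interplay of type conversions and type assertions, here is a small hand-written sketch; the types Animal and Cat are purely illustrative and are not part of the translation scheme:

```go
type Animal interface{}         // an empty interface acts as a "top" type
type Cat struct{ Name string }

func demo() string {
	var a Animal = Animal(Cat{Name: "Felix"}) // type conversion ("upcast") into the interface
	c, ok := a.(Cat)                          // type assertion ("downcast"); ok reports success
	if ok {
		return c.Name // c has static type Cat here
	}
	return ""
}
```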
## 4 Translation scheme
In this section, we will discuss the concrete translation schemes employed for Thingol programs. In the interest of brevity, we omit "uninteresting" steps, i.e., purely syntactic mappings, and focus on the non-trivial steps.
Likewise we will not discuss the treatment of variable names, which we preserve wherever possible; the only change made is to ensure that those names which should be exported are made upper-case (as required by Go). In some cases the translation will introduce new names; as Isabelle's Code Generator already provides a way to generate guaranteed-unused and therefore "fresh" names when needed, their choice will also not be discussed.
### Types, terms and statements
We define three translations \(\mathrm{type}(\tau)\), \(\mathrm{expr}(t)\), and \(\mathrm{stmt}(t)\). The first is a straightforward syntactic mapping of types. In the remainder of the chapter, we will informally equate Thingol types \(\tau\) with their Go translation \(\mathrm{type}(\tau)\) and write both simply as \(\tau\). For now, we exclude any mapping of common types (e.g. integers) to built-in Go types; this topic will be revisited later (SS4.6).
The other two translations--expr and stmt--are used for converting Thingol terms into Go expressions and statements, respectively. Which one is used thus depends on what Go expects in each particular context; for example, terms used as function arguments use \(\mathrm{expr}\); a term which is a function body uses stmt. Semantically, \(\mathrm{expr}\) and \(\mathrm{stmt}\) are related roughly as follows:
\[\mathrm{stmt}(t) \equiv\mathtt{return}\ \mathrm{expr}(t)\mbox{;}\] \[\mathrm{expr}(t) \equiv\mathtt{func}()\ \tau\ \{\mathtt{stmt}(t)\}\mbox{()}\]
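For illustration, here is a small hand-written Go sketch of the second identity; the function pick and the variable cond are ours and not part of the translation scheme:

```go
// A statement (here an if/return) is embedded into expression position by
// wrapping it in an immediately-called function literal, as described above.
func pick(cond bool) int {
	x := func() int {
		if cond {
			return 1
		}
		return 2
	}()
	return x
}
```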
AbstractionsThe translation of a \(\lambda\)-abstraction \(\lambda(x::\tau)\). \((t::\gamma)\) demonstrates the distinction between expressions and statements:
\[\mathrm{expr}(\lambda(x::\tau).\ \ (t::\gamma))=\mathtt{func}\ (x\ \tau)\ \ \gamma\ \ \{\mathtt{stmt}(t)\}\]
Although curried abstractions are unusual in Go, no effort is made to uncurry them (this is in contrast with the treatment of top-level functions, which are uncurried, see §4.4).
Applications of top-level functionsApplications \(t\) are more tedious to translate. Since definitions of top-level functions are uncurried (SS4.4), we first have to check if \(t\) is a call to such a function, i.e., if \(t\) has the shape \(\big{(}\cdots((f[\overline{\tau}_{i}]\ \ a_{1})\ \ a_{2})\cdots\big{)}\ \ a_{n}\), where \(f\) references a top-level function or data type constructor which takes \(m\) arguments.
If so, we have to consider three cases:
* Fully-saturated application: all arguments are passed into \(f\)
* Unsaturated application: we need to \(\eta\)-expand
* Over-saturated application. This occurs if \(f\) returns another function, with \(a_{1}\) to \(a_{m}\) being the immediate arguments to \(f\) and any remaining \(a_{m+1}\) to \(a_{n}\) as curried arguments. The latter will be passed individually.
As will be described later (SS4.5), translating Isabelle's type classes through the dictionary construction may introduce additional (value-level) parameters for a top-level function, and corresponding additional arguments \(d_{1}\) to \(d_{r}\) to each application of the same function. These are inserted before the user-defined parameters.
Altogether, we arrive at the following scheme when \(f\) references a function:
\[\mathrm{expr}(t)=f[\tau_{1},\ldots,\tau_{i}]\,(d_{1},\ldots,d_{r},a_{1},\ldots,a_{m})\,(a_{m+1})\cdots(a_{n})\]
Finally, if \(f\) references a data type constructor of a type \(\tau\) rather than a function, the case \(n>m\) cannot occur. However, we must wrap the constructor into a type conversion to type \(\tau\), and use slightly different syntax for passing the arguments:
\[\mathrm{expr}(t)=\kappa\big(f[\tau_{1},\ldots,\tau_{i}]\,\{d_{1},\ldots,d_{r},a_{1},\ldots,a_{m}\}\big)\]
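As a rough hand-written sketch (not generated output), suppose a Thingol function \(f\) with two parameters is translated to an uncurried Go function F; the three cases above then look roughly as follows, where all names are ours:

```go
func F(a int, b int) int { return a + b }

func applications() {
	_ = F(1, 2)                             // fully saturated application: f 1 2
	g := func(b int) int { return F(1, b) } // unsaturated f 1, completed by eta-expansion
	_ = g(2)
	// An over-saturated application can only occur when F itself returns a
	// function; the surplus arguments are then passed one by one, as F(...)(a3).
}
```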
Lambda applicationsIf an application \(t=t_{1}\;\;t_{2}\) is not a call to a top-level function, then the translation is straightforward:
\[\mathrm{expr}(t_{1}\;\;t_{2})=\mathtt{expr}(t_{1})(\mathtt{expr}(t_{2}))\]
### Data types
Recall that a data type \(\kappa\) defined in Thingol consists of type parameters \(\overline{\alpha}_{i}\) and constructors \(\overline{f}_{i}\). Each \(f_{i}\) gets translated into its own separate struct type.
As was discussed in SS 3, Go knows no sum types, thus the translation has to simulate their behaviour using other means. For a data type, we generate an additional unconstrained interface type \(\delta\), meant to represent any constructor \(f_{i}\) of \(\kappa\).
Since \(\delta\) is left unconstrained, there is no language-based guarantee on the Go side that a value \(x\) of type \(\delta\) was actually constructed using one of the constructors \(f_{i}\), i.e., that the inner value of \(x\) actually is of one of the struct types. We assume that the generator operates only already type-correct intermediate code, so it will still never go wrong. However, programmers writing wrapper code to interact with the generator's output must be careful not to pass values of a wrong type, lest they produce run-time errors.
If the data type \(\kappa\) has exactly one constructor \(f_{1}\), then no additional interface type \(\delta\) is generated.
ConstructorsDefining a struct type for an individual constructor is straightforward. A constructor \(f\) with fields of types \(\tau_{1}\) to \(\tau_{i}\) is translated into Go as a struct with the same name and fields: type \(f\)struct \(\{\overline{A}\;\;\tau_{i}\}\), where the \(\overline{A}_{i}\) are newly-invented names for each of the fields, as no field names are present in Thingol. Note that those generated field names are entirely unimportant (access happens only through destructors, and the names are not required when constructing a value); the only requirement imposed on them is that each \(\overline{A}_{i}\) of the same struct are distinct.
DestructorsAlong with each constructor's struct type, the translation generates a destructor function \(f\_\)dest which will be used as a helper function in the translation of Thingol's case expressions. Those destructors are synthetic and not present in the Thingol representation. Their sole purpose is to unpack and return the individual fields in a struct type, exploiting Go's multiple return types.
func \(f\_\)dest(q \(f\)) (\(\tau_{1}\), ..., \(\tau_{n}\)) { return q.\(A_{1}\), ..., q.\(A_{n}\) }

We need those destructors for technical reasons (§4.3). Note that they ignore the interface type \(\kappa\) and operate on the individual structure types \(\overline{f}\) directly; the pattern match translation thus has to unpack the inner value before invoking the destructor.
ExampleAs a simple example, consider the definition of natural numbers in Isabelle, here reusing Thingol pseudo-syntax (Figure 1):
data nat = Zero | Suc nat

Translated into Go, this produces the following output:

    type Nat any;
    type Zero struct { };
    type Suc struct { A Nat; };
    func Suc_dest(p Suc) (Nat) { return p.A; }

Note the use of the unconstrained interface type Nat to represent a faux sum type that is supposed to only contain Zero and Suc values. Constructing the number 1 would look as follows: Nat(Suc{Nat(Zero{})}). While type-correct according to Go, the value Nat(Suc{nil}) could easily cause run-time exceptions in other parts of the generated code, particularly where it simulates a pattern match on it (§4.3). Programmers must thus be careful not to introduce such values when hand-writing wrapper code. Furthermore, the translation omits the destructor for Zero, because the structure has no fields that could be unpacked.
A slightly more involved example is the polymorphic list datatype. In Isabelle, it is defined as follows:
data \(\alpha\) list = Nil | Cons \(\alpha\) (\(\alpha\) list)

The resulting Go code now contains generic annotations:

    type List[a any] interface {};
    type Nil[a any] struct { };
    type Cons[a any] struct { A a; Aa List[a]; };
    func Cons_dest[a any](p Cons[a]) (a, List[a]) { return p.A, p.Aa; }
### Case expressions
Thingol's case expressions implement pattern matching on a value, in a way which will be immediately familiar to anyone acquainted with other functional languages such as Standard ML or Haskell: they inspect a term \(t\) (the expression's _scrutinee_) and match it against a series of clauses \(\overline{p_{i}\to b_{i}}\). Each clause contains a pattern \(p_{i}\) and a term \(t_{i}\) that is to be evaluated if the pattern matches the scrutinee. Syntactically, patterns are a subset of terms; they can only be composed of variables and fully-satisfied applications of data type constructors to sub-patterns \(f\)\(\overline{p}_{i}\) constructed of the same subset.
Since Go has no comparable feature, a data type pattern in a case expression is translated into a series of (possibly nested) if-conditions and calls to destructor functions. The bodies of the innermost if-condition then correspond to the translated terms \(t_{i}\), which must be in statement-form, i.e., ending in a return-statement. Thus, if the pattern could be matched, further patterns will not be executed. Naturally, using return in this manner implies that a case expression must always either be in tail position, or else be wrapped into an anonymous function if it does not (SS3).
If the pattern did not match, execution will either continue with the next block of if-conditions generated from the next clause, or encounter a final catch-all call to Go's built-in panic function, which aborts the program in case of an incomplete pattern where no clause could be matched (incomplete patterns are admissible in Isabelle's logic, see Hupel [8] for a detailed description). This panic can also be encountered if an external caller exploited the lossy conversion of sum types as described above and supplied, e.g., a nil value as a scrutinee.
Taken together, an entire case expression is translated as a linear sequence of individual clauses, followed by a panic:
\[\text{stmt(case $t::\tau$ of $[\overline{p\to b}]$)}=\overline{\text{stmt($p\to b$)}};\text{ panic("Match\_failed")};\]
Let us now consider the concrete translation for variable and constructor patterns.
Variable patternWe assign the scrutinee \(t\) to the variable \(x\) to make it available in the scope of \(b\).
\[\text{stmt}(x\to b)=\{x\ \ :=\ \text{expr}(t)\text{; \ stmt}(b)\}\]
Constructor patternThe pattern is of the form \(f[\overline{\tau}_{i}][\overline{s}_{k}]\). If all sub-patterns \(\overline{s}_{k}\) are variable patterns, the translation is once again straightforward:
\[\text{stmt}(f[\overline{\tau}_{i}][\overline{s}_{k}]\to b)=\{m\text{,}A_{1} \text{,}\ldots\text{,}A_{k}\text{:= }f\_\text{dest}(t)\text{; if ($m$) \{stmt}(b)\}\}\]
Nested constructor patterns are translated in the same way, but pushed inwards into the body of the if-statement generated above:
\[\text{stmt}(f[\overline{\tau}_{i}][\overline{s}_{k}]\to b) =\{m\text{,}A_{1}\text{,}\ldots\text{,}A_{k}\text{:= }f\_\text{dest}(t)\text{; if ($m$) \{inner\}\}\] \[\text{inner} =\text{stmt(case $A_{1}$ of $s_{1}\to(\ldots\to(\text{case $A_{k}$ of $s_{k}\to b$))}$)}\]
In other words, the sub-patterns are treated as if they were further nested case expressions. This results in a total nesting depth of one level per constructor.
Within the innermost if, the body \(b\) of the pattern's clause is translated as statement to ensure it returns from the current function.
Optimizing the nesting levelThe translation described in this section can translate arbitrary patterns, but comes at the price of potentially exponential code blow-up. Even a single pattern consisting of just a constructor and \(k\) fields, none of which are proper patterns, will still produce \(k\) levels of nested if-statements. But if the fields themselves are again data type constructors with sub-patterns, the number of nested levels quickly increases further.
In real-world applications, we can reduce the blow-up by optimizing constructor patterns without arguments. Instead of calling a destructor function, we can emit an equality check, since there are no fields to extract. Multiple equality checks can be joined together using Go's conjunction operator &&.
ExampleConsider a function hd2 that takes a list and returns (optionally) the second element of the list. Using the Thingol pseudo-syntax, this can be defined as follows (assuming the standard definition of the option type):

fun hd2 :: \(\forall\alpha.\alpha\,\texttt{list}\Rightarrow\alpha\,\texttt{option}\) where hd2 \(xs\) = case \(xs\) of Nil \(\Rightarrow\) None | Cons \(x\) Nil \(\Rightarrow\) None | Cons \(x\) (Cons \(y\) \(xs\)) \(\Rightarrow\) Some \(y\)

This is translated into Go as follows:

    func Hd2[a any] (x0 List[a]) Option[a] {
      if (x0 == (List[a](Nil[a]{}))) {
        return (Option[a](None[a]{}));
      }
      q, m := x0.(Cons[a]);
      if (m) {
        _, c := Cons_dest(q);
        if (c == (List[a](Nil[a]{}))) {
          return (Option[a](None[a]{}));
        }
      }
      q, m := x0.(Cons[a]);
      if (m) {
        _, p := Cons_dest(q);
        q, m := p.(Cons[a]);
        if (m) {
          ya, _ := Cons_dest(q);
          return (Option[a](Some[a]{ya}));
        }
      }
      panic("match_failed");
    }

This piece of generated code benefits from the optimization described above (in the first and second clauses). Also, observe that some bound variables are unused and have to be generated as _, because unused variables are a compile error in Go.
### Top-level functions
Unlike lambdas that occur within terms, top-level functions in Thingol can have multiple clauses and pattern-match on their arguments, neither of which is supported in Go. It is thus necessary to translate them differently: all equations of the same function will have to be merged, with the pattern matching on their parameters again pushed inwards into the then combined, single function body.
Further, treating them differently from in-term lambda expressions also allows the generator to uncurry them, creating code that is much closer to an idiomatic style in Go.
Merging multiple clausesThingol allows Haskell-style function definitions comprising multiple clauses. But in Go, all parameters of functions must be simple variables. Thus, if any of the parameter patterns \(\overline{p_{i}}\) is a proper pattern, a fresh name \(x_{i}\) for it is invented. Likewise, if a parameter is a variable binding instead of a proper pattern, but has multiple different names in two clauses, the name \(x_{i}\) used in the first clause is picked as the name of the parameter in Go.
Pattern matchingThe combined function body then consists of a pattern match translation as described earlier.5 Each equation is then treated as a clause of a synthetic case-expression; since functions can pattern match on multiple parameters, we again push inwards and translate as if a nested series of case-expressions were present.
Footnote 5: The already-existing Scala target uses a similar transformation.
ExampleThe following Thingol definition is semantically equivalent to the example from the previous section, but written using multiple equations. Due to the transformation applied by the Code Generator, the generated Go code is identical.
fun hd2 :: \(\forall\alpha.\alpha\,\texttt{list}\Rightarrow\alpha\,\texttt{option}\) where
hd2 Nil = None
hd2 (Cons \(x\) Nil) = None
hd2 (Cons \(x\) (Cons \(y\) \(xs\))) = Some \(y\)
Special case: top-level constants
Unsurprisingly, Thingol accepts top-level definitions that are not functions, for example:6
Footnote 6: Readers familiar with Isabelle syntax may be surprised about this notation; while Isabelle/HOL distinguishes between the **fun** and **definition** keywords, Thingol has no such distinction.
fun a :: nat where a = 10
For those, we have to battle yet another Go restriction: Go admits top-level variable declarations, but only for monomorphic types, and it disallows function calls in their definitions. Therefore, we must treat such Thingol definitions as if they were nullary functions. While this changes nothing of the semantics of the translated program, it does incur a (potentially significant) runtime cost: constants will be evaluated each time they are used, instead of only once when the program is initialized.
### Dictionary construction
On the surface, Isabelle's Haskell-style type classes and Go's interfaces share many of the same features, and are sometimes considered to be near-analogous [3]. However, translating type classes into interfaces does not work. This is caused by an implementation concern: Go directly compiles methods into virtual tables for dynamic dispatch. An interface in Go declares multiple _methods_, where each method type must take the generic value as zeroth (i.e. implicit) parameter. Isabelle (and Haskell) do not have such a restriction, as can be observed from the following examples, which are valid in Isabelle:
class foo where foo :: unit \(\Rightarrow\)\(\alpha\)
class bar where bar :: (\(\alpha\)\(\Rightarrow\)\(\alpha\)) \(\Rightarrow\) unit
Naively translated into Go, both would be rejected by its compiler. The first class declares a function that does not take an \(\alpha\) parameter at all, whereas the second class' function does not take a simple \(\alpha\) parameter (but a parameter whose type contains \(\alpha\)).
As a practical example, consider that while the class for semigroups would be admissible as an interface (having a single method (+) :: \(\alpha\Rightarrow\alpha\Rightarrow\alpha\)), monoids would not be (unit :: \(\alpha\) does not even have any parameters).
To avoid the additional complexity of treating all these cases separately, we resort to using a dictionary construction [7, 8] in all cases. Since the existing SML target of the Code Generator has to deal with the same issue, all required infrastructure is already in place: Thingol's terms come with enough annotations to resolve all type class constraints during translation and replace the implicit instance arguments of functions making use of type classes by explicit dictionary values, which we represent as one data type per type class.
Thus only relatively few things are left to do in Go:
1. declare a data type for each type class, called its _dictionary_ type
2. translate type class constraints on functions into explicit function arguments of dictionary types
3. translate type class instances into either a value of the type class's dictionary type, or, if the instance itself takes type class constraints, to a function producing such a value when given values of dictionary types representing these constraints
4. any time a top-level function is used, the already-resolved type class constraints must be given as explicit arguments
Example: Consider the following definition of a semigroup together with a function operating on it:
class semigroup where (+) :: \(\alpha\Rightarrow\alpha\Rightarrow\alpha\)
class monoid \(\subseteq\) semigroup where zero :: \(\alpha\)
fun sum :: \(\alpha\) :: monoid list \(\Rightarrow\)\(\alpha\) where sum _xs_ = fold (+) _xs_ zero
The generated code looks as follows (ignoring the list data type):

    type Semigroup[a any] struct {
      Plus func(a, a) a
    }

    type Monoid[a any] struct {
      Semigroup_monoid Semigroup[a]
      Zero func () a
    }

    func Sum[a any] (a_ Monoid[a], xs List[a]) a {
      return Fold[a, a](
        func (aa a) func(a) a {
          return func (b a) a {
            return a_.Semigroup_monoid.Plus(aa, b);
          };
        }, xs, a_.Zero());
    }
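For comparison, hand-written wrapper code calling the generated Sum function could look as follows; the dictionary value intMonoid is ours and is not part of the generated output:

```go
// A dictionary value for the additive monoid on int, passed explicitly to Sum
// as an ordinary argument (cf. step 4 above).
var intMonoid = Monoid[int]{
	Semigroup_monoid: Semigroup[int]{Plus: func(a int, b int) int { return a + b }},
	Zero:             func() int { return 0 },
}

func total(xs List[int]) int {
	return Sum[int](intMonoid, xs)
}
```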
### Mapping high-level constructs
So far, the shallow embedding we have presented produced code with no dependencies on the Go side, with only the built-in constructs panic and && used.
All higher-level constructs used by programs (such as lists, numbers) must thus be "brought along" from Isabelle, and are translated wholesale exactly as they are defined in their formalisations. While this guarantees correctness, it is highly impractical for real-world applications: for example, natural numbers as defined in Isabelle/HOL (unary Peano representation, §4.2) require linear memory and quadratic runtime even for simple operations like addition.
Luckily, the Code Generator already has a solution for this conundrum in the form of _printing rules_, which can map Isabelle's types and constants to user-supplied names in the target language. We have set up printing rules mapping:
* Isabelle/HOL's booleans to booleans in Go
* numbers to arbitrary-precision integers (via Go's math/big package)
* strings of the String.literal type to strings in Go
Unfortunately, linked lists cannot be mapped, because Go does not feature a standard implementation of linked lists.
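To illustrate the second of these printing rules, one hypothetical target of such a mapping is a small helper built on math/big; the name Plus_int is ours and is only meant as a sketch:

```go
import "math/big"

// Addition of Isabelle integers mapped to arbitrary-precision arithmetic,
// avoiding the unary Peano representation discussed above.
func Plus_int(a *big.Int, b *big.Int) *big.Int {
	return new(big.Int).Add(a, b)
}
```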
## 5 Evaluation
Even though Go is a very different programming language compared to the other targets Haskell, Scala, OCaml, and SML, we have achieved almost full feature parity for the translation described in this paper. This means that almost any Isabelle construct can be cleanly mapped to a corresponding encoding in Go. We have confirmed that in two case studies:
Existing formalisationAt G+D, we use Isabelle for a substantial formalization of various graph algorithms powering a financial transaction system. The purpose of the formalization is to provide real-world security guarantees, such as inability to clone money. We have previously used the Code Generator to produce Scala code as a reference implementation, combined with some hand-written wrapper code and basic unit tests.
As a simple evaluation of Go code generated from the same Isabelle theories, we re-wrote the unit tests and the necessary wrapper code in Go. We obtained equivalent results and could not find bugs in the Code Generator or unintended behaviour of the code it produced. However, the task of porting the wrapper code from Scala proved to be error-prone: many explicit type annotations are needed in the code (in particular, every usage of a data type constructor requires at least one), and not all incorrect type annotations will cause compilation of the wrapper code to fail. Instead, if a data type's constructor is annotated with a different interface type, the assumption underlying the translation of case-expressions will fail, resulting in a "match failed" error at runtime.
Another awkward source of problems when integrating the generated Go code with a larger code base is that Go's standard library lacks basic functional data
structures, such as lists or tuples (§4.6). This means that the generated code is unidiomatic and relies on manual conversions, e.g. between arrays and lists.
HOL-Codegenerator_Test Isabelle's distribution contains a Code Generator test session which is used as a self-check for the various target languages of the Code Generator. For this paper, a single export command is relevant, which is meant to export a considerable chunk of Isabelle/HOL's library as a stress-test for the Code Generator. This has worked as expected, with the entirety of the stress-test successfully compiling in Go.
Trusted code baseJust like for the other target languages, our implementation is part of the _trusted code base_, i.e., bugs in the Code Generator may lead to bugs in the generated program, and will not be caught by Isabelle's kernel. We did not have to enlarge that trusted code base, therefore promising similar correctness to the other targets. More ambitious code printing, however, may change that picture, since such rules may have to assume more constructs in Go.
## 6 Conclusion
We have presented a translation from Thingol by shallow embedding into a fragment of Go, and implemented it as a target language for Isabelle's code generation framework. The new target language has been used with success to port an existing Isabelle formalisation that was only targeting Scala to additionally target Go. The implementation is readily usable with a standard Isabelle2023 installation and requires merely importing an additional theory file. The suite of existing tests of Isabelle's Code Generator is also supported.
Future workThe two most promising areas of future work are: leveraging Go's imperative nature by tightly integrating it with Imperative/HOL [2]; and generating more idiomatic Go code through custom code printing rules. Both can be implemented using similar mechanisms. However, substantial changes to Isabelle's code generation infrastructure are required, because Go demands more type annotations than other target languages.
#### 6.0.1 Acknowledgements
The authors would like to thank Florian Haftmann for his contributions to the development. We appreciate the comments suggested by Cornelius Diekmann, greatly improving the presentation in this paper. This work has been partially supported by the Federal Ministry of Education and Research (BMBF), Verbundprojekt CONTAIN (13N16582).
|
2307.08180 | Fixed point Floer cohomology and closed-string mirror symmetry for nodal
curves | We show that for singular hypersurfaces, a version of their genus-zero
Gromov-Witten theory may be described in terms of a direct limit of fixed point
Floer cohomology groups, a construction which is more amenable to computation
and easier to define than the technical foundations of the enumerative geometry
of more general singular symplectic spaces. As an illustration, we give a
direct proof of closed-string mirror symmetry for nodal curves of genus greater
than or equal to 2, using calculations of (co)product structures on fixed point
Floer homology of Dehn twists due to Yao-Zhao. | Maxim Jeffs, Yuan Yao, Ziwen Zhao | 2023-07-17T00:54:38Z | http://arxiv.org/abs/2307.08180v1 | # Fixed Point Floer Cohomology and Closed-String Mirror Symmetry for Nodal Curves
###### Abstract
We show that for singular hypersurfaces, a version of their genus-zero Gromov-Witten theory may be described in terms of a direct limit of fixed point Floer cohomology groups, a construction which is more amenable to computation and easier to define than the technical foundations of the enumerative geometry of more general singular symplectic spaces. As an illustration, we give a direct proof of closed-string mirror symmetry for nodal curves of genus greater than or equal to 2, using calculations of (co)product structures on fixed point Floer homology of Dehn twists due to Yao-Zhao [13].
###### Contents
* 1 Introduction
* 1.1 Statement of results
* 1.2 Homogeneous coordinate rings
* 2 \(B\)-model calculation
* 2.1 Hochschild-Kostant-Rosenberg theorem and balanced vector fields
* 2.2 Sheaf cohomology of balanced vector fields
* 2.3 Mirrors to open surfaces
* 2.4 Homogeneous coordinate rings
* 3 Standard Lefschetz fibrations
* 4 Symplectic cohomology of singular hypersurfaces
* 4.1 Compatibility with wrapping
* 4.2 Section-counting maps
* 4.3 Twisted closed-open maps
* 4.4 Twisted Hochschild cohomology
* 5 Background on the product on fixed point Floer homology
* 5.1 Fixed point Floer homology for Dehn twists on punctured Riemann surfaces
* 6 Computation of the Seidel class
* 6.1 The exact case
* 6.2 The non-exact case
* 7 A-model computations of symplectic cohomology
* 7.1 Single Dehn twist on a closed surface
* 7.2 The case of multiple Dehn twists
* 7.3 Dehn twists on punctured Riemann surfaces
* 7.4 Homogeneous coordinate ring for \(\phi^{2}\)
## 1 Introduction
Singular varieties arise frequently in mirror symmetry, even in the simplest cases, such as mirrors of smooth algebraic curves. While studying the enumerative geometry of such singular varieties intrinsically is certainly possible, outside of the orbifold case it is often technically demanding and not straightforward to compute (see for instance [12, 19, 13, 14, 15, 16, 17, 18, 19] though this list is certainly not exhaustive). An alternative proposal due to Auroux and developed in [10], uses a more algebraic and categorical approach to defining symplectic invariants of singular hypersurfaces and complete intersections. Passing to the closed-string setting by taking Hochschild cohomology suggests defining new enumerative invariants in terms of direct limits of fixed point Floer cohomology groups of nearby fibers. In the case of smooth algebraic curves of genus greater than or equal to \(2\), calculations of the (co)product structures on fixed point Floer homology for Dehn twists have been carried out by [16]. Their work allows us to carry out this direct-limit construction explicitly and produce precisely the algebra structures predicted by mirror symmetry.
Homological mirror symmetry for (smooth) curves has been studied extensively: beginning with [12] and [11], as well as [14, 15, 16], and [17, 18]. However, it seems that enumerative mirror symmetry for curves has yet to be studied outside of the genus-\(1\) case (see [19, 20]).
In SS1.1 we explain the background in mirror symmetry and the closed-string predictions for nodal curves; in SS2 we carry out the calculation of the \(B\)-model invariants on the mirror side. After setting up our conventions for Lefschetz fibrations in SS3, in SS4, we define the Seidel class and a twisted closed-open map, and prove the twisted closed-open map takes the Seidel class to the Seidel natural transformation. Finally, in SS6, we calculate the Seidel class and in SS7 use the results of [16] to compute the direct limit along multiplication by this class.
In future work we will study the algebraic structure of fixed point Floer cohomology for symplectomorphisms of algebraic surfaces using similar methods. This opens up avenues for studying fixed points of symplectomorphisms using mirror symmetry, which we will explore further.
### Statement of Results
If \(\Sigma_{g}\) is a smooth Riemann surface of genus \(g\geq 2\), it is known that a mirror can be described as a Landau-Ginzburg model \((X_{g},W_{g})\) where \(X_{g}\) is a \(3\)-dimensional algebraic variety and \(W_{g}:X_{g}\to\mathbb{C}\) is a holomorphic function whose critical locus is a trivalent configuration \(Z_{g}\) of \(\mathbb{P}^{1}\)s and \(\mathbb{A}^{1}\)s [11, 12]. Homological mirror symmetry then predicts that a Fukaya category of \(\Sigma_{g}\) is equivalent
to the matrix factorization category \(\mathrm{MF}(X_{g},W_{g})\). After passing to Hochschild cohomology, this should yield the closed-string mirror symmetry equivalence [10] with symplectic cohomlogy (with \(\mathbb{C}\) coefficients):
\[\mathrm{SH}^{k}(\Sigma_{g})\cong\bigoplus_{i\equiv k\bmod 2}\mathrm{HH}^{i}( \mathcal{F}(\Sigma_{g}))\cong\bigoplus_{i\equiv k\bmod 2}\mathrm{HH}^{i}( \mathrm{MF}(X_{g},W_{g}))\]
that induces an isomorphism of \(\mathbb{Z}/2\)-graded \(\mathbb{C}\)-algebras between \(\mathrm{SH}^{*}(\Sigma_{g})\) and \(\mathrm{HH}^{*}(\mathrm{MF}(X_{g},W_{g}))\).
One expects that a nodal degeneration of the curve \(\Sigma_{g}\), which we denote by \(\Sigma_{g}^{0}\), is mirror to removing one smooth point from the critical locus of \(W_{g}\); in the sense that we should have an equivalence of categories between \(\mathcal{F}(\Sigma_{g}^{0})\), the Fukaya category of the nodal curve \(\Sigma_{g}^{0}\) defined as in [11], with the matrix factorization category \(\mathrm{MF}(X_{g}^{0},W_{g}^{0})\), where the critical locus \(Z_{g}^{0}\) of \(W_{g}^{0}\) should be the complement of a single smooth point in the critical locus \(Z_{g}\) of \(W_{g}\). The \(A\)-model invariant of \(\Sigma_{g}^{0}\) we consider is:
Definition 1.1.: _The **symplectic cohomology** of the nodal curve \(\Sigma_{g}^{0}\) is given by the direct limit_
\[\mathrm{SH}^{*}(\Sigma_{g}^{0})=\varinjlim_{d}\mathrm{HF}^{*}(\Sigma_{g},\phi ^{d})\]
_where \(\mathrm{HF}^{*}(\Sigma_{g},\phi^{d})\) is the fixed point Floer cohomology (in the sense of [10]) of the composition of the Dehn twists around the vanishing cycles, and the direct limit is taken along multiplication by the Seidel class \(S\in\mathrm{HF}^{0}(\Sigma_{g},\phi)\) (defined in SS4 below)._
The ring structure on symplectic cohomology \(\mathrm{SH}^{*}(\Sigma_{g}^{0})\) is induced by the product structure
\[\mathrm{HF}^{*}(\Sigma_{g},\phi^{i})\otimes\mathrm{HF}^{*}(\Sigma_{g},\phi^{ j})\to\mathrm{HF}^{*}(\Sigma_{g},\phi^{i+j})\]
on fixed point Floer (co)homology as in [10].
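Spelled out, this direct limit is the colimit of the sequence

\[\mathrm{HF}^{*}(\Sigma_{g},\phi)\xrightarrow{\ \cdot S\ }\mathrm{HF}^{*}(\Sigma_{g},\phi^{2})\xrightarrow{\ \cdot S\ }\mathrm{HF}^{*}(\Sigma_{g},\phi^{3})\xrightarrow{\ \cdot S\ }\cdots,\]

where each connecting map is multiplication by the Seidel class \(S\in\mathrm{HF}^{0}(\Sigma_{g},\phi)\) under the product above.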
The justification for this definition will be given in SS4, where we prove Theorem 4.1, showing that our symplectic cohomology algebra does indeed compute the Hochschild cohomology of the Fukaya category of the nodal curve:
Theorem 1.: _Suppose \(M\) is a non-degenerate Liouville manifold in the sense of [1, Definition 1.1], then the twisted closed-open map \(\mathcal{CO}_{\phi}\) is an isomorphism and there is an equivalence of graded algebras:_
\[\mathrm{HH}^{*}(\mathcal{F}(M^{0}))\cong\varinjlim_{d}\mathrm{HF}^{*}(\phi^{ d})\]
Figure 1: The mirror \(Z_{2}\) of a nodal genus-2 curve \(\Sigma_{2}^{0}\).
_where the connecting maps in the direct limit are given by multiplication by the Seidel class \(S\) in \(\operatorname{HF}^{0}(\phi)\)._
_Remark 1_.: As the computations in Theorem 7.6 and 7.7 show, we can think of the symplectic cohomology \(\operatorname{SH}^{*}(\Sigma^{0}_{g})\) as the quantum deformation of the singular cohomology of \(\Sigma^{0}_{g}\) with the node removed. As this result falls out of a direct computation, it is not clear in what level of generality this result is expected to hold. Our definition of symplectic cohomology applies to singular hypersurfaces of any dimension, though it is not clear whether we can still view our definition of symplectic cohomology as a deformation of singular cohomology (of the complement of the singular locus). For this reason, we have described our \(A\)-model invariant as'symplectic cohomology' rather than the quantum cohomology of \(\Sigma^{0}_{g}\), even though the curve \(\Sigma_{g}\) may be closed.
We are certainly not claiming that this is the same as other notions of Gromov-Witten theory that may be defined for nodal curves; and this is certainly not the only way the symplectic cohomology of such a curve could be defined. However, attempting to compute the quantum cohomology of a nodal curve by naive counts of actual holomorphic spheres inside the nodal curve will only yield trivial results.
The invariants on the \(B\)-side that we shall consider are given by taking the cohomology of
Definition 1.2.: _The sheaf \(\widetilde{T}_{Z}\) of **balanced vector fields** on a trivalent configuration \(Z\) of \(\mathbb{A}^{1}s\) and \(\mathbb{P}^{1}s\) is the sheaf whose sections are vector fields on \(Z\) (vanishing at the nodes) whose rotation numbers around every node sum to zero._
The justification for this definition will be given in SS2 where we prove Theorem 2.2, showing that the cohomology of the sheaves of balanced vector fields computes the Hochschild cohomology of the matrix factorization category:
Theorem.: _Let \(Z^{0}_{g}\) be the critical locus of the LG model \((X^{0}_{g},W^{0}_{g})\) mirror to the nodal curve \(\Sigma^{0}_{g}\); then_
\[\operatorname{HH}^{\operatorname{even}}(\operatorname{MF}(X^{0}_ {g},W^{0}_{g})) \cong H^{0}(\mathcal{O}_{Z^{0}_{g}})\oplus H^{1}(\widetilde{T}_{Z^ {0}_{g}})\] \[\operatorname{HH}^{\operatorname{odd}}(\operatorname{MF}(X^{0}_ {g},W^{0}_{g})) \cong H^{1}(\mathcal{O}_{Z^{0}_{g}})\oplus H^{0}(\widetilde{T}_{Z^ {0}_{g}})\]
_as \(\mathbb{C}\)-algebras and modules respectively._
_Remark 2_.: One could say that \(\widetilde{T}_{Z^{0}_{g}}\) represents the vector fields corrected by the sheaf of vanishing cycles on \(\operatorname{crit}(W_{g})\) as in [1].
With these definitions our main theorem is:
Theorem 1.3.: _(**Closed-string mirror symmetry for nodal curves**) There is an equivalence of graded algebras_
\[\operatorname{SH}^{i}(\Sigma^{0}_{g})\cong\bigoplus_{j+k\equiv i\bmod 2}H^{j}(Z^{0}_{g},\wedge^{k}\widetilde{T}_{Z^{0}_{g}})\]
_Moreover, the same result holds if \(\Sigma^{0}_{g}\) has several nodes whose vanishing cycles are disjoint homologically linearly independent closed curves._
_Remark 3_ (Multiple Dehn twists on \(\Sigma_{g}\)).: Let \(\Sigma_{g}\) be the closed Riemann surface with genus \(g\geq 2\) (respectively \(\Sigma_{g,k}\) the \(k\)-th punctured Riemann surface). Whenever we talk about performing
multiple Dehn twists on \(\Sigma_{g}\) (resp. \(\Sigma_{g,k}\)) along different circles, we always assume the following. Let \(C_{1},\cdots,C_{\ell}\) denote the embedded closed curves along which we are performing the Dehn twists: we assume that they are disjoint and homologically linearly independent. Equivalently, this means that \(C_{1}\cup C_{2}\cup\cdots\cup C_{\ell}\) is non-separating.
The proof of this theorem involves directly computing both the symplectic cohomology via a direct limit of fixed point Floer cohomology groups (Theorem 7.6), and the cohomology of sheaves of balanced vector fields on the mirror (Theorem 2.2) to be isomorphic to the same graded algebra.
A very similar calculation can be performed for nodal curves with punctures. Let \(\Sigma_{g,k}\) denote \(\Sigma_{g}\) with finitely many punctures \(\{p_{1},p_{2},\cdots,p_{k}\}\) and disjoint homologically linearly independent vanishing cycles \(C_{1},\cdots,C_{\ell}\), and let \(\Sigma^{0}_{g,k}\) denote the corresponding punctured nodal curve. The punctured curve \(\Sigma^{0}_{g,k}\) is then mirror to a Landau-Ginzburg model \((X^{\prime}_{g},W^{\prime}_{g})\) whose critical locus \(Z^{\prime}_{g}\) is a degeneration of \(Z^{0}_{g}\) that has one additional trivalent node for each puncture (see Figure 2). Then we have
**Theorem 1.4**: _(**Closed string mirror symmetry for punctured nodal curves**) There is an isomorphism of \(\mathbb{Z}/2\)-graded algebras:_
\[\mathrm{SH}^{i}(\Sigma^{0}_{g,k})\cong\bigoplus_{j+\ell\equiv i\bmod 2}H^{j}(Z^{\prime}_{g},\wedge^{\ell}\widetilde{T}_{Z^{\prime}_{g}})\]
The reader may find the following table helpful for keeping track of notation (under the assumption that \(\Sigma_{g,k}\) has disjoint homologically linearly independent vanishing cycles).
| **\(A\)-side** | **\(B\)-side** | **critical locus** |
| --- | --- | --- |
| smooth compact curve \(\Sigma_{g}\) | LG model \((X_{g},W_{g})\) | \(Z_{g}\) trivalent configuration of \((3g-3)\)-many \(\mathbb{P}^{1}\)s |
| Dehn twist \(\phi\) on \(\Sigma_{g}\) along \(C\) | - | line bundle \(\mathcal{L}\) on \(Z_{g}\), degree-1 on one \(\mathbb{P}^{1}\) |
| \(\ell\)-nodal compact curve \(\Sigma^{0}_{g}\) | LG model \((X^{0}_{g},W^{0}_{g})\) | \(\ell\)-punctured trivalent configuration \(Z^{0}_{g}\) (Fig. 1) |
| nodal \(k\)-punctured curve \(\Sigma^{0}_{g,k}\) | LG model \((X^{\prime}_{g},W^{\prime}_{g})\) | punctured curve \(Z^{\prime}_{g}\) with \(k\)-many \(\mathbb{A}^{1}\)s (Fig. 2) |
Figure 2: The mirror \(Z^{\prime}_{2}\) of a punctured nodal genus-2 curve \(\Sigma^{0}_{2,1}\).
_Remark 4_.: All of our results above apply only in the case \(g\geq 2\). In the case where \(g=1\), Theorem 1.3 above follows more or less directly from Theorem 4.1 combined with Theorem 2 from [10].
### Homogeneous Coordinate Rings
If \(\phi\) is the monodromy around a large complex structure limit point, one expects that it corresponds under an appropriate homological mirror symmetry equivalence to tensoring by an ample line bundle \(\mathcal{L}\) on the mirror [10]. In our case, a Dehn twist on \(\Sigma_{g}\) is mirror to tensoring by a degree-1 line bundle \(\mathcal{L}\) on one \(\mathbb{P}^{1}\)-component of \(Z_{g}=\operatorname{crit}(W_{g})\). This motivates the following theorem:
**Theorem 1.5**.: _(**Mirror symmetry for homogeneous coordinate rings**) There is an equivalence_
\[\bigoplus_{d=0}^{\infty}\operatorname{HF}^{k}(\Sigma_{g},\phi^{d})\cong\bigoplus_{d=0}^{\infty}\bigoplus_{i+j\equiv k\text{ mod }2}H^{i}(\wedge^{j}T_{Z_{g}}\otimes\mathcal{L}^{\otimes d})\]
_of graded algebras (for \(k=0\)) and graded modules (when \(k=1\)) where the grading is given by the order \(d\). Similarly, by starting with \(\phi^{2}\),_
\[\bigoplus_{d=0}^{\infty}\operatorname{HF}^{k}(\Sigma_{g},\phi^{2d})\cong\bigoplus_{d=0}^{\infty}\bigoplus_{i+j\equiv k\text{ mod }2}H^{i}(\wedge^{j}T_{Z_{g}}\otimes\mathcal{L}^{\otimes 2d})\]
_as graded algebras/modules._
There are analogous results also for \(\phi^{k}\) for \(k\geq 3\). The rings that arise in this case are also intrinsically interesting, as they give rise to different embeddings of the nodal elliptic curve into weighted projective spaces.
In §2 we justify these \(B\)-side invariants as the result of an expected twisted HKR theorem for matrix factorization categories. We then calculate these homogeneous coordinate rings on the \(B\)-side; in §5 and §7.4 we carry out the calculation on the \(A\)-side and verify that we obtain the same algebra.
### Total Spaces of Line Bundles
There is an alternative way to formulate these results: let \(f:E\to\mathbb{C}^{*}\) be the \(\phi\)-twisted \(\Sigma_{g,k}\)-bundle over \(\mathbb{C}^{*}\). When \(\Sigma_{g,k}\) is an open surface (i.e. \(k\geq 1\)), we can make \(E\) into a Liouville domain and define \(\operatorname{SH}^{*}(E,f)\), the symplectic cohomology (in the sense of [11]) of the Liouville sector obtained from \((E,f)\) by placing a stop at \(f^{-1}(-\infty)\). If one assumes a twisted version of the Künneth theorem of [11] to hold, we have
\[\operatorname{SH}^{0}(E,f)\cong\bigoplus_{d=0}^{\infty}\operatorname{HF}^{0}( \Sigma_{g,k},\phi^{d})\]
Therefore, from Theorem 1.5 we may deduce:
**Theorem 1.6**.: _If a twisted Künneth theorem for symplectic cohomology is assumed to hold, then there is an equivalence of algebras_
\[\operatorname{SH}^{0}(E,f)\cong H^{0}(L,\mathcal{O}_{L})\]
_where \(L\) is the total space of the line bundle \(\mathcal{L}\) over \(Z_{g}\)._
This we may regard also as a form of mirror symmetry. If \(\Sigma_{g,k}\) can be realized as a very affine hypersurface \(H\subseteq(\mathbb{C}^{*})^{2}\) and \(\Sigma_{g,k}^{0}\) is its large complex structure limit induced by a choice of subdivision of the Newton polytope (as in [1]), then it is proved in [11, Proposition 3.2.2] that the twisted \(H\)-bundle \(E\) is homologically mirror to the total space of the canonical line bundle \(K_{X}\). The proof of Theorem 1.6 is analogous to the proof given there, assuming Theorem 1.5.
## Notation and Conventions
The _Fukaya category_\(\mathcal{F}(M)\) refers to the split-closure of the \(A_{\infty}\)-category of \(\mathbb{Z}/2\)-graded twisted complexes over the Fukaya category, (partially) wrapped if appropriate (as in [11]). Grading and sign conventions are as in [12]. The _matrix factorization category_ MF is the \(\mathbb{Z}/2\)-graded dg-derived category of coherent matrix factorizations. We write \(\hom\) for morphism complexes in a \(\operatorname{dg}/A_{\infty}\)-category, and \(\operatorname{Hom}\) for their cohomology. We write \(\mathcal{C}-\hom\) for the category of \(\mathcal{C}-\mathcal{C}\) bimodules, and \(\mathcal{Y}^{\ell},\mathcal{Y}^{r}\) for left and right Yoneda modules respectively. All coefficient rings are \(\mathbb{C}\) unless otherwise stated; all algebras are \(\mathbb{Z}/2\)-graded.
## Acknowledgements
We would like to thank Kai Xu for asking us questions about fixed point Floer homology that led to many fruitful ideas; we would also like to thank Shaoyun Bai for telling us about his forthcoming work with Paul Seidel [BS]. MJ would also like to thank Sheel Ganatra and Xujia Chen for helpful conversations, as well as his advisor Denis Auroux for his invaluable support and guidance. MJ was partially supported by the Rutherford Foundation of the Royal Society of New Zealand, NSF grants DMS-1937869 and DMS-2202984, and by Simons Foundation grant #385573.
## 2 \(B\)-Model Calculation
The critical locus \(\operatorname{crit}(W_{g})\) is a trivalent configuration of \(\mathbb{P}^{1}\)s, with \((2g-2)\) nodes, call them \(p_{i}\), and \((3g-3)\) irreducible \(\mathbb{P}^{1}\)-components. When \(\Sigma^{0}_{g}\) has a single node, the mirror \((X^{0}_{g},W^{0}_{g})\) to \(\Sigma^{0}_{g}\) has critical locus \(Z^{0}_{g}=\operatorname{crit}(W^{0}_{g})\) which is the critical locus of \(W_{g}\) punctured at a single point: say this puncture occurs on component \(P_{0,1}\) (containing \(p_{0},p_{1}\)) without loss of generality.
### Hochschild-Kostant-Rosenberg Theorem and Balanced Vector Fields
In this subsection, we justify our use of balanced vector fields as our \(B\)-model invariants with the following theorem:
Theorem 2.1.: _Let \(Z^{0}_{g}\) be the critical locus of the LG model \((X^{0}_{g},W^{0}_{g})\) mirror to the nodal curve \(\Sigma^{0}_{g}\); then_
\[\operatorname{HH}^{\operatorname{even}}(\operatorname{MF}(X^{0}_ {g},W^{0}_{g})) \cong H^{0}(\mathcal{O}_{Z^{0}_{g}})\oplus H^{1}(\widetilde{T}_{Z^ {0}_{g}})\] \[\operatorname{HH}^{\operatorname{odd}}(\operatorname{MF}(X^{0}_ {g},W^{0}_{g})) \cong H^{1}(\mathcal{O}_{Z^{0}_{g}})\oplus H^{0}(\widetilde{T}_{Z^ {0}_{g}})\]
_as \(\mathbb{C}\)-algebras and as modules respectively._
Proof.: By the Hochschild-Kostant-Rosenberg theorem for global matrix factorization categories [13, Theorem 3.1], we can compute Hochschild cohomology of the matrix factorization category \(\operatorname{MF}(X_{g},W_{g})\) as the hypercohomology of the complex \((\bigwedge^{*}T_{X_{g}},\iota_{\mathrm{d}W_{g}})\). Near each node of the critical locus, the LG model takes the form \((\mathbb{C}^{3},xyz)\) [11]. A simple calculation shows that the cohomology of the complex \((\bigwedge^{*}T_{\mathbb{C}^{3}},\iota_{\mathrm{d}(xyz)})\) is given by:
\[H^{i}\left(\wedge^{*}T_{\mathbb{C}^{3}},\iota_{\mathrm{d}(xyz)}\right)\cong \left\{\begin{array}{ll}\mathbb{C}[x,y,z]/(xy,yz,xz)&\text{for}&i=0\\ \mathbb{C}[x,y,z]/(xy,yz,xz)\langle x\partial_{x}-y\partial_{y},y\partial_{y} -z\partial_{z}\rangle&\text{for}&i=1\\ 0&\text{for}&i\geq 2\end{array}\right.\]
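Concretely, \(\mathrm{d}(xyz)=yz\,\mathrm{d}x+zx\,\mathrm{d}y+xy\,\mathrm{d}z\), so the contraction differential acts on the generating vector fields by
\[\iota_{\mathrm{d}(xyz)}\partial_{x}=yz,\qquad\iota_{\mathrm{d}(xyz)}\partial_{y}=zx,\qquad\iota_{\mathrm{d}(xyz)}\partial_{z}=xy,\]
which gives the quotient \(\mathbb{C}[x,y,z]/(xy,yz,xz)\) in degree zero, while for example \(\iota_{\mathrm{d}(xyz)}(x\partial_{x}-y\partial_{y})=xyz-xyz=0\), so the vector fields listed in degree one are indeed cocycles.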
Thus the local regular functions are unchanged, while the vector fields are required to be 'balanced' in the sense that the rotation numbers around the nodes sum to zero. Therefore the hypercohomology of this sheaf \((\bigwedge^{*}T_{X_{g}},\iota_{\mathrm{d}W})\) is the same as the cohomology of the sheaf of balanced vector fields \(\widetilde{T}_{Z_{g}}\) on \(Z_{g}\). An entirely analogous argument applies when \(Z_{g}^{0}\) has punctures or additional \(\mathbb{A}^{1}\)-components. \(\blacksquare\)
### Sheaf Cohomology of Balanced Vector Fields
In this section we shall calculate the algebraic structure of the \(B\)-model invariants directly:
Theorem 2.2.: _Let \(Z_{g}^{0}\) be the critical locus of the LG model \((X_{g}^{0},W_{g}^{0})\) mirror to the nodal curve \(\Sigma_{g}^{0}\): then_
\[H^{0}(\mathcal{O}_{Z_{g}^{0}})\oplus H^{1}(\widetilde{T}_{Z_{g}^ {0}})\cong A\] \[H^{1}(\mathcal{O}_{Z_{g}^{0}})\oplus H^{0}(\widetilde{T}_{Z_{g}^ {0}})\cong A\oplus\mathbb{C}^{2g-2}\]
_as \(\mathbb{C}\)-algebras and as \(A\)-modules respectively, where \(A\) is the \(\mathbb{C}\)-algebra \(A=\mathbb{C}[Y,Z]/(YZ=Y^{3}+Z^{2})\)._
Proof.: To compute this sheaf cohomology, we can take a Zariski open cover of \(Z^{0}_{g}\) by the open sets \(U_{p_{i}}\), where \(U_{p_{i}}\) is the complement of all components of \(Z^{0}_{g}\) not adjacent to the node \(p_{i}\); for \(i\neq 0,1\) this is the affine scheme \(\{xy=yz=xz=0\}\subseteq\mathbb{C}^{3}\). Except for \(i,j=0,1\), every two-fold intersection \(U_{p_{i}}\cap U_{p_{j}}\) is a disjoint union of three \(\mathbb{C}^{*}\)s. For \(i,j=0,1\) we have
\[\mathcal{O}_{Z_{g}}(U_{p_{0}})\cong\mathbb{C}[x,y,z,(1+x)^{-1}]/( xy,yz,xz)\] \[\mathcal{O}_{Z_{g}}(U_{p_{1}})\cong\mathbb{C}[x^{\prime},y^{ \prime},z^{\prime},(1+x^{\prime})^{-1}]/(x^{\prime}y^{\prime},y^{\prime}z^{ \prime},x^{\prime}z^{\prime})\] \[\mathcal{O}_{Z_{g}}(U_{p_{0}}\cap U_{p_{1}})\cong\mathbb{C}[x^{ \pm},(1+x)^{-1}]\oplus\mathbb{C}[y^{\pm}]\oplus\mathbb{C}[z^{\pm}]\]
and the restriction map takes \(x,y,z\mapsto x,y,z\) and \(x^{\prime},y^{\prime},z^{\prime}\mapsto 1/x,1/y,1/z\) respectively. Here we identify \(P_{0,1}\cong\mathbb{P}^{1}\setminus\{-1\}\) with local coordinates \(x\) near \(p_{0}=0\) and \(x^{\prime}=1/x\) near \(p_{1}=\infty\). All three-fold intersections \(U_{i}\cap U_{j}\cap U_{k}\) are empty for \(i<j<k\) so there are no \(H^{2}\) contributions.
Regardless of the genus, \(H^{0}(\mathcal{O}_{Z_{g}^{0}})\) is always equivalent to the algebra \(A=\mathbb{C}[Y,Z]/(YZ-Z^{2}-Y^{3})\) of functions on a nodal affine cubic curve. This is simply because regular functions on \(Z_{g}^{0}\) must be constant over the compact \(\mathbb{P}^{1}\) components, so the functions on the punctured component \(P_{0,1}\) must hence take on the same value at \(0\) and \(\infty\). This is therefore equivalent to the algebra of regular functions on a once-punctured \(\mathbb{P}^{1}\) with two points identified, which is an affine nodal elliptic curve. Since the once-punctured nodal elliptic curve has no moduli, this ring of functions is isomorphic
to \(A\). Explicitly, regular functions on \(Z^{0}_{g}\) correspond to rational functions \(f\in\mathbb{C}[x,(1+x)^{-1}]\) with \(f(0)=f(\infty)\). These are generated as an algebra by the functions
\[1 =\frac{1+x}{1+x},\] \[Y =\frac{x}{(1+x)^{2}},\] \[Z =\frac{x^{2}}{(1+x)^{3}}\]
which indeed satisfy the relation \(YZ=Y^{3}+Z^{2}\).
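Indeed,
\[YZ=\frac{x}{(1+x)^{2}}\cdot\frac{x^{2}}{(1+x)^{3}}=\frac{x^{3}}{(1+x)^{5}},\qquad Y^{3}+Z^{2}=\frac{x^{3}+x^{4}}{(1+x)^{6}}=\frac{x^{3}}{(1+x)^{5}}.\]
Note also that the affine curve \(YZ=Y^{3}+Z^{2}\) is singular only at the origin, where the lowest-order term \(YZ\) has two distinct linear factors, so the singular point is an ordinary node.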
To calculate the cohomology of \(\mathcal{O}_{Z^{0}_{g}}\) we use the Cech complex:
\[0\to\mathcal{O}_{Z^{0}_{g}}(Z^{0}_{g})\to\bigoplus_{i}\mathcal{O}_{Z^{0}_{g}}( U_{p_{i}})\stackrel{{\mathrm{d}}}{{\longrightarrow}}\bigoplus_{i<j} \mathcal{O}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}})\to 0\]
The map \(\mathrm{d}\) will be surjective onto Laurent polynomials with no constant terms, but \(\mathrm{d}\) takes a constant \(c\in\mathcal{O}_{Z^{0}_{g}}(U_{p_{i}})\) to \((c,c,c)\in\bigoplus_{j}\mathcal{O}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}})\), where \(p_{j}\) ranges over the three vertices adjacent to \(p_{i}\). Over all \(i\), this means the image of \(\mathrm{d}\) inside the \((3g-3)\)-dimensional space of constant functions in \(\bigoplus_{i<j}\mathcal{O}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}})\) is the span of \((2g-2)\) vectors, of which only \((2g-3)\) are linearly independent. The only other constant functions in the image are those in the image of the restriction map \(\mathcal{O}_{Z^{0}_{g}}(U_{p_{0}})\oplus\mathcal{O}_{Z^{0}_{g}}(U_{p_{1}})\to \mathcal{O}_{Z^{0}_{g}}(U_{p_{0}}\cap U_{p_{1}})\), which takes
\[\left(\frac{1}{1+x},\frac{1}{1+x^{\prime}}\right)\mapsto\frac{1}{1+x}+\frac{x} {1+x}=1\]
so that \((1,0,0)\in\bigoplus_{j}\mathcal{O}_{Z^{0}_{g}}(U_{p_{0}}\cap U_{p_{j}})\) is also in the image of \(\mathrm{d}\). Therefore this gives us a cokernel of dimension \((3g-3)-(2g-3)-1=g-1\) and so this Cech cohomology calculation tells us that \(H^{1}(\mathcal{O}_{Z^{0}_{g}})\cong\mathbb{C}^{g-1}\), with all classes represented by constant functions.
The global sections of the sheaf \(\widetilde{T}_{Z^{0}_{g}}\) of balanced vector fields are given by \(A\langle x\partial_{x}\rangle\oplus\mathbb{C}^{g-1}\) (as an \(A\)-module), consisting of vector fields that satisfy the balancing condition at every vertex. On each compact \(\mathbb{P}^{1}\) component, every vector field is a constant multiple of \(x\partial_{x}\); around every vertex, these multiples must sum to zero. On the punctured component \(P_{0,1}\), vector fields are of the form \(f(x)x\partial_{x}\) for \(f\in\mathcal{O}(P_{0,1})\). The balancing conditions at every node force the rotation numbers of the vector field at \(0\) and \(\infty\) of \(P_{0,1}\) to agree, so that \(f(0)=f(\infty)\). Again, this is the algebra \(A\) of functions on a nodal affine cubic curve, and so balanced vector fields are of the form \(f(x)x\partial_{x}\) for \(f\in A\) over \(P_{0,1}\). The \(2g-2\) balancing conditions, along with the \(3g-3\) components, mean that there are \(g-1\) constant rotations that remain to be specified: \(A\) acts on this summand by evaluation at \(0\).
To calculate the cohomology of \(\widetilde{T}_{Z^{0}_{g}}\), we consider the Cech complex
\[0\to\widetilde{T}_{Z^{0}_{g}}(Z^{0}_{g})\to\bigoplus_{i}\widetilde{T}_{Z^{0}_{ g}}(U_{p_{i}})\stackrel{{\mathrm{d}}}{{\longrightarrow}}\bigoplus_{i<j} \widetilde{T}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}})\to 0\]
Again, for \(i\neq 0,1\), the intersections \(U_{p_{i}}\cap U_{p_{j}}\) are a disjoint union of three \(\mathbb{C}^{*}\)s and so
\[\widetilde{T}_{Z^{0}_{g}}(U_{p_{i}}) \cong\mathbb{C}[x,y,z]/(xy,yz,xz)\langle x\partial_{x}-y\partial_{ y},y\partial_{y}-z\partial_{z}\rangle\] \[\bigoplus_{j}\widetilde{T}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}}) \cong\mathbb{C}[x^{\pm}]\langle\partial_{x}\rangle\oplus\mathbb{C}[ y^{\pm}]\langle\partial_{y}\rangle\oplus\mathbb{C}[z^{\pm}]\langle \partial_{z}\rangle\]
Because of the balancing condition, the image of the restriction maps \(\widetilde{T}_{Z^{0}_{g}}(U_{p_{i}})\to\bigoplus_{j}\widetilde{T}_{Z^{0}_{g}}(U_{p_{i}}\cap U_{p_{j}})\) is the subspace of vector fields of the form \(f(x)\,x\partial_{x}+g(y)\,y\partial_{y}+h(z)\,z\partial_{z}\) with \(f,g,h\) polynomial and \(f(0)+g(0)+h(0)=0\) (for instance, the generator \(x\partial_{x}-y\partial_{y}\) corresponds to \((f,g,h)=(1,-1,0)\)).
However, for the punctured component \(P_{0,1}\) the restriction map \(\widetilde{T}_{Z^{0}_{g}}(U_{p_{0}})\oplus\widetilde{T}_{Z^{0}_{g}}(U_{p_{1}} )\to\widetilde{T}_{Z^{0}_{g}}(U_{p_{0}}\cap U_{p_{1}})\) takes
\[\left(-\frac{(x^{\prime})^{2}\partial_{x^{\prime}}}{1+x^{\prime}},\frac{x^{2} \partial_{x}}{1+x}\right)\mapsto\frac{x\partial_{x}}{1+x}+\frac{x^{2} \partial_{x}}{1+x}=x\partial_{x}\]
where we use the same coordinates on \(P_{0,1}\) as above. Here, since
\[\frac{x^{2}\partial_{x}}{1+x}=\frac{x}{1+x}(x\partial_{x}-y\partial_{y})\]
both are balanced vector fields themselves (i.e. sections of \(\widetilde{T}_{Z^{0}_{g}}(U_{p_{i}})\) for \(i=0,1\)) and so the unbalanced vector field \(x\partial_{x}\) is in the image of \(\mathrm{d}\). Since all balanced vector fields are also in the image of \(\mathrm{d}\), this means that \(\mathrm{d}\) is surjective and therefore the Cech cohomology calculation yields \(H^{1}(\widetilde{T}_{Z^{0}_{g}})=0\). \(\blacksquare\)
In the case of Dehn twists around \(k\) disjoint homologically linearly independent closed curves on \(\Sigma_{g}\) (see Remark 3), the mirror will be \((X^{\prime}_{g},W^{\prime}_{g})\), where the critical locus \(\mathrm{crit}(W^{\prime}_{g})=Z^{\prime}_{g}\) has \(k\) punctures on different components. Then a straightforward generalization of the above calculation implies:
Theorem 2.3.: _Let \(Z^{\prime}_{g}\) be the critical locus of the LG model \((X^{\prime}_{g},W^{\prime}_{g})\) mirror to the \(k\)-nodal curve \(\Sigma^{0}_{g}\); then_
\[\mathrm{HH}^{\mathrm{even}}(\mathrm{MF}(X^{\prime}_{g},W^{\prime}_{g})) \cong H^{0}(\mathcal{O}_{Z^{\prime}_{g}})\oplus H^{1}(\widetilde{T}_{Z^{\prime}_{g}})\cong A\times_{\mathbb{C}}A\times_{\mathbb{C}}\cdots\times_{\mathbb{C}}A\]
_as \(\mathbb{C}\)-algebras and as \(A\)-modules respectively, where \(A\) is the \(\mathbb{C}\)-algebra \(A=\mathbb{C}[Y,Z]/(YZ=Y^{3}+Z^{2})\) and \(\times_{\mathbb{C}}\) denotes the fiber product of rings over their common map \(A\to\mathbb{C}\) (iterated \(k\) times in each line)._
### Mirrors to Open Surfaces
Now consider \(Z^{\prime}_{g}\), a trivalent configuration of one \(\mathbb{A}^{1}\) component and \((3g+4)\)-many \(\mathbb{P}^{1}\) components (one of them punctured), as in Figure 2. Again denote the punctured component \(P_{0,1}\) and we will denote the \(\mathbb{A}^{1}\) component by \(P_{\infty}\), with local coordinate \(t=0\) corresponding to the nodal point. We explain how to modify the above calculations for this case.
Theorem 2.4.: _Let \(Z^{\prime}_{g}\) be the critical locus of the LG model \((X^{\prime}_{g},W^{\prime}_{g})\) mirror to the open nodal curve \(\Sigma^{0}_{g,1}\); then_
\[H^{0}(\mathcal{O}_{Z^{\prime}_{g}})\oplus H^{1}(\widetilde{T}_{Z ^{\prime}_{g}}) \cong A\times_{\mathbb{C}}\mathbb{C}[T]\] \[H^{0}(\widetilde{T}_{Z^{\prime}_{g}})\oplus H^{1}(\mathcal{O}_{ Z^{\prime}_{g}}) \cong (\mathbb{C}[W]\times_{\mathbb{C}}\mathbb{C}[T])\oplus\mathbb{C}^{2g-2}\]
_as \(\mathbb{C}\)-algebras and as \(A\times_{\mathbb{C}}\mathbb{C}[T]\)-modules respectively, where \(A\) denotes the \(\mathbb{C}\)-algebra \(A=\mathbb{C}[Y,Z]/(YZ-Z^{2}-Y^{3})\). The fiber product in the first is taken along the evaluation at zero maps; while in the
second it is taken along the evaluation maps \(F(W)\mapsto F(0)-F(1)\) and \(g(T)\mapsto g(0)\). The algebra \(A\) acts on \(\mathbb{C}[W]\) by \(Y\mapsto W-W^{2},Z\mapsto W^{2}-W^{3}\)._
Proof.: In the case of \(H^{0}(\mathcal{O}_{Z_{g}^{\prime}})\), the global functions on \(Z_{g}^{\prime}\) simply consist of a pair \((f,g)\) where \(f\in A\) is a global function on \(P_{0,1}\) (with \(f(0)=f(\infty)\)) and \(g\in\mathbb{C}[t]\) is a function on \(\mathbb{A}^{1}\) such that \(g(0)=f(0)\). This means exactly that \(H^{0}(\mathcal{O}_{Z_{g}^{\prime}})\) is the fiber product \(A\times_{\mathbb{C}}\mathbb{C}[T]\) of algebras over their evaluation at zero maps to \(\mathbb{C}\). For the same reasons as in the proof of Theorem 2.2, \(H^{1}(\widetilde{T}_{Z_{g}^{\prime}})=0\).
To calculate the cohomology group \(H^{1}(\mathcal{O}_{Z_{g}^{\prime}})\) we may use the same Cech complex as above. Now, the intersections \(U_{i}\cap U_{j}\) will consist of \((3g-3)\) copies of \(\mathbb{C}^{*}\)s and one pair of pants. Again, d will be surjective onto Laurent polynomials with no constant terms, and there are \(2g-2+1\) constraints on the possible constant terms (corresponding to the number of nodes), minus \(1\) for the additional way of creating constant terms on the component \(P_{0,1}\) as described in the proof of Theorem 2.2. Therefore, the dimension of the cokernel of d is \((3g-3)-(2g-2+1-1)=g-1\), and each of these classes can be represented by constant functions. Hence \(H^{1}(\mathcal{O}_{Z_{g}^{\prime}})\cong\mathbb{C}^{g-1}\) and \(H^{0}(\mathcal{O}_{Z_{g}^{\prime}})\) acts by constants.
Finally, we calculate \(H^{0}(\widetilde{T}_{Z_{g}^{\prime}})\), the global balanced vector fields. If we write \(f(x)x\partial_{x}\) for a section of \(\widetilde{T}_{Z_{g}^{\prime}}\) over \(P_{0,1}\) (with \(f\in\mathbb{C}[x,(1+x)^{-1}]\) and \(f(\infty)<\infty\)), and \(g(t)t\partial_{t}\) for a section over \(P_{\infty}\) (with \(g\in\mathbb{C}[t]\)), then because of the non-compact \(\mathbb{A}^{1}\) component \(P_{\infty}\), the balancing conditions imply that \(f(0)-f(\infty)=g(0)\). A basis for \(H^{0}(\widetilde{T}_{Z_{g}^{\prime}})\) consists of two kinds of vector fields:
1. Those given by non-zero pairs \((f(x)x\partial_{x},g(t)t\partial_{t})\) with \(f(0)-f(\infty)=g(0)\) (extended appropriately to the compact components);
2. Those \(g-1\) linearly independent vector fields given by constant rotations of the compact components satisfying the balancing conditions, vanishing on \(P_{0,1}\) and \(P_{\infty}\).
The algebra \(H^{0}(\mathcal{O}_{Z_{g}^{\prime}})\) acts by evaluation at zero on vector fields of type (2): this gives a submodule isomorphic to \(\mathbb{C}^{g-1}\). Vector fields of type (1) correspond to pairs
\[\left(F\left(\frac{x}{1+x}\right)x\partial_{x},g(t)t\partial_{t}\right)\]
where \(F\in\mathbb{C}[W]\) is a polynomial in one variable \(W\) and \(g\in\mathbb{C}[T]\) is a polynomial in \(T\), satisfying a balancing condition: since the difference in rotation numbers \(f(0)-f(\infty)\) is given by \(F(0)-F(1)\), the balancing condition becomes \(F(0)-F(1)=g(0)\). In this notation, the generator \(Y\) of \(A\) acts by \(W-W^{2}\) and the generator \(Z\) by \(W^{2}-W^{3}\), while \(T\) simply acts by multiplication by \(T\). This gives a complete description of the \(H^{0}(\mathcal{O}_{Z_{g}^{\prime}})\)-module structure on the fiber product \(\mathbb{C}[W]\times_{\mathbb{C}}\mathbb{C}[T]\) of \(\mathbb{C}[W]\) and \(\mathbb{C}[T]\) over the evaluation maps \(F(W)\mapsto F(0)-F(1)\) and \(g(T)\mapsto g(0)\).
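One can check directly that the action described in the statement of Theorem 2.4 is consistent with these coordinates: substituting \(W=\frac{x}{1+x}\) gives \(W-W^{2}=\frac{x}{(1+x)^{2}}\) and \(W^{2}-W^{3}=\frac{x^{2}}{(1+x)^{3}}\), which are exactly the generators \(Y,Z\) of \(A\) from the proof of Theorem 2.2, and the defining relation of \(A\) is preserved:
\[(W-W^{2})(W^{2}-W^{3})=W^{3}(1-W)^{2}=(W-W^{2})^{3}+(W^{2}-W^{3})^{2}.\]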
When \(\Sigma_{g,k}\) has disjoint homologically linearly independent vanishing cycles and \(k\geq 1\), the mirror \(Z_{g}^{\prime}\) to the nodal curve \(\Sigma_{g,k}^{0}\) with more than one puncture will be another trivalent configuration of \(k\)-many \(\mathbb{A}^{1}\) components and \((3g+3+k)\)-many \(\mathbb{P}^{1}\) components (one of them punctured). We have a straightforward generalization to this case:
Theorem 2.5.: _Let \(Z^{\prime}_{g}\) be the critical locus of the LG model \((X^{\prime}_{g},W^{\prime}_{g})\) mirror to the open nodal curve \(\Sigma^{0}_{g,k}\); then_
\[H^{0}(\mathcal{O}_{Z^{\prime}_{g}})\oplus H^{1}(\widetilde{T}_{Z^{\prime}_{g}}) \cong A\times_{\mathbb{C}}\mathbb{C}[T_{1}]\times_{\mathbb{C}}\mathbb{C}[T_{2 }]\times_{\mathbb{C}}\cdots\times_{\mathbb{C}}\mathbb{C}[T_{k}]\]
_as \(\mathbb{C}\)-algebras, where the fiber product is taken over the evaluation maps at zero; and_
\[H^{0}(\widetilde{T}_{Z^{\prime}_{g}})\oplus H^{1}(\mathcal{O}_{Z^{\prime}_{g}})\cong\ (\mathbb{C}[W]\times_{\mathbb{C}}(\mathbb{C}[T_{1}]\times\mathbb{C}[T_{2}]\times\cdots\times\mathbb{C}[T_{k}]))\oplus\mathbb{C}^{2g-2}\]
_as \(A\times_{\mathbb{C}}\mathbb{C}[T_{1}]\times_{\mathbb{C}}\mathbb{C}[T_{2}]\times _{\mathbb{C}}\cdots\times_{\mathbb{C}}\mathbb{C}[T_{k}]\)-modules, where the fiber product is taken over the evaluation maps \(F(0)-F(1)=g_{1}(0)+\cdots+g_{k}(0)\), and \(A\) acts on \(\mathbb{C}[W]\) as above and on \(\mathbb{C}^{2g-2}\) by constants._
Proof.: Observe that in the above, we will introduce one additional variable \(t_{i}\) for each \(\mathbb{A}^{1}\) component, and regular functions on \(Z^{\prime}_{g}\) will consist of tuples of polynomials \((f,g_{1},\ldots,g_{k})\) with \(f\in\mathcal{O}_{P_{0,1}}\) and \(g_{i}\in\mathbb{C}[t_{i}]\) satisfying \(f(0)=f(\infty)=g_{1}(0)=\cdots=g_{k}(0)\). For the balanced vector fields, the difference between the rotation numbers of \(f(x)x\partial_{x}\) at \(0\) and \(\infty\) will again be given by the total rotation number around infinity of the vector fields \(g_{1}(t_{1})t_{1}\partial_{t_{1}},\ldots,g_{k}(t_{k})t_{k}\partial_{t_{k}}\) on the \(\mathbb{A}^{1}\)-components, equal to \(g_{1}(0)+\cdots+g_{k}(0)\).
### Homogeneous Coordinate Rings
In this section, we will describe the \(B\)-side calculations used to prove Theorem 1.5, after first justifying why this is the appropriate quantity to calculate.
One expects a version of the Hochschild-Kostant-Rosenberg theorem with coefficients (see [10, Theorem 3.11] for the affine case) to apply for matrix factorization categories, so that the Hochschild cohomology with coefficients in a power of a line bundle \(\mathcal{L}\) on \(Z_{g}\) (extended to \(X_{g}\)) is isomorphic to
\[\mathrm{HH}^{k}(\mathrm{MF}(X_{g},W_{g}),\mathcal{L}^{\otimes d})\cong\bigoplus_{i+j\equiv k\ \mathrm{mod}\ 2}\mathbb{H}^{i}(\wedge^{j}T_{Z_{g}}\otimes\mathcal{L}^{\otimes d})\]
the hypercohomology of the sheaf of (balanced) polyvector fields on the critical locus \(Z_{g}\) with coefficients in a power of \(\mathcal{L}\).
The right hand side of the above can be calculated explicitly:
Theorem 2.6.: _If \(\mathcal{L}\) is a line bundle over \(Z_{g}\) that has degree \(1\) over one single \(\mathbb{P}^{1}\) component and is trivial over all the others, then there is an equivalence of algebras:_
\[\bigoplus_{k=0}^{\infty}H^{0}(\mathcal{L}^{\otimes k})\oplus H^{1}(\widetilde {T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})\cong R\oplus\mathbb{C}\]
_where \(R\) is the graded \(\mathbb{C}\)-algebra \(\mathbb{C}[X,Y,Z]/(XYZ=Y^{3}+Z^{2})\) with \(|X|=1,|Y|=2,|Z|=3\) and the \(\mathbb{C}\) summand is a square-zero extension by the module \(R/(X,Y,Z)\). Moreover, there is an equivalence of \(R\)-modules,_
\[\bigoplus_{k=0}^{\infty}H^{1}(\mathcal{L}^{\otimes k})\oplus H^{0}(\widetilde {T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})\cong R\oplus(\mathbb{C}[X])^{2g-2 }\oplus\mathbb{C}\]
_where \(\mathbb{C}\) is the \(R\)-module \(R/(X,Y,Z)\)._
Proof.: The line bundle \(\mathcal{L}\) is trivial over all but one component of \(Z_{g}\), where it has degree \(1\). Let \(P_{0,1}\) denote this distinguished \(\mathbb{P}^{1}\)-component of \(Z_{g}\) over which \(\mathcal{L}\) is non-trivial, so that \(\mathcal{L}^{\otimes k}|_{P_{0,1}}\cong\mathcal{O}_{\mathbb{P}^{1}}(k)\). Written in local coordinates, sections of \(\mathcal{L}^{\otimes k}\) over \(P_{0,1}\) correspond to pairs \((f,f^{\prime})\), where \(f\in\mathbb{C}[z],f^{\prime}\in\mathbb{C}[z^{\prime}]\) are a pair of polynomials satisfying \(f(z)=z^{k}f^{\prime}(\frac{1}{z})\). Sections of \(\mathcal{L}^{\otimes k}\) must be constant over all other components of \(Z_{g}\), so that we must also have \(f(0)=f^{\prime}(0)\). Therefore, sections of \(\mathcal{L}^{\otimes k}\) correspond to sections of \(\mathcal{O}_{\mathbb{P}^{1}}(k)\) that have the same value at \(0\) and \(\infty\). Hence we see that the graded algebra structure on \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\) is just isomorphic to the homogeneous coordinate ring of a degree-\(1\) line bundle on a nodal elliptic curve. One may show using an elementary argument that there is an equivalence of graded algebras
\[\bigoplus_{k=0}^{\infty}H^{0}(\mathcal{L}^{\otimes k})\cong\mathbb{C}[X,Y,Z]/( XYZ=Y^{3}+Z^{2})\]
where \(|X|=1,|Y|=2,|Z|=3\) (see [1, Proposition IV.4.6] or [1, §6.1]).
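As a quick check on the grading, the graded pieces of this ring match \(h^{0}(\mathcal{L}^{\otimes k})=k\) for \(k\geq 1\) on the nodal elliptic curve: the monomials of weighted degree \(1,2,3\) are \(\{X\}\), \(\{X^{2},Y\}\), \(\{X^{3},XY,Z\}\), and the first instance of the relation occurs in weighted degree \(6\), where the \(7\) monomials
\[X^{6},\;X^{4}Y,\;X^{2}Y^{2},\;Y^{3},\;X^{3}Z,\;XYZ,\;Z^{2}\]
are subject to the single relation \(XYZ=Y^{3}+Z^{2}\), leaving \(6\) independent sections.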
Explicit representatives for these generators in local coordinates on \(P_{0,1}\) are given by:
* \(X\) is represented by \((1+z,1+z^{\prime})\), and is \(1\) on all other components;
* \(Y\) is represented by \((z,z^{\prime})\), and is \(0\) on all other components;
* \(Z\) is represented by \((z^{2},z^{\prime})\), and is \(0\) on all other components.
which satisfy the relation \(XYZ=Y^{3}+Z^{2}\).
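In these coordinates the relation can be verified directly: both sides vanish on every component other than \(P_{0,1}\) (since \(Y=Z=0\) there), while on \(P_{0,1}\)
\[XYZ=\big((1+z)\,z\,z^{2},\;(1+z^{\prime})\,z^{\prime}\,z^{\prime}\big)=(z^{3}+z^{4},\;z^{\prime 2}+z^{\prime 3})=Z^{2}+Y^{3},\]
since \(Y^{3}=(z^{3},z^{\prime 3})\) and \(Z^{2}=(z^{4},z^{\prime 2})\).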
Next, we calculate the module structure on \(\bigoplus_{k\geq 0}H^{0}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})\), the balanced vector fields with coefficients in \(\mathcal{L}^{\otimes k}\). Over the component \(P_{0,1}\), a section of \(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}\) corresponds to a pair \((f,f^{\prime})\) with \(f\in\mathbb{C}[z],f^{\prime}\in\mathbb{C}[z^{\prime}]\) satisfying \(f(z)=z^{k}f^{\prime}(\frac{1}{z})\). Over all of \(Z_{g}\) a section \(s\) of \(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}\) corresponds to a tuple \((f,f^{\prime},a_{2},\cdots,a_{3g-3})\) with \(a_{i}\in\mathbb{C}\), subject to \(2g-4\) balancing conditions among the \(a_{i}\)s around each node except \(p_{0},p_{1}\), plus two further balancing conditions at \(p_{0}\) and \(p_{1}\), given by \(f(0)+a_{2}+a_{3}=0,f^{\prime}(0)+a_{4}+a_{5}=0\). Here the corresponding section \(s\) is given by the vector field \(a_{i}z\partial_{z}\) on component \(i\neq 1\) of \(Z_{g}\). We can find a basis of sections of \(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}\) consisting of:
1. sections with \(f(0)=f^{\prime}(0)\),
2. sections that are zero on \(P_{0,1}\).
There are \(k\) basis vectors of the first kind, corresponding to a basis of \(H^{0}(\mathcal{L}^{\otimes k})\); there are \(g-1\) basis vectors of the second kind, there being \(3g-4\) constants to choose, subject to \(2g-3\) balancing conditions. Hence,
\[H^{0}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})\cong H^{0}(\mathcal{ L}^{\otimes k})\oplus\mathbb{C}^{g-1}\]
and the action of the algebra \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\) preserves this direct sum decomposition. As a module over \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\), the first summand corresponds to \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\) as a module over itself. Of the generators of the ring \(\mathbb{C}[X,Y,Z]/(XYZ=Y^{3}+Z^{2})\), the sections \(Y,Z\) act on the \(\mathbb{C}^{g-1}\) summand by \(0\), while \(X\) is \(1\) outside of \(P_{0,1}\), and so restricts to an isomorphism of \(\mathbb{C}^{g-1}\subseteq H^{0}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{ \otimes k})\) to \(\mathbb{C}^{g-1}\subseteq H^{0}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{ \otimes(k+1)})\). Therefore, as a module,
\[\bigoplus_{k=0}^{\infty}H^{0}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k })\cong\mathbb{C}[X,Y,Z]/(XYZ=Y^{3}+Z^{2})\oplus(\mathbb{C}[X])^{g-1}\]
Next, we calculate the module structure on the cohomology groups \(\bigoplus_{k\geq 0}H^{1}(\mathcal{L}^{\otimes k})\). To calculate the cohomology of \(\mathcal{L}^{\otimes k}\) we use the Cech complex:
\[0\to\mathcal{L}^{\otimes k}(Z_{g})\to\bigoplus_{i}\mathcal{L}^{\otimes k}(U_{p_{i }})\stackrel{{\mathrm{d}}}{{\longrightarrow}}\bigoplus_{i<j} \mathcal{L}^{\otimes k}(U_{p_{i}}\cap U_{p_{j}})\to 0\]
where \(U_{i}\) is the same open cover used above. We have
\[\mathcal{L}^{\otimes k}(U_{p_{i}})\cong\mathbb{C}[x,y,z]/(xy,yz, xz)\] \[\mathcal{L}^{\otimes k}(U_{p_{j}})\cong\mathbb{C}[x^{\prime},y^{ \prime},z^{\prime}]/(x^{\prime}y^{\prime},y^{\prime}z^{\prime},x^{\prime}z^{ \prime})\] \[\mathcal{L}^{\otimes k}(U_{p_{i}}\cap U_{p_{j}})\cong\mathbb{C}[ x^{\pm}]\oplus\mathbb{C}[y^{\pm}]\oplus\mathbb{C}[z^{\pm}]\]
and the restriction map takes \(x,y,z\mapsto x,y,z\) and \(x^{\prime},y^{\prime},z^{\prime}\mapsto 1/x,1/y,1/z\) respectively, except on the component \(P_{0,1}\), where \(f^{\prime}(z^{\prime})\mapsto z^{k}f^{\prime}(\frac{1}{z})\). For \(k\geq 0\), this means that the image of d contains all polynomials with no constant terms, and so the cokernel of d is spanned by \(g-1\) classes represented by constant functions (which can be chosen to be zero on \(P_{0,1}\)) just as in the proof of Theorem 2.2 above. Therefore, \(H^{1}(\mathcal{L}^{\otimes k})\cong\mathbb{C}^{g-1}\) and \(Y,Z\in H^{0}(\mathcal{L}^{\otimes k})\) act by \(0\), while again since \(X\) is \(1\) on all components other than \(P_{0,1}\), multiplying by \(X\) gives an isomorphism \(H^{1}(\mathcal{L}^{\otimes k})\to H^{1}(\mathcal{L}^{\otimes(k+1)})\). Hence, as a \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\)-module,
\[\bigoplus_{k=0}^{\infty}H^{1}(\mathcal{L}^{\otimes k})\cong(\mathbb{C}[X])^{g- 1}\oplus\mathbb{C}\]
where the extra \(\mathbb{C}\) summand comes from the case \(k=0\) and so all positive-degree generators of \(R\) act by zero on this summand.
Finally, we need to calculate \(H^{1}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})\), again using the Cech complex:
\[0\to\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}(Z_{g})\to\bigoplus_{i} \widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}(U_{p_{i}})\stackrel{{ \mathrm{d}}}{{\longrightarrow}}\bigoplus_{i<j}\widetilde{T}_{Z_{g}} \otimes\mathcal{L}^{\otimes k}(U_{p_{i}}\cap U_{p_{j}})\to 0\]
where
\[\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}(U_{p_{i}}) \cong\mathbb{C}[x,y,z]/(xy,yz,xz)\langle x\partial_{x}-y\partial_{y},y\partial_{y}-z\partial_{z}\rangle\] \[\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}(U_{p_{j}}) \cong\mathbb{C}[x^{\prime},y^{\prime},z^{\prime}]/(x^{\prime}y^{\prime},y^{\prime}z^{\prime},x^{\prime}z^{\prime})\langle x^{\prime}\partial_{x^{\prime}}-y^{\prime}\partial_{y^{\prime}},y^{\prime}\partial_{y^{\prime}}-z^{\prime}\partial_{z^{\prime}}\rangle\] \[\bigoplus_{j}\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}(U_{p_{i}}\cap U_{p_{j}}) \cong\mathbb{C}[x^{\pm}]\langle\partial_{x}\rangle\oplus\mathbb{C}[y^{\pm}]\langle\partial_{y}\rangle\oplus\mathbb{C}[z^{\pm}]\langle\partial_{z}\rangle\]
and the restriction map takes \(x,y,z\mapsto x,y,z\) and \(x^{\prime},y^{\prime},z^{\prime}\mapsto 1/x,1/y,1/z\) respectively, except on the component \(P_{0,1}\), where \(f^{\prime}(z^{\prime})z^{\prime}\partial_{z^{\prime}}\mapsto-z^{k}f^{\prime}( \frac{1}{z})z\partial_{z}\). For \(k\geq 1\), this restriction map on \(U_{0}\cap U_{1}\) means the rotation numbers of sections of \(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k}\) over \(P_{0,1}\) need not be the same at \(p_{0}\) and \(p_{1}\), so, as in the proof of Theorem 2.2, all unbalanced vector fields are in the image of d, and d is surjective, meaning \(H^{1}(\widetilde{T}_{Z_{g}}\otimes\mathcal{L}^{\otimes k})=0\). For \(k=0\), the image of d contains only balanced vector fields and so \(H^{1}(\widetilde{T}_{Z_{g}})\cong\mathbb{C}\), and the product of this class with itself is therefore zero. \(\blacksquare\)
The cohomological degree-zero part of the above algebra, \(\bigoplus_{k\geq 0}H^{0}(\mathcal{L}^{\otimes k})\), is a homogeneous coordinate ring and can be calculated using an elementary Riemann-Roch argument: see [11, Proposition IV.4.6]. We can use this to see that if we start with \(\mathcal{L}^{\otimes 2}\) instead, we will get:
Proposition 2.7.: _There is an equivalence of graded algebras_
\[\bigoplus_{k=0}^{\infty}H^{0}(\mathcal{L}^{\otimes 2k})\cong\mathbb{C}[X,Y,Z]/(XYZ=Y ^{2}+Z^{4})\]
_where \(|X|=1,|Y|=2,|Z|=1\). Denote this graded algebra by \(R_{2}\)._
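As with Theorem 2.6, the gradings are consistent: the relation \(XYZ=Y^{2}+Z^{4}\) is homogeneous of weighted degree \(1+2+1=4\), and in low degrees
\[\dim(R_{2})_{1}=\#\{X,Z\}=2=h^{0}(\mathcal{L}^{\otimes 2}),\qquad\dim(R_{2})_{2}=\#\{X^{2},XZ,Z^{2},Y\}=4=h^{0}(\mathcal{L}^{\otimes 4}).\]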
This comes from the map \(Z_{g}\to\mathbb{P}(1,1,2)\) whose image is a quartic hypersurface in the weighted projective plane \(\mathbb{P}(1,1,2)\): see [11, §6.1].
Starting from this, it is not difficult to prove that
Theorem 2.8.: _If \(\mathcal{L}\) is a line bundle over \(Z_{g}\) that has degree \(1\) over one single \(\mathbb{P}^{1}\) component and is trivial over all the others, then there is an equivalence of graded algebras:_
\[\bigoplus_{k=0}^{\infty}H^{0}(\mathcal{L}^{\otimes 2k})\oplus H^{1}(\widetilde{ T}_{Z_{g}}\otimes\mathcal{L}^{\otimes 2k})\cong R_{2}\oplus\mathbb{C}\]
_where the \(\mathbb{C}\) summand is a square-zero extension by \(R_{2}/(X,Y,Z)\). Moreover, there is an equivalence of \(R_{2}\)-modules,_
\[\bigoplus_{k=0}^{\infty}H^{1}(\mathcal{L}^{\otimes 2k})\oplus H^{0}(\widetilde{ T}_{Z_{g}}\otimes\mathcal{L}^{\otimes 2k})\cong R_{2}\oplus(\mathbb{C}[X])^{2g-2} \oplus\mathbb{C}\]
_where \(\mathbb{C}\) is the \(R_{2}\)-module \(R_{2}/(X,Y,Z)\)._
There are analogous results for the graded rings obtained by starting with \(\mathcal{L}^{\otimes k}\) instead for \(k\geq 3\), corresponding to further embeddings of nodal elliptic curves into weighted projective spaces: see [11, §6.1] for details.
_Remark 5_.: The results from [11, 12] are stated with the assumption of smoothness, but they continue to apply in our case with the singular curve \(Z_{g}\).
## 3 Standard Lefschetz fibrations
In this section we review some of the important geometric setups that will be relevant in the A-side calculations, especially those related to the Seidel class.
Given a compactly-supported symplectomorphism \(\phi:M\to M\) of a Liouville domain \((M,\lambda_{M})\), define a **standard Lefschetz fibration**\(\pi:E\to\mathbb{C}\) for \(\phi\) to be an exact symplectic Lefschetz fibration over \(\mathbb{C}\) (in the sense of [10]) with total space a Liouville domain \((E,\lambda_{E})\) with the following properties:
* Each smooth fiber \(F_{t}=\pi^{-1}(t)\) is diffeomorphic to \(M\) and the restricted Liouville form \((F_{t},\lambda_{E}|_{F_{t}})\) makes the fiber \(F_{t}\) exact symplectomorphic to \((M,\lambda_{M})\) via symplectic parallel transport.
Using clockwise parallel transport around the boundary of the unit disk we get symplectomorphisms \(\phi_{t}:M\to F_{e^{it}}\) and identify \(\phi_{2\pi}:M\to M\) with the symplectomorphism \(\phi\). Then we require the additional condition:
* Outside of the disk \(\left\{\left|z\right|\leq 1-\varepsilon\right\}\), there is an isomorphism of \(\pi:E\to\mathbb{C}\) as an exact symplectic fiber bundle, with the symplectization of a mapping torus \(\mathbb{R}\times M_{\phi}\), where \(M_{\phi}\) is the mapping torus of \(\phi\) with the fiberwise Liouville 1-form induced from \(\lambda_{M}\). Here we view the symplectization of the mapping torus \(\mathbb{R}\times M_{\phi}\) as an exact symplectic fiber bundle with base \(\mathbb{R}\times S^{1}\), and fiber \((M,\lambda_{M})\).
One has a more general notion of a **standard Lefschetz fibration over a Riemann surface \(S\) with ends**: strip-like ends, cylindrical ends, and _twisting data_ (a choice of integer \(b_{i}\) for every boundary component and interior puncture), compare [14, p.239]. Here the counterclockwise parallel transport of \(\pi:E\to S\) along each boundary component or (sufficiently close to each) interior puncture is identified with the corresponding \(\phi^{b_{i}}\), and \(\pi:E\to S\) comes with isomorphisms (as exact symplectic fiber bundles) with the (positive or negative) symplectization of an open mapping torus of \(\phi^{b_{i}}\) over each cylindrical end, and with the trivial product bundle over each strip-like end. The above is a special case where \(S=\mathbb{C}\) with a positive cylindrical end outside \(\left\{\left|z\right|\leq 1-\varepsilon\right\}\) and twisting data given by 1.
If \(V\subseteq M\) is a framed exact Lagrangian sphere in \(M\), Seidel in [14] builds a canonical standard Lefschetz fibration over \(\mathbb{C}\) for the Dehn twist \(\phi_{V}\), with a single critical point of \(\pi\) over \(0\in\mathbb{C}\). We may then choose to identify the fiber over \(1\) with \(M\), and identify \(V\) with the vanishing cycle obtained by parallel transport along the ray \(\mathbb{R}_{\geq 0}\). By a lemma of [14] which allows us to identify a Lefschetz fibration at infinity with the mapping torus of its global monodromy, the second condition will then follow. By [13], there is an essentially unique way to associate a standard Lefschetz fibration to a Dehn twist on a Weinstein manifold, up to Weinstein homotopy. In general, of course, there is no canonical standard Lefschetz fibration associated to a symplectomorphism (despite the name).
For later use, we recall Seidel's construction of a standard Lefschetz fibration associated to a Dehn twist:
Proposition 3.1.: _Given a circle \(S^{1}\subseteq\Sigma\) inside a punctured Riemann surface \((\Sigma,\omega)\), we can choose a Liouville form \(\lambda_{\Sigma}\) so that we can construct the standard Lefschetz fibration for the Dehn twist \(\phi\) along this circle._
Proof.: Most of this is already done in e.g. [14]. In the following, we will closely follow the setup in [14] and explain the necessary modifications that need to be made to suit our purposes.
Fix positive real numbers \(\lambda,r>0\). We choose a local coordinate \((x,y)\in(-\lambda,\lambda)\times S^{1}\) for the region around the Dehn twist, over which the one-form is \(\lambda_{\Sigma}=x\,dy\). Fix a Morse function \(H_{0}\) on the complement of a small neighborhood of the circle \(\{x=0\}\subseteq\Sigma\), with two local minima corresponding to the fixed points \(e_{0}^{1}\) and \(e_{1}^{1}\). We take the local model of the fibration \(E\) over the disk \(\mathbb{D}_{r}\) to be the same as the one described in [14] equation (1.17), with one modification coming from the Hamiltonian perturbation \(H_{0}\):
\[\omega=d(xdy+(g(|x|)-1)\widetilde{R}_{r}(|x|)d\theta+H_{0}d\theta),\]
where \(g:[0,\lambda)\to\mathbb{R}\) is a smooth, non-decreasing function with \(g(s)=0\) for \(s\) small and \(g(s)=1\) for \(s\) close to \(\lambda\), and \(\widetilde{R}_{r}(s)=\frac{1}{2}(s-\sqrt{s^{2}+r^{2}/4})\).
Outside of the twist region, the fibration over \(\mathbb{C}\) is defined as the trivial product together with the
fiberwise two-form
\[\omega=\omega_{\Sigma}+d(H_{0}d\theta).\]
To extend the fibration to \(\mathbb{C}\setminus\mathbb{D}_{r}\), we pick a non-decreasing smooth function \(\psi:[0,\infty)\to[0,\infty)\) with the following properties:
1. \(\psi(s)=s\) for \(s\in[0,r]\), and
2. There exist \(R>r\) and \(C>0\) such that \(\psi(s)=C\) for \(s>R\).
Now we define the local structure of the fibration \(E\) over \(\mathbb{C}\) to be given by the two-form
\[\omega=d(xdy+(g(|x|)-1)\widetilde{R}_{\psi(r)}(|x|)d\theta+H_{0}(x,y)d\theta).\]
It is not difficult to see that over \(\mathbb{C}\setminus\mathbb{D}_{R}\), the fibration is isomorphic (as an exact Lefschetz fibration) to the symplectization of the mapping torus \((R,\infty)\times M_{\phi}\), where \(\phi\) is the time-\(1\) map of \((g(|x|)-1)\widetilde{R}_{C}(|x|)+H_{0}\). \(\blacksquare\)
_Remark 6_.: Note that our \(\pi:E\to\mathbb{C}\) is an exact symplectic Lefschetz fibration in the sense of [10]: the total space \((E,\lambda_{E})\) is made into a Liouville domain by taking a horizontal convex slice and then rounding the corners. Therefore, our definition of the Seidel class involves only counting those sections contained inside this domain; if wrapping is performed at infinity (in the open setting), then this is performed in the completion of the domain and does not affect the count for the Seidel class.
_Remark 7_.: If we choose a different Liouville form \(\lambda^{\prime}_{\Sigma}\) that is only different from \(\lambda_{\Sigma}\) outside of the neighborhood (described in the above proof) of the circle, the resulting Lefschetz fibration, when viewed as a symplectic manifold, will be the same.
For the non-exact case, let \(\Sigma\) denote a closed Riemann surface with symplectic form \(\omega_{\Sigma}\). Using the same techniques as in the exact case (again, we need to fix a local primitive \(\lambda_{\Sigma}=xdy\) of \(\omega_{\Sigma}\) in a neighborhood of the circle with coordinates \((x,y)\in(-\lambda,\lambda)\times S^{1}\)) we have
Proposition 3.2.: _We can construct a (non-exact) Lefschetz fibration \(E\to\mathbb{C}\) with the following properties._
* _The generic smooth fiber is_ \(\Sigma\)_, a closed Riemann surface with symplectic form_ \(\omega_{\Sigma}\)_._
* _The only critical point lies over_ \(0\in\mathbb{C}\)_, with the monodromy map a Dehn twist about a circle_ \(S^{1}\subseteq\Sigma\)_._
* _Away from the origin_ \(0\in\mathbb{C}\)_, the Lefschetz fibration is isomorphic as a symplectic fiber bundle to the symplectization of a mapping torus_ \(M_{\phi}\times\mathbb{R}\)_, where_ \(M_{\phi}\) _is the mapping torus and the symplectic structure on_ \(M_{\phi}\times\mathbb{R}\) _is given by_ \(dr\wedge d\theta+\omega\)_._
_Hence we have a (nonexact) standard Lefschetz fibration for the Dehn twist of \(\Sigma\) around this circle._
For future use we now construct a Lefschetz fibration with the following properties:
Proposition 3.3.: _Let \(\Sigma\) be a Riemann surface (closed or with punctures). Let \(V_{-},V_{+}\subseteq\Sigma\) be vanishing circles along which we perform Dehn twists \(\phi_{V_{i}}\). Then we can construct an (exact or non-exact, respectively) Lefschetz fibration \(\pi:E\to\mathbb{C}\) with the following properties_
* _The generic smooth fiber is_ \(\Sigma\)_, with symplectic form_ \(\omega_{\Sigma}\) _(in the exact case we also equip it with a primitive_ \(\lambda_{\Sigma}\) _satisfying_ \(d\lambda_{\Sigma}=\omega_{\Sigma}\)_)._
* _There are two critical points, and they lie over_ \(\pm 1\in\mathbb{C}\)_. The monodromy map around_ \(\pm 1\) _is a Dehn twist around_ \(V_{\pm}\subseteq\Sigma\)_._
* _Outside of a large disk, the Lefschetz fibration is isomorphic as a symplectic fiber bundle to the symplectization of a mapping torus_ \(M_{\phi}\times\mathbb{R}\) _of_ \(\phi=\phi_{V_{-}}\phi_{V_{+}}\)_, where_ \(M_{\phi}\) _is the mapping torus with the symplectic structure on_ \(M_{\phi}\times\mathbb{R}\) _given by_ \(dr\wedge d\theta+\omega\)_._
_Hence we have a standard Lefschetz fibration for the composition of Dehn twists of \(\Sigma\) around this \(V_{+},V_{-}\)._
Proof.: Take a pair of pants with punctures \(\pm 1\in\mathbb{C}\) and \(\infty\). Construct, as in [10], a symplectic fiber bundle over this pair of pants with fiber \(\Sigma\), monodromy \(\phi_{V_{\pm}}\) around \(\pm 1\) and monodromy \(\phi\) around \(\infty\). Next, near \(\pm 1\) glue in the standard Lefschetz fibrations with one critical point (and monodromy \(\phi_{V_{\pm}}\)). This is possible because sufficiently far away from the singular fiber the standard Lefschetz fibrations we constructed in Proposition 3.1 are isomorphic to the symplectization of a mapping torus.
## 4 Symplectic Cohomology of Singular Hypersurfaces
Suppose \(f:X\to\mathbb{C}\) is a holomorphic function on a Stein manifold \(X\), having general fiber \(M\) and a single singular fiber \(M^{0}\) over \(0\); and suppose \(\phi:M\to M\) is the counterclockwise monodromy symplectomorphism of the fiber. The Fukaya category \(\mathcal{F}(M^{0})\) of the singular fiber is defined in [11], and is quasi-equivalent to the localization of the Fukaya category \(\mathcal{F}(M)\) of the smooth fiber at Seidel's natural transformation \(s:\mathrm{id}\to\phi\) (see below). The purpose of this section is to prove:
Theorem 4.1.: _Suppose \(M\) is a non-degenerate Liouville manifold in the sense of [1, Definition 1.1]: then the twisted closed-open map \(\mathcal{CO}_{\phi}\) is an isomorphism and there is an equivalence of graded algebras:_
\[\mathrm{HH}^{*}(\mathcal{F}(M^{0}))\cong\varinjlim_{d}\mathrm{HF}^{*}(\phi^{d})\]
_where the connecting maps in the direct limit are given by multiplication by the Seidel class \(S\) in \(\mathrm{HF}^{0}(\phi)\) (see Definition 4.4 below)._
The proof of the first part of this theorem was known to Ganatra, as a generalization of the arguments from [10]; we give an outline below for completeness. Compare also [1, 11], [12, Conjecture 7.17], forthcoming work of Shaoyun Bai and Paul Seidel [2], as well as forthcoming work of Shuo Zhang. By Seidel's split-generation theorem [12], this non-degeneracy hypothesis will hold whenever \(f:X\to\mathbb{C}\) is an exact symplectic Lefschetz fibration coming from a Lefschetz pencil with a smooth fiber at infinity.
_Remark 8_.: In [11], the monodromy \(\phi\) was taken to be the clockwise monodromy, and \(s\) to be a natural transformation to the identity from the monodromy functor. This difference is entirely a matter of convention and the categories resulting from localization will be quasi-equivalent. The
difference in convention is chosen to be closer to [14]. Moreover, the definition of \(s\) used in [13], as the cone of the \(\cap-\cup\) adjunction, can be shown to be equivalent to the definition given in [14]: this is a result of [1].
Our setting is substantially simpler than that considered in [1]. Since we consider only the case where \(f:X\to\mathbb{C}\) has one Lefschetz critical value at \(0\), the wrapped Fukaya category \(\mathcal{W}(X,f)\) is generated by the thimble \(T\); any definition of the \(\cap\) functor will take \(\cap T\) to be the vanishing cycle \(V\), then the exact triangle
simply becomes, for any Lagrangian \(L\) in \(M\),
which is exactly Seidel's exact triangle from [14], and \(s\) is exactly his section counting map from [14].
_Remark 9_.: **Signs:** to show that the result of Theorem 4.1 holds with \(\mathbb{C}\) coefficients, as we will use in §6 and §7, we will need to ensure that our Floer-theoretic arguments hold with signs. If one uses the standard setup of orientation lines and canonical orientations, as in [14, 15], by choosing consistent orientations of moduli spaces of domains and equipping our Lagrangians with gradings, spin structures and orientations, our moduli spaces are canonically oriented relative to the orientation lines at the ends (see [15, Lemma B.1] or [14, (12.8)]). Then the fact that our arguments work with signs is essentially automatic, with two minor subtleties:
* Since the holomorphic curves we consider live in the total space of a Lefschetz fibration \(E\) rather than in \(\Sigma\) itself, we will need to use a (canonical) 'stabilization' identification of the orientation lines for \(p\in L_{i}\cap L_{i+1}\) and \(\widetilde{p}\in\widetilde{L}_{i}\cap\widetilde{L}_{i+1}\), given by lifting the brane structures to \(E\), in order to have signs for our section counting maps. We have a similar identification for fixed points of \(\phi\).
* In our case of a Riemann surface \(\Sigma\), we have essentially at most two choices of spin structures on any connected Lagrangian: in the following, we will implicitly choose the non-trivial (bounding) spin structure on compact Lagrangians, so that Seidel's vanishing result continues to hold when taken with signs (see [14, Example 17.3]).
For further details on signs and orientations, the reader can refer to the forthcoming [1].
_Remark 10_.: Note that the results in this section are stated in the setting where \(M\) is an exact symplectic manifold, and so do not directly apply to those calculations in SS5 where \(\Sigma_{g}\) is a closed Riemann surface. Nevertheless, one expects the same theorems to hold also in the monotone setting, and with the same proofs, provided the appropriate technology were developed.
### 4.1 Compatibility with wrapping
In the following we shall also want to consider operations between _wrapped_ Floer complexes, in the cases where \(M\) is not compact. Thus we will want to consider domains that carry (implicit) rescaling data. This of course poses no problem if we cut off our rescaling diffeomorphism so that it is supported away from the region in which the compactly-supported symplectomorphism takes place. The following is therefore largely a straightforward combination of [12, §17] and [13, §4].
Suppose \((M,\lambda_{M})\) is a Liouville domain, with boundary \(\partial M\). Denote its completion to a Liouville manifold by \(\hat{M}\), given by attaching \(M\) to \(\partial M\times[0,\infty)_{r}\) with Liouville form \(\lambda_{\hat{M}}=r\lambda_{\partial M}\). We say a Hamiltonian on \(\hat{M}\) is **admissible** if \(\partial_{r}H(x,r)>0,\partial_{x}H(x,r)=0\) on \(\partial M\times(0,\infty)\), if \(H(x,r)=\frac{1}{2}r^{2}\) outside of a compact set, and \(\partial_{r}H(x,0)=0\). We say an almost-complex structure \(J\) is of \(c\)**-rescaled contact type** if \(\lambda(J(\partial_{r}))=-1\) for \(r\) sufficiently small, and \(cr^{-1}\lambda\circ J=\mathrm{d}r\) for some constant \(c\) outside of some compact set. Our Floer data will always be a choice of admissible Hamiltonian \(H_{t}\) and rescaled contact-type almost-complex structure \(J_{t}\) for each generator of a Floer complex (possibly \(t\)-dependent, to break the \(S^{1}\) symmetry). We say a Lagrangian \(L\) inside \(\hat{M}\) is **strictly cylindrical** if \(L=\Lambda\times[0,\infty)\) inside \(\partial M\times[0,\infty)\), where \(\Lambda\subseteq\partial M\) is a compact Legendrian submanifold of \((\partial M,\lambda|_{\partial M})\). For our purposes in §5, \(M\) will be a Stein manifold, so the Fukaya category is generated by strictly-cylindrical Lagrangians (see [11]), and defining our operations for these Lagrangians is sufficient.
It is important to note that in the following sections, when choosing Floer data and perturbation data, we use the same classes of Hamiltonians and almost-complex structures as we use to define the operations on the fixed point Floer homology groups on \(M\) as in §2 of [10] away from the puncture regions (if any), and use admissible Hamiltonians and almost complex structures of rescaled contact type near the punctures. We do this in order for the closed-open map to respect the product structure (Theorem 4.7). The details of the product operation in fixed point Floer (co)homology are described in Section 5.
Define a **rescaling diffeomorphism**\(\psi^{\rho}:\hat{M}\to\hat{M}\) as follows: \(\psi^{\rho}=\mathrm{id}\) on \(M\), and on \(\partial M\times[0,\infty)\), it is \(\psi^{\rho}(x,r)=(x,f_{\delta,\rho}(r))\), where \(f_{\delta,\rho}(r)\) is a small convex smoothing of:
\[f_{\delta,\rho}(r)=\left\{\begin{aligned} \rho r,&\qquad r\geq 2\delta,\\ \delta^{-1}(\rho-1)r^{2}-(\rho-2)r,&\qquad\delta\leq r\leq 2\delta,\\ r,&\qquad 0\leq r\leq\delta.\end{aligned}\right.\]
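With this arrangement of the pieces the formula is continuous at the break points: at \(r=\delta\) and \(r=2\delta\) the middle expression gives
\[\delta^{-1}(\rho-1)\delta^{2}-(\rho-2)\delta=\delta,\qquad\delta^{-1}(\rho-1)(2\delta)^{2}-(\rho-2)(2\delta)=2\rho\delta,\]
matching the values of \(r\) and \(\rho r\) respectively, so the piecewise function can be smoothed as indicated.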
Given an admissible Hamiltonian \(H\), let \(h_{\delta,\rho}(r)\) denote the unique smooth solution to the differential equation:
\[H(f_{\delta,\rho}^{2}(r))h^{\prime}(r)+f_{\delta,\rho}^{\prime}(r)h(r)\frac{ \partial H}{\partial r}-\frac{\partial H}{\partial r}=0\]
with \(h_{\delta,\rho}(0)=1\). The diffeomorphism \(\psi^{\rho}\) has the following important properties:
Lemma 4.2.: _Given an admissible Hamiltonian \(H\), choose \(\delta\) sufficiently small so that all of the \(1\)-periodic orbits of \(H\) take place outside \(\partial M\times[0,2\delta]\). Then:_
1. _For all_ \(\rho>0\)_,_ \(\psi^{\rho}\) _takes strictly cylindrical Lagrangians to strictly cylindrical Lagrangians;_
2. _The pullback_ \(\rho^{-2}(\psi^{\rho})^{*}H\) _is also an admissible Hamiltonian;_
3. _If_ \(J\) _is an almost-complex structure on_ \((\hat{M},\lambda)\) _that is of rescaled contact type, then the pullback_ \((\psi^{\rho})^{*}J\) _is an almost-complex structure for the same_ \((\hat{M},\lambda)\) _and is also of rescaled contact type (for a different value of_ \(c\)_)._
4. _If_ \(\phi\) _is any compactly-supported symplectomorphism of_ \(M\) _extended to_ \(\hat{M}\)_, then_ \(\psi^{\rho}\) _commutes with_ \(\phi\)_._
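For instance, property (2) can be seen explicitly outside a compact set, where \(H(x,r)=\tfrac{1}{2}r^{2}\) and \(f_{\delta,\rho}(r)=\rho r\), so that
\[\rho^{-2}\big((\psi^{\rho})^{*}H\big)(x,r)=\rho^{-2}\cdot\tfrac{1}{2}(\rho r)^{2}=\tfrac{1}{2}r^{2},\]
which is again of the quadratic form required of an admissible Hamiltonian at infinity.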
Since \(\rho^{-2}(\psi^{\rho})^{*}H\) is an admissible Hamiltonian, so is \(h_{\delta,\rho}(r)(\psi^{\rho})^{*}H\). Moreover, under \(\psi^{\rho}\), orbits of \(H\) are in bijection with periodic orbits of \(h_{\delta,\rho}(r)(\psi^{\rho})^{*}H\) on \(\partial M\times[0,\infty)\):
Proposition 4.3.: _Given any compactly-supported symplectomorphism \(\phi:M\to M\), there is a canonical isomorphism of fixed point Floer cohomology complexes:_
\[\mathrm{CF}^{*}(\phi,H_{t},J_{t})\cong\mathrm{CF}^{*}(\phi,h_{\delta,\rho}( \psi^{\rho})^{*}(H_{t}),(\psi^{\rho})^{*}J_{t})\]
_Likewise, given any pair of strictly cylindrical exact Lagrangians \(L_{0},L_{1}\), we have a canonical isomorphism of wrapped Lagrangian Floer cochain complexes:_
\[\mathrm{CF}^{*}(L_{0},L_{1},H_{t},J_{t})\cong\mathrm{CF}^{*}(\psi^{\rho}L_{0}, \psi^{\rho}L_{1},h_{\delta,\rho}(\psi^{\rho})^{*}(H_{t}),(\psi^{\rho})^{*}J_{t }).\]
The isomorphism of the two complexes arises from the fact that \(\phi\) is the identity outside of a compact set: the left hand side is the fixed point Floer cohomology of \(\phi\) perturbed by the Hamiltonian flow of \(H_{t}\) near the punctures, using the almost complex structure \(J_{t}\) near the punctures to define the differential. The right hand side is the fixed point Floer cohomology of \(\phi\), perturbed by \(h_{\delta,\rho}(\psi^{\rho})^{*}(H_{t})\) near the punctures, with the almost complex structure \((\psi^{\rho})^{*}J_{t}\) used to define the differential. The details of how we perform wrapping on fixed point Floer cohomology are described in Section 5.1.
Following [10], the boundary of the total space \(E\) as a Liouville manifold can be separated into two parts, the vertical part \(\partial^{v}E\cong\partial D\times M\) and the horizontal part \(\partial^{h}E\cong D\times\partial M\). We may likewise define a diffeomorphism \(\widetilde{\psi}^{\rho}:\hat{E}\to\hat{E}\) that is the identity in the interior \(E\), acts by \(\psi^{\rho}\) fiberwise on \(\partial^{h}E\), and by the standard rescaling function on \(\mathbb{C}\) that is constant in the fibers on \(\partial^{v}E\). By our above observation, \(\widetilde{\psi}^{\rho}\) takes fibered Lagrangians in \(E\) to fibered Lagrangians for all \(\rho>0\). Moreover, if \(\widetilde{L}_{i}\) denotes the parallel transport of a cylindrization of a Lagrangian \(L_{i}\subseteq\Sigma\) along a radial arc, then \(\widetilde{\psi}^{\rho}(\widetilde{L}_{i})=\widetilde{\psi^{\rho}L_{i}}\).
Using our rescaling functions \(\psi^{\rho}\) and \(\widetilde{\psi}^{\rho}\) we may now use identical definitions (as in [11, Definitions 4.5,4.7,4.7]) of when perturbation data are adapted to a choice of Floer data for a Riemann surface \(S\) with weighted strip-like and cylindrical ends, carrying a standard Lefschetz fibration \(\pi:E\to S\) (itself adapted to the ends of \(S\) in the sense of [10, (17b)]). This perturbation data consists of a choice of a 1-form \(K\in\Omega^{1}(S,C^{\infty}(E))\) and a domain-dependent almost-complex structure \(J_{S}\) on \(E\), satisfying a list of compatibility conditions with strip-like ends, weighting data, boundary conditions, Floer data, and fibration structures, which can be found in [11, §4.1] and [10, §17]. As stated above, because our rescaling by \(\psi^{\rho}\) and twisting by \(\phi\) take place in disjoint regions of \(M\) there is no obstruction to finding Floer data and perturbation data satisfying both sets of conditions. We may then carry through the same analysis as in [11, 10] to define operations on (wrapped) Floer complexes: we shall leave this data implicit unless it is significant to the argument at hand.
For example, the product on fixed point Floer cohomology arises from a choice of a certain symplectic fiber bundle \(\pi:E\to S\) where \(S\) is a three-punctured sphere equipped with two negative and one positive cylindrical ends, of weights \(n,m\), and \(m+n\), respectively. Counting isolated points in a moduli space of perturbed pseudoholomorphic sections defines a map
\[\operatorname{CF}(\phi^{n},H_{t},J_{t})\otimes\operatorname{CF}(\phi^{m},H_{t},J _{t})\to\operatorname{CF}(\phi^{n+m},h_{\rho,\delta}(\psi^{\rho})^{*}(H_{t}),( \psi^{\rho})^{*}J_{t})\cong\operatorname{CF}(\phi^{m+n},H_{t},J_{t})\]
which is exactly the product as defined in [10], extended to the wrapped setting.
### Section-Counting Maps
For the purposes of illustration, we will use a dashed curve on a figure representing a Riemann surface to indicate any boundary marked points that are mapped to an intersection between \(L_{i}\) and \(\phi^{d}(L_{i+1})\), or cylindrical ends that are asymptotic to an orbit of \(\phi^{d}\). This is purely illustrative: our surfaces contain no seams or cuts (though the analysis could alternatively be set up this way), and they do not indicate that marked points must lie on the same line. Interior critical values of the Lefschetz fibration are denoted by solid dots; domains biholomorphic to \(D\) have solid boundary, while those biholomorphic to \(\mathbb{C}\) have dashed boundary.
**Definition 4.4**.: _Given a standard symplectic Lefschetz fibration \(\pi:E\to\mathbb{C}\) with global monodromy \(\phi\), and a choice of compatible perturbation data \((K,J)\), we define a moduli space \(\mathcal{M}(E,J,K,p)\) to be the set of \(K\)-perturbed \(J\)-holomorphic sections of the Lefschetz fibration \(\pi:E\to\mathbb{C}\) that are horizontally \(C^{1}\)-asymptotic to the orbit of \(\phi\) starting at \(p\) (in the sense that \(u(re^{\mathrm{i}t})\) converges to \(\phi_{t}(p)\) in \(C^{1}(M_{r})\) as \(r\to\infty\))._
_For generically chosen compatible perturbation data \((K,J)\), this moduli space is a topological manifold (see [11, p.237]) and compact when the dimension is zero. In this case we define the **Seidel element** to be \(S\in\operatorname{CF}(\phi)\) given by the count of dimension zero moduli spaces,_
\[S=\sum_{p\in\operatorname{Fix}(\phi)}\#\mathcal{M}(E,J,K,p)\;[p]\]
_with canonically determined signs. In the wrapped case we equip \(\mathbb{C}\) with a weight-\(1\) positive cylindrical end._
One may verify using standard methods that this is indeed a cocycle, and we call the result the **Seidel class**, though this could be called a special kind of Borman-Sheridan class.
Figure 4: Domains used to define Seidel element.
_Remark 11_.: Because of the non-uniqueness of standard Lefschetz fibrations, this class may depend on the choice of standard Lefschetz fibration for \(\phi\) when \(\phi\) is not a single Dehn twist.
We have a similar construction in the open sector: given a spin exact cylindrical Lagrangian \(L\) and a vanishing cycle \(V\) inside a Liouville manifold \(M\), Seidel in [14, (17d)] defines a cocycle \(s\in\operatorname{CF}(L,\phi_{V}(L))\) via a count of sections. The arguments in [14, §17] essentially show that this extends to a degree-\(0\) natural transformation \(\operatorname{id}\to\phi_{V}\) between \(A_{\infty}\)-functors on the Fukaya category \(\mathcal{F}(M)\), and hence an element of \(\operatorname{HH}^{0}(\mathcal{F}(M),\phi_{V})\) which we call **Seidel's natural transformation**. Seidel's construction applies more generally: a count of suitably perturbed sections of a standard Lefschetz fibration for \(\phi\) over a domain (modulo reparametrization) such as in Figure 5 defines a map:
\[s_{k}:\operatorname{CF}(L_{k-1},L_{k})\otimes\cdots\otimes\operatorname{CF}(L _{0},L_{1})\to\operatorname{CF}(L_{0},\phi(L_{k}))\]
which gives a term of a natural transformation \(s:\operatorname{id}\to\phi\).
Likewise, counting sections of a standard symplectic fibration over domains as in Figure 6:
\[\mu_{\phi}^{k,\ell}(x_{k},x_{k-1},\ldots,x_{1},y_{1},x_{1}^{\prime},\ldots,x_{\ell}^{\prime})=\sum_{y_{0}\in L_{\ell}^{\prime}\cap\phi(L_{k})}\#\mathcal{M}_{k,\ell}(E_{k,\ell},\phi,J,K;p,x_{k},\ldots,x_{1},y_{1},x_{1}^{\prime},\ldots,x_{\ell}^{\prime},y_{0})\;[y_{0}]\]
with canonically determined signs, defines a term of the bimodule structure on \(\Gamma_{\phi}\):
\[\mu_{\phi}^{k,\ell}:\operatorname{CF}(L_{k-1},L_{k})\otimes\cdots\otimes\operatorname{CF}(L_{0},L_{1})\otimes\operatorname{CF}(L_{0}^{\prime},\phi(L_{0}))\otimes\operatorname{CF}(L_{1}^{\prime},L_{0}^{\prime})\otimes\cdots\otimes\operatorname{CF}(L_{\ell}^{\prime},L_{\ell-1}^{\prime})\to\operatorname{CF}(L_{\ell}^{\prime},\phi(L_{k}))\]
Figure 5: Domains used to define Seidelβs natural transformation.
Figure 6: Domains used to define bimodule structure for \(\phi\).
### Twisted Closed-Open Maps
To relate the Seidel class to the Seidel natural transformation we want to consider a twisted closed-open map
\[\operatorname{\mathsf{CO}}_{\phi}:\operatorname{HF}^{*}(\phi)\to\operatorname{HH }^{*}(\mathcal{F}(M),\Gamma_{\phi})\]
where \(\Gamma_{\phi}\) is the \(A_{\infty}\)-bimodule induced by the symplectomorphism \(\phi\) of a Liouville manifold \(M\) (see [10]), and \(\operatorname{HH}^{*}(\mathcal{F}(M),\Gamma_{\phi})\) denotes the Hochschild cohomology of \(\mathcal{F}(M)\) with coefficients in this bimodule.
Definition 4.5: _Let \(\mathcal{Q}_{k}\) be the moduli space of closed disks \(S_{k}\) with \(k\) negative boundary points \(p_{1},\dots,p_{k}\) and one positive boundary marked point \(p_{0}\) fixed at \(1\); as well as an interior negative puncture fixed at \(0\), equipped with a negative cylindrical end. For each such \(S_{k}\), we equip it with twisting data given by \(1\) at the interior puncture, and \(0\) over every boundary component except that between \(p_{k}\) and \(p_{0}\). Then we fix a standard symplectic fiber bundle \(\pi:E_{k}\to S_{k}\) with fiber \(M\), compatible with ends and twisting data._
_Each of these we equip with choices of Floer data and Lagrangian labels \(L_{0},\dots,L_{k}\subseteq\Sigma\), modified so that there are counterclockwise moving boundary conditions along the boundary segment between \(p_{k}\) and \(p_{0}\) given by the isotopy \(\phi\) (cf. [10, p.244])._
_Given choices of compatible perturbation data \((K,J)\), let \(\mathcal{M}_{k}(E_{k},\phi,J,K,p,x_{k},\dots,x_{1},x_{0})\) denote the moduli space of \(K\)-perturbed \(J\)-holomorphic sections \(u:S_{k}\to E_{k}\) over some domain \(S_{k}\in\mathcal{Q}_{k}\), satisfying Lagrangian boundary conditions along \(\widetilde{L}_{0},\dots\widetilde{L}_{k}\) (the parallel transport of \(L_{0},\dots,L_{k}\) along \(\partial S_{k}\)), that are horizontally asymptotic to the orbit \(p\) of \(\phi\) around \(0\)._
_For generic consistent perturbation data \((K,J)\), these moduli spaces are topological manifolds, compact when the dimension is zero, and we may define the **twisted closed-open map**\(\operatorname{\mathsf{CO}}_{\phi}:\operatorname{CF}^{*}(\phi)\to \operatorname{CC}^{*}(\mathcal{F}(M),\Gamma_{\phi})\) as follows (cf. [1, p.66])._
_Given a fixed point \(p\in\operatorname{Fix}(\phi)\) and morphisms in \(\mathcal{F}(M)\) given by \(x_{i}\in\operatorname{CF}(L_{i-1},L_{i})\), we define a Hochschild cochain \(\operatorname{\mathsf{CO}}_{\phi}(p)\) in_
\[\operatorname{CC}^{*}(\mathcal{F}(M),\Gamma_{\phi})=\prod_{L_{0},\dots,L_{k}} \operatorname{Hom}(\operatorname{CF}(L_{k-1},L_{k})\otimes\dots\otimes \operatorname{CF}(L_{0},L_{1}),\operatorname{CF}(L_{0},\phi(L_{k})))\]
_via a sum over \(k\geq 0\) of dimension-zero moduli spaces:_
\[\operatorname{\mathsf{CO}}_{\phi}^{k}(p)(x_{k},\dots,x_{1})=\sum_{x_{0}\in L_ {k}\cap\phi(L_{0})}\#\mathcal{M}_{k}(E_{k},\phi,J,K;p,x_{k},\dots,x_{1},x_{0} )\;[x_{0}]\]
_with their canonically determined signs, as illustrated in Figure 7 below._
Again, one can show using standard methods that this gives a chain map and so descends to a map \(\operatorname{\mathsf{CO}}_{\phi}:\operatorname{HF}^{*}(\phi)\to\operatorname{HH}^{* }(\mathcal{F}(M),\Gamma_{\phi})\).
Theorem 4.6.: _Given a choice of standard Lefschetz fibration for \(\phi\), Seidel's natural transformation \(s\in\operatorname{HH}^{*}(\mathcal{F}(M),\phi)\) is the image under the twisted closed-open map \(\operatorname{\mathsf{CO}}_{\phi}\) of the Seidel class \(S\in\operatorname{HF}(\phi)\)._
Proof.: This argument is a fairly straightforward combination of the compactness arguments of [1, §4] applied to the section-counting maps of [1, §17]. Let \(\mathcal{Q}_{k}\) be the moduli space of closed disks \(D_{k}\) with \(k\) incoming boundary marked points \(p_{i}\), one outgoing marked point \(p_{0}\) fixed at \(1\), and a distinguished point at \(0\) which will be the critical value of a Lefschetz fibration. We construct a family of standard Lefschetz fibrations parametrized by \(r\in(0,1]\), living over each domain \(S_{k}\in\mathcal{Q}_{k}\), with Lefschetz fibration \(E_{r}\) pulled back from the chosen standard Lefschetz fibration \(E\to\mathbb{C}\) for \(\phi\) by the map \(z\to z/r\) for \(r\neq 0\). This has the effect of flattening the Lefschetz fibration over the complement of \(D_{r(1-\varepsilon)}\). Denote the extended moduli space of sections of this Lefschetz fibration (with appropriate perturbation data \((J,K)\)) by \(\widetilde{\mathcal{M}}_{k}(E_{r},\phi,J,K,x_{0},x_{1},\dots,x_{k})\), consisting of pairs \((u,r)\) where \(u:D_{k}\to E_{r}\) is a \(K\)-perturbed \(J\)-holomorphic section of the Lefschetz fibration \(E_{r}\) over \(S_{k}\in\mathcal{Q}_{k}\) satisfying the boundary conditions described in Definition 4.5 above. After choosing suitably generic perturbation data, when the dimension is zero, counting elements of this moduli space yields a map:
\[T^{k}:\operatorname{CF}(L_{k-1},L_{k})\otimes\dots\otimes\operatorname{CF}(L_ {0},L_{1})\to\operatorname{CF}(L_{0},\phi(L_{k}))\]
of degree \(-k-1\), giving an element in \(CC^{-1}(\mathcal{F}(M),\phi)\).
Figure 7: Domains used to define twisted closed-open map.
Taking the Gromov compactification of \(\widetilde{\mathcal{M}}_{k}(E,\phi,J,K,x_{0},x_{1},\ldots,x_{k})\) gives a manifold fibered over \([0,1]\), and the codimension-1 boundary is covered by the union of the images of the natural inclusions of the moduli spaces of the form below (see [1, 10] for this kind of diagrammatic proof). As well as taking our perturbation data to be universal and consistent under gluing, one must verify that the Lefschetz fibrations constructed are consistent under gluing of components inside the boundary strata of Deligne-Mumford moduli space (see [10, p.27] for the notion of gluing Lefschetz fibrations and its effect on section-counting maps). This poses no difficulty in our case since Lefschetz fibrations with a single critical point are essentially unique in a strong sense.
1. At \(r=0\) we have Figure 9, which represents the closed-open map applied to the Seidel class;
2. At \(r=1\), we have Figure 10, which represents Seidel's natural transformation;
Figure 8: Domains used to define the Hochschild cochain \(T\).
Figure 9: Domains representing the closed-open map applied to the Seidel element
3. We have the Deligne-Mumford degenerations in Figures 11 and 12 of \(S_{k}\) which represent (respectively) the first and second terms of the Hochschild coboundary of \(T\):
Recall that the Hochschild differential is given by:
\[d_{CC^{*}}(T)(x_{k}\otimes\cdots\otimes x_{1})=\sum_{i=1}^{k}\sum_{\ell=0}^{k-i}(-1)^{\maltese^{i-1}}T(x_{k},\ldots,x_{i+\ell+1},\mu^{\ell+1}(x_{i+\ell},\ldots,x_{i}),x_{i-1},\ldots,x_{1})\] \[+\sum_{i=1}^{k}\sum_{\ell=0}^{k-i}(-1)^{|T|(\maltese^{i-1}+1)+1}\mu_{\phi}^{k-i-\ell,i-1}(x_{k},\ldots,x_{i+\ell+1},T(x_{i+\ell},\ldots,x_{i}),x_{i-1},\ldots,x_{1})\]
where \(\mu_{\phi}^{k,\ell}\) denotes the \(A_{\infty}\)-bimodule operations for \(\phi\) and \(\maltese^{j}=|x_{1}|+\cdots+|x_{j}|-j\).
Figure 11: Domains representing \(T\) composed with the \(A_{\infty}\) operations.
Figure 12: Domains representing the \(A_{\infty}\)-bimodule operations applied to \(T\).
Figure 10: Domains used to define the Seidel natural transformation.
4. Finally, we have the strip-breaking degenerations in Figure 13, which represent the remaining part of the Hochschild coboundary of \(T\): \[\sum_{i=1}^{k}(-1)^{|T|(\maltese^{i-1}+1)+1}T^{k}(x_{k},\dots,\mu^{1}(x_{i}),\dots,x_{1})+(-1)^{k+1}\mu^{1}_{\phi}(T^{k}(x_{k},\dots,x_{1}))\]
If one takes appropriate sign twisting data and orients the moduli space \(\mathcal{Q}_{k}\) as in [10], one can see that the sum of the number of boundary points of the Gromov compactification of \(\widetilde{\mathcal{M}}_{k}(E,\phi,J,K,x_{0},\dots,x_{k})\), taken with respect to their canonical orientations, agrees with the signs of the Hochschild differential and \(A_{\infty}\) operations. Since this count is zero, this shows that \(\mathcal{CO}_{\phi}(S)=s+d_{CC^{*}}(T)\), so they represent the same class in Hochschild cohomology (with \(\mathbb{Z}\)-coefficients).
Theorem 4.7.: _The twisted closed-open map \(\mathcal{CO}_{\phi}\) respects the natural product operations on fixed point Floer homology from [11]._
Proof.: Again, the proof is analogous to [10, §4], where we consider a moduli space of parametrized domains. Let \(D_{k}\) be a closed disk with \(k\) incoming boundary marked points \(p_{1},\dots,p_{k}\) and one outgoing marked point \(p_{0}\) fixed at \(1\), and with two negative interior punctures, lying on the imaginary axis at a distance of \(r\in(0,1)\); see Figure 14. As well as strip-like and cylindrical ends, \(D_{k}\) also comes with twisting data: equal to \(i\) and \(j\) for the interior punctures, equal to \(i+j\) on the boundary component between \(p_{k}\) and \(p_{0}\), and zero otherwise. For each \(r\in(0,1)\), for each \(D_{k}\) inside the moduli space \(\mathcal{Q}^{r}_{k}\) of such disks, take \(\pi:E_{r}\to D_{k}\) to be a standard symplectic fiber bundle (with respect to these ends and twisting data). Consider the extended moduli space of \(K\)-perturbed \(J\)-holomorphic sections \(\widetilde{\mathcal{M}}_{i,j,k}(E_{k},J,K,x_{0},x_{1},\dots,x_{k},p_{1},p_{2})\), consisting of pairs \((u,r)\) where \(u:D_{k}\to E_{r}\) is a \(K\)-perturbed \(J\)-holomorphic section of \(E_{r}\) with \(D_{k}\in\mathcal{Q}^{r}_{k}\), satisfying the boundary and asymptotic conditions as described in Definition 4.5.
Figure 13: Strip breaking from \(T\).
As in the proof of Theorem 4.6 above, we consider the Gromov compactification of \(\widetilde{\mathcal{M}}_{i,j,k}(E_{k},J,K)\) as a manifold fibered over \([0,1]\). The components of the Gromov boundary over \(r=0\) and \(r=1\) are illustrated in Figures 15 and 16 respectively.
There are also additional components of the Gromov boundary coming from cylinder breaking, corresponding to the differentials of \(\mathrm{CF}^{*}(\phi^{i})\) and \(\mathrm{CF}^{*}(\phi^{j})\), and from disk bubbling and strip breaking, corresponding to the differential of \(\mathrm{CC}^{*}(\Gamma_{\phi^{i+j}})\). When \((K,J)\) are chosen generically and the virtual dimension is zero, the count of points in this moduli space provides a chain map:
\[h:\mathrm{CF}^{*}(\phi^{i})\otimes\mathrm{CF}^{*}(\phi^{j})\to\mathrm{CC}^{*}( \Gamma_{\phi^{i+j}})\]
via
\[h(p_{1},p_{2})(x_{k},\ldots,x_{1})=\sum_{x_{0}}\#\widetilde{\mathcal{M}}_{i,j,k}(E_{k},J,K,x_{0},x_{1},\ldots,x_{k},p_{1},p_{2})\;[x_{0}]\]
that gives a homotopy between the two multiplication operations. If appropriate sign twisting data are used, and \(\mathcal{Q}_{k}\) is oriented consistently with [1], then, when this count is taken with its canonically determined signs, the orientations of the boundary strata of the Gromov compactification agree with the orientations for the Hochschild differential and \(A_{\infty}\)-operations, and this chain homotopy also holds with \(\mathbb{C}\) coefficients. \(\blacksquare\)
### Lefschetz fibrations
We note that our product formula for fixed point Floer cohomology can be used to compute the Seidel class in more complicated Lefschetz fibrations, which may be of independent interest, _though it is not essential for the rest of the paper_. We give an illustration below.
Figure 16: A degeneration of domains in Figure 14 as \(r\to 1\), corresponding to the product in twisted Hochschild cohomology applied to the twisted closed-open map.
Theorem 4.8.: _Given a single Dehn twist \(\phi\), the Seidel element \(S_{2}\in\mathrm{HF}^{0}(\phi^{2})\) associated to the standard Lefschetz fibration with two critical points constructed in Proposition 3.3 with \(V_{-}=V_{+}\), is equal to \(S^{2}\in\mathrm{HF}^{0}(\phi^{2})\), where \(S\in\mathrm{HF}^{0}(\phi)\) is the Seidel element for \(\phi\)._
Proof.: The proof of this result is analogous to Theorem 4.6, where now we instead stretch the complex structure along two circles, around the two critical values in the base. We construct a family of Lefschetz fibrations \(E_{r}\to\mathbb{C}\) with strip-like ends living over \(\mathbb{C}\) parametrized by \(r\in(0,1)\), and look at sections asymptotic to a given orbit \(p\) of \(\phi^{2}\) at infinity: this is given by taking the construction from Proposition 3.3 of the standard Lefschetz fibration with two critical points and instead gluing in at the punctures at \(\pm 1\) the Lefschetz fibration \(E_{r}\) pulled back from the standard Lefschetz fibration \(E\to\mathbb{C}\) associated to the vanishing cycle \(V\) by the map \(z\to z/r\) for \(r\neq 0\). This has the effect of stretching the gluing region for the Lefschetz fibration over the complement of \(D_{r(1-\varepsilon)}\). Denote the extended moduli space of sections of this Lefschetz fibration (with generic perturbation data) by \(\widetilde{\mathcal{M}}(E_{r},J,K,p)\), consisting of pairs \((u,r)\) where \(u:\mathbb{C}\to E_{r}\) is a \(K\)-perturbed \(J\)-holomorphic section of the Lefschetz fibration \(E_{r}\), horizontally asymptotic to the orbit \(p\) at infinity.
Taking the Gromov compactification of \(\widetilde{\mathcal{M}}(E_{r},J,K,p)\) gives a manifold fibered over \([0,1]\), and the codimension-1 boundary is covered by the union of the images of the natural inclusions of the moduli spaces of \(J\)-holomorphic sections over the domains in Figures 17 and 18.
Figure 17: Domains representing the product structure on \(\mathrm{CF}(\phi)\) applied to the Seidel class.
Figure 18: Domains representing the original count of sections defining the Seidel class for \(\phi^{2}\).
Essentially by the construction in Proposition 3.3, the result of stretching the gluing region gives the standard symplectic fiber bundle over the pair of pants used to define the product map in fixed point Floer homology. We may also have an additional term from possible strip-breaking for the Floer differential for the fixed point Floer homology of \(\phi^{2}\). Taking appropriate sign conventions as in [12, 13], we see that the sum of the number of boundary points of the compactification of \(\widetilde{\mathcal{M}}(p)\), taken with the correct orientation, is zero. This shows that the difference between \(S^{2}\) and \(S_{2}\) in \(\operatorname{CF}(\phi^{2})\) is a Floer coboundary. \(\blacksquare\)
_Remark 12_.: We include this theorem as a point of independent interest. This can be used, for instance, when combined with our computation of homogeneous coordinate rings for \(\phi^{2}\) in Theorem 1.5, to compute the direct limit \(\varinjlim_{d}\operatorname{HF}^{*}(\Sigma_{g},\phi^{2d})\) when we localize at the monodromy of the Lefschetz fibration constructed in Proposition 3.3.
### 4.4 Twisted Hochschild Cohomology
In [13], Ganatra constructs a split-wrapped Fukaya category \(\mathcal{F}^{2}(M)\) of \(M\times\overline{M}\) whose objects consist of product Lagrangians \(L_{i}\times L_{j}\) and the diagonal Lagrangian \(\Delta\). He moreover constructs an \(A_{\infty}\)-functor
\[\mathbf{M}:\mathcal{F}^{2}(M)\to\mathcal{F}(M)-\operatorname{bimod}\]
using a version of quilt techniques, which is full on the subcategory split-generated by product Lagrangians. Under a non-degeneracy assumption on \(M\) (see [13, Definition 1.1]), which is always satisfied for curves \(\Sigma_{g}\), Ganatra uses methods from [1] to show that the diagonal Lagrangian \(\Delta\subseteq M\times\overline{M}\) is split-generated by product Lagrangians \(L_{i}\times L_{j}\) in the split-wrapped category \(\mathcal{F}^{2}(M)\). This implies that \(\mathcal{F}(M)\) must be homologically smooth under the non-degeneracy assumption.
One can define an appropriate version of \(\mathcal{F}^{2}(M)\) in which the Lagrangian correspondence \(\Delta_{\phi}\subseteq M\times\overline{M}\) associated to \(\phi\) is also an object, such that
\[\operatorname{Hom}^{*}_{\mathcal{F}^{2}(M)}(\Delta,\Delta_{\phi})\cong \operatorname{HF}^{*}(\phi)\]
where \(\operatorname{HF}(\phi)\) is an (appropriately wrapped) version of fixed point Floer cohomology. Then it is straightforward to see that \(\Delta_{\phi}\) is split-generated by the same complex of product Lagrangians \(L_{i}\times\phi(L_{j})\). Since \(\mathbf{M}\) is full on the subcategory of \(\mathcal{F}^{2}(M)\) of product Lagrangians, and faithful because of its algebraic properties [13], it follows that there is an isomorphism:
\[[\mathbf{M}^{1}]:\operatorname{Hom}^{*}_{\mathcal{F}^{2}(M)}(\Delta,\Delta_{ \phi})\to\operatorname{Hom}^{*}_{\mathcal{F}(M)-\operatorname{bimod}}( \operatorname{Id},\Gamma_{\phi})\]
Composing this sequence of equivalences gives a twisted closed-open map \(\mathcal{CO}_{\phi}\):
\[\mathcal{CO}_{\phi}:\operatorname{HF}^{*}(\phi)\cong\operatorname{Hom}^{*}_{ \mathcal{F}^{2}(M)}(\Delta,\Delta_{\phi})\stackrel{{[\mathbf{M}^{ 1}]}}{{\longrightarrow}}\operatorname{Hom}^{*}_{\mathcal{F}(M)-\operatorname{ bimod}}(\operatorname{Id},\Gamma_{\phi})\cong\operatorname{HH}^{*}(\mathcal{F}(M), \Gamma_{\phi})\]
It is not difficult to check that this is chain-homotopic to the closed-open map as defined in Definition 4.5 (analogous to the 'unfolding' argument in [13, Proposition 9.7]).
Proposition 4.9.: _Suppose \(\mathcal{C}\) is a homologically smooth \(A_{\infty}\)-category and \(s:\operatorname{id}\to F\) is an ambidextrous natural transformation. Then there is a quasi-isomorphism_
\[\operatorname{CC}^{*}(\mathcal{C}[s^{-1}])\simeq\varinjlim_{d} \operatorname{hom}_{\mathcal{C}-\operatorname{bimod}}(\operatorname{id}, \mathcal{F}^{d})\]
_where \(\mathcal{F}\) is the \(A_{\infty}\)-bimodule associated to \(F\) and the direct limit is taken along composition with \(s\)._
Here \(\mathcal{C}[s^{-1}]\) denotes the localization of \(\mathcal{C}\) at the natural transformation \(s\), which is defined to be the \(A_{\infty}\)-quotient category of \(\mathcal{C}\) by the full subcategory \(\mathcal{A}\) of cones of \(s\), in the sense of [13, 14]. The quotient category \(\mathcal{C}/\mathcal{A}\) has the same objects as \(\mathcal{C}\) but morphisms are given by the bar complex:
\[\hom_{\mathcal{C}/\mathcal{A}}(X,Y)=\bigoplus_{k\geq 0}\bigoplus_{A_{1},\ldots,A _{k}\in\mathcal{A}}\hom_{\mathcal{C}}(A_{k},Y)\otimes\cdots\otimes\hom_{ \mathcal{C}}(A_{1},A_{2})[1]\otimes\hom_{\mathcal{C}}(X,A_{1})[1]\]
where the \(k=0\) term is \(\hom_{\mathcal{C}}(X,Y)\) itself and the \(A_{\infty}\) operations are given by summation over all ways of collapsing the complex. One moreover has the notion of a quotient module: if \(\mathcal{M}\) is a right \(\mathcal{C}\)-module, one may define a right \(\mathcal{C}/\mathcal{A}\)-module [11] denoted \(\mathcal{M}/\mathcal{A}\) via:
\[(\mathcal{M}/\mathcal{A})(X)=\bigoplus_{k\geq 0}\bigoplus_{A_{1},\ldots,A_{k} \in\mathcal{A}}\mathcal{M}(A_{k})\otimes\cdots\otimes\hom_{\mathcal{C}}(A_{1},A_{2})[1]\otimes\hom_{\mathcal{C}}(X,A_{1})[1]\]
where again the \(k=0\) term is \(\mathcal{M}(X)\) and the \(A_{\infty}\)-module operations come from summation over all ways of collapsing the complex. Similarly, we use \(\mathcal{A}\backslash\mathcal{N}\) to denote the quotient of a left \(\mathcal{C}\)-module \(\mathcal{N}\) to give a left \(\mathcal{C}/\mathcal{A}\)-module; and \(\mathcal{A}\backslash\mathcal{B}/\mathcal{A}\) to denote the quotient of a \(\mathcal{C}\)-bimodule \(\mathcal{B}\) to give a \(\mathcal{C}/\mathcal{A}\)-bimodule, defined in an analogous fashion.
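To fix ideas (and suppressing signs, which depend on the conventions chosen for the quotient construction), the lowest-length piece of the structure maps of \(\hom_{\mathcal{C}/\mathcal{A}}(X,Y)\) can be made explicit: on a length-one element \(b\otimes a\in\hom_{\mathcal{C}}(A_{1},Y)\otimes\hom_{\mathcal{C}}(X,A_{1})[1]\), the differential either collapses the whole word or acts within one factor,
\[\mu^{1}_{\mathcal{C}/\mathcal{A}}(b\otimes a)=\pm\,\mu^{2}_{\mathcal{C}}(b,a)\ \pm\ \mu^{1}_{\mathcal{C}}(b)\otimes a\ \pm\ b\otimes\mu^{1}_{\mathcal{C}}(a),\]
with the first term landing in the length-zero summand \(\hom_{\mathcal{C}}(X,Y)\).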
We recall the following properties of \(A_{\infty}\)-quotients, which follow directly from the definitions:
**Lemma 4.10**.: _Suppose \(\mathcal{A}\) is a full subcategory of \(\mathcal{C}\), and \(X,Z\) are objects of \(\mathcal{C}\): then_
1. _The right Yoneda module_ \(\mathcal{Y}^{r}_{[X]}\) _of_ \([X]\) _in_ \(\mathcal{C}/\mathcal{A}\) _is quasi-isomorphic to the quotient module_ \(\mathcal{Y}^{r}_{X}/\mathcal{A}\)_;_
2. _The left Yoneda module_ \(\mathcal{Y}^{\ell}_{[Z]}\) _of_ \([Z]\) _in_ \(\mathcal{C}/\mathcal{A}\) _is quasi-isomorphic to the quotient module_ \(\mathcal{A}\backslash\mathcal{Y}^{\ell}_{Z}\)_;_
3. _The quotient bimodule_ \(\mathcal{A}\backslash\Delta_{\mathcal{C}}/\mathcal{A}\) _is quasi-isomorphic to the diagonal bimodule_ \(\Delta_{\mathcal{C}/\mathcal{A}}\)_;_
4. _The quotient map_ \(\mathcal{M}\to\mathcal{M}/\mathcal{A}\) _from_ \(\mathcal{C}\)_-modules to_ \(\mathcal{C}/\mathcal{A}\)_-modules is an exact functor._
_Hence if \(\mathcal{C}\) is homologically smooth, then so is \(\mathcal{C}/\mathcal{A}\)._
Proof.: (of Proposition 4.9) For a homologically smooth \(A_{\infty}\)-category \(\mathcal{C}\), every \(A_{\infty}\)-functor \(F:\mathcal{C}\to\mathcal{C}\) induces a perfect \(\mathcal{C}\)-bimodule \(\mathcal{F}\), so by Lemma 4.10 it suffices to show that if \(\mathcal{M},\mathcal{N}\) are in the subcategory of \(\mathcal{C}-\text{bimod}\) that is split-generated by tensor products of Yoneda bimodules, one can compute morphisms by
\[\hom_{\mathcal{C}/\mathcal{A}-\text{bimod}}(\mathcal{A}\backslash\mathcal{M}/ \mathcal{A},\mathcal{A}\backslash\mathcal{N}/\mathcal{A})\simeq\varinjlim_{d} \hom_{\mathcal{C}-\text{bimod}}(\mathcal{M},\mathcal{N}\otimes\mathcal{F}^{d})\]
The Künneth theorem for bimodules [10, Proposition 2.13] says
\[\hom_{\mathcal{C}/\mathcal{A}-\text{bimod}}(\mathcal{Y}^{\ell}(X)\otimes\mathcal{Y}^{r}(Z),\mathcal{Y}^{\ell}(X^{\prime})\otimes\mathcal{Y}^{r}(Z^{\prime}))\simeq\hom_{\mathcal{C}/\mathcal{A}}(X,X^{\prime})\otimes\hom_{\mathcal{C}/\mathcal{A}}(Z,Z^{\prime})\]
while for \(\mathcal{C}/\mathcal{A}\) we know from [17, §1] that
\[\operatorname{Hom}_{\mathcal{C}/\mathcal{A}}^{*}(Y,Y^{\prime})\cong\varinjlim_{d}\operatorname{Hom}_{\mathcal{C}}^{*}(Y,F^{d}(Y^{\prime}))\]
Since direct limits commute with tensor products and with taking cohomology, we have
\[\hom_{\mathcal{C}/\mathcal{A}-\mathrm{bimod}}(\mathcal{Y}^{\ell}(X)\otimes\mathcal{Y}^{r}(Z),\mathcal{Y}^{\ell}(X^{\prime})\otimes\mathcal{Y}^{r}(Z^{\prime}))\simeq\varinjlim_{d_{1},d_{2}}\hom_{\mathcal{C}}(X,F^{d_{1}}(X^{\prime}))\otimes\hom_{\mathcal{C}}(Z,F^{d_{2}}(Z^{\prime}))\]
and using the fact that the diagonal sequence is cofinal, along with the tensor-hom adjunction, we get
\[\hom_{\mathcal{C}/\mathcal{A}-\mathrm{bimod}}(\mathcal{Y}^{\ell}(X)\otimes\mathcal{Y}^{r}(Z),\mathcal{Y}^{\ell}(X^{\prime})\otimes\mathcal{Y}^{r}(Z^{\prime}))\simeq\varinjlim_{d}\hom_{\mathcal{C}-\mathrm{bimod}}(\mathcal{Y}^{\ell}(X)\otimes\mathcal{Y}^{r}(Z),\mathcal{Y}^{\ell}(X^{\prime})\otimes\mathcal{Y}^{r}(Z^{\prime})\otimes\mathcal{F}^{d})\]
which completes the proof.
Proof.: (of Theorem 4.1) By [12, Lemma 3.16], the morphisms in the cohomology category of \(\mathcal{F}(M^{0})\) are given by
\[\hom_{\mathcal{F}(M^{0})}(X_{0},X_{1})=\varinjlim_{d}\hom_{\mathcal{F}(M)}(X_ {0},\phi^{d}(X_{1}))\]
where the connecting maps are given by Seidel's natural transformation \(s\in\hom(X_{1},\phi(X_{1}))\). By Proposition 4.9, we know that
\[\operatorname{HH}^{*}(\mathcal{F}(M^{0}))\cong\varinjlim_{d}\operatorname{HH}^{*}(\mathcal{F}(M),\Gamma_{\phi^{d}})\]
and by Theorem 4.6, we know that the twisted closed-open map takes the Seidel element \(S\) to Seidel's natural transformation; since \(\mathcal{CO}_{\phi}\) moreover respects multiplicative structures, we know
\[\operatorname{HH}^{*}(\mathcal{F}(M^{0}))\cong\varinjlim_{d}\operatorname{HF}^{*}(\phi^{d})\]
as claimed.
## 5 Background on the Product on Fixed Point Floer Homology
We begin by describing the product on fixed point Floer cohomology. We note that the results we list here come from taking the _cohomology_ of the computations in our earlier paper [13] instead of homology. In other words, our co-product in [13]
\[\Delta:\HF_{*}(\phi\circ\psi)\to\HF_{*}(\phi)\otimes\HF_{*}(\psi)\]
is the dual of the product structure
\[\cdot:\HF^{*}(\phi)\otimes\HF^{*}(\psi)\to\HF^{*}(\phi\circ\psi)\]
in this paper. We explain this dualization process as a remark 13, but we first describe the results. Recall \(\Sigma_{g}\) denotes a closed Riemann surface of genus g. Let \(\phi\) denote a Dehn twist around a circle satisfying conditions of our previous paper [13] (in the context of this paper see Remark 3). Then Let \(N=[0,1]\times S^{1}\) denote a Weinstein neighborhood of the essential circle. In this region (which we call the twist region) the symplectic for \(\omega=dx\wedge dy\), and the Dehn twist \(\phi^{d}\) can be written as
\[(x,y)\to(x,y-dx)\]
Outside the Dehn twist region \(N\) the product is trivial (for the critical points of the Morse functions aside from \(e_{0}^{d},e_{d}^{d},h_{0}^{d},h_{d}^{d}\)).
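As a minimal sanity check on this local model, the twist preserves the symplectic form on \(N\), and its fixed points in the twist region lie exactly over \(d+1\) circles:
\[(\phi^{d})^{*}(dx\wedge dy)=dx\wedge(dy-d\,dx)=dx\wedge dy,\qquad \phi^{d}(x,y)=(x,y)\iff dx\in\mathbb{Z}\iff x=\tfrac{j}{d},\ \ j=0,1,\dots,d.\]
After a Morse-Bott perturbation each of these circles contributes one elliptic and one hyperbolic fixed point, which is consistent with the generators \(e_{j}^{d}\) and \(h_{j}^{d}\) (\(j=0,\dots,d\)) used throughout this section.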
_Remark 13_ (Homology vs. cohomology).: We briefly explain how to arrive at the previous product relations by dualizing the coproduct structure in [13].
The chain complex for homology is given by \((\mathrm{CF}_{*}(\phi^{d}),\partial)\) where the differential counts \(J\)-holomorphic cylinders between critical points. The cochain complex \((\mathrm{CF}^{*}(\phi^{d}),d)\) is defined to be
\[(\mathrm{CF}^{*}(\phi^{d}),d):=\mathrm{Hom}(\mathrm{CF}_{*}(\phi^{d}),\mathbb{ C})\]
The differential \(d\) is defined by the following. For \(\alpha\in\operatorname{CF}^{*}(\phi^{d}),a\in\operatorname{CF}_{*}(\phi^{d})\) we define
\[\langle d\alpha,a\rangle:=\langle\alpha,\partial a\rangle.\]
Geometrically, the differential \(d\) on cohomology counts holomorphic cylinders flowing in the opposite direction. By the universal coefficient theorem we have \(\operatorname{HF}^{*}(\phi^{d})\cong\operatorname{Hom}(\operatorname{HF}_{*}(\phi^{d}),\mathbb{C})\). Let \(\Delta:\operatorname{CF}_{*}(\phi^{m+n})\to\operatorname{CF}_{*}(\phi^{n})\otimes\operatorname{CF}_{*}(\phi^{m})\) denote the coproduct; then the product on cohomology is defined by
\[\langle\alpha\cdot\beta,a\rangle=\langle\Delta a,\alpha\otimes\beta\rangle.\]
Geometrically, the product structure of \(\operatorname{HF}^{*}\) counts pairs of pants with direction opposite to the pairs of pants counted by the co-product \(\Delta\). Applying this definition to the computations in [10] recovers the relations listed above.
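Concretely, since we work with the basis of fixed points, this dualization simply transposes structure constants: expanding in the dual basis \(\{a^{\vee}\}\),
\[\alpha\cdot\beta=\sum_{a}\langle\Delta a,\alpha\otimes\beta\rangle\,a^{\vee},\]
so the coefficient of \(a^{\vee}\) in \(b^{\vee}\cdot c^{\vee}\) equals the coefficient of \(b\otimes c\) in \(\Delta a\).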
_Remark 14_ (Signs).: Even though the computations in [10] were done with \(\mathbb{Z}_{2}\) coefficients, we can choose a coherent orientation so that the above relations hold with \(\mathbb{Z}\) (or in our case of interest, \(\mathbb{C}\)) coefficients. We give a brief outline of this below.
We follow Wendl's expositions [20] for coherent orientations. Recall that to specify a coherent orientation for \(\operatorname{HF}^{*}(\phi^{m})\) and the underlying product structure, it suffices to consider the asymptotic operator \(A\) associated to each Reeb orbit in \(\operatorname{HF}^{*}(\phi^{m})\). Consider the trivial bundle \(\mathbb{C}^{2}\to\mathbb{C}\), equipped with a Cauchy-Riemann type operator \(D_{A}\) that is asymptotic to the operator \(A\) as \(r\to\infty\) in the base \(\mathbb{C}\) (which we think of as a negative puncture). We choose any orientation in the determinant line bundle of \(D_{A}\), which we write as \(\mathfrak{o}_{A}\in\det(D_{A})\); the collection of such choices specifies a coherent orientation.
Now we look at the Reeb orbits involved in the computation of \(\operatorname{HF}^{*}(\phi^{m})\); there are two kinds of Reeb orbits. There are \(\gamma_{x}\), which correspond to Morse critical points \(x\in\Sigma_{g}\setminus N\) with associated asymptotic operators \(A_{\gamma_{x}}\), and Reeb orbits in the Dehn twist region \(N\) of the form \(e_{j}^{m}\) and \(h_{j}^{m}\). Now we can choose our trivializations so that all \(e_{j}^{m}\)'s (resp. all \(h_{j}^{m}\)'s) have the same asymptotic operator, which we denote by \(A_{e}\) (resp. \(A_{h}\)).
If we restrict our attention purely to the Dehn twist region \(N\), then we realize that we are computing the product on the symplectic cohomology of \(T^{*}S^{1}\). This was already observed in [10]: we can view the Dehn twist region as a subset of \(T^{*}S^{1}\), and the \(J\)-holomorphic curve equation for our count of sections in the bundle of the form \(B_{0}\times T^{*}S^{1}\to B_{0}\) (here \(B_{0}\) is the thrice-punctured sphere) is exactly the Hamiltonian Floer equation computing the pair-of-pants product on the symplectic cohomology of \(T^{*}S^{1}\). From its isomorphism with the homology of the free loop space of \(S^{1}\), we can choose \(\mathfrak{o}_{h}\in\det(D_{A_{h}})\) and \(\mathfrak{o}_{e}\in\det(D_{A_{e}})\) so that all the pairs of pants contained in the Dehn twist region \(N\) are counted with sign \(+1\). Now for the orbits in the Morse region, choose \(\mathfrak{o}_{\gamma}\) so that the cohomology class \(f^{m}\) multiplies other Reeb orbits in the Morse region as the identity. Then we need to verify that the two coherent orientations agree when we take \(\gamma=e_{0}^{m},e_{m}^{m},h_{0}^{m},h_{m}^{m}\). We observe from the requirements we imposed on \(\mathfrak{o}_{e}\) on the Dehn twist region and the Morse region that they agree for \(\gamma=e_{0}^{m},e_{m}^{m}\). If they do not agree for \(\gamma=h_{0}^{m},h_{m}^{m}\), we simply reverse \(\mathfrak{o}_{h}\) and observe this has no effect on the sign of the curves we count in the Dehn twist region.
### Fixed point Floer homology for Dehn twists on punctured Riemann surfaces
In this subsection we outline the difference between Dehn twists on a closed Riemann surface \(\Sigma_{g}\) and on a punctured Riemann surface \(\Sigma_{g,k}\). The main difference is that near each puncture \(p_{i}\) we need to consider wrapping in a way that is similar to symplectic cohomology.
For simplicity we assume there is only one puncture \(p\). Choose a cylindrical neighborhood \((x,y)\in[0,\infty)\times S^{1}\) around \(p\). Then \(\phi\) looks like the time-1 map of the Hamiltonian flow of \(H=\frac{1}{2}x^{2}\) in this neighborhood (then perturbed slightly to \(H_{0}\), in order to break the Morse-Bott degeneracy). The fixed points are located over the circles \(x=0,1,2,\cdots\). For each non-negative integer \(i\) there are two fixed points: \(u_{i}\), an elliptic fixed point of Conley-Zehnder index \(-1\), and \(v_{i}\), a hyperbolic fixed point of Conley-Zehnder index \(0\).
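For orientation, here is the local computation behind this description; we use the convention \(\iota_{X_{H}}\omega=-dH\) with \(\omega=dx\wedge dy\) (a different sign convention only reverses the direction of the wrapping):
\[dH=x\,dx,\qquad X_{H}=x\,\partial_{y},\qquad \phi_{H}^{1}(x,y)=(x,y+x),\]
so the time-\(1\) map fixes precisely the Morse-Bott circles \(\{x=i\}\), \(i=0,1,2,\cdots\), and perturbing \(H\) to \(H_{0}\) breaks each circle into the two fixed points \(u_{i}\) and \(v_{i}\) described above.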
We need to further describe the Floer data near the puncture \(p\). We make our conventions consistent with the _universal and consistent choice_ of Floer data described in [10], because we use the open-closed map as defined in [10] and want to import the technology developed therein. In the following, we briefly recall the Floer data described in [10] and point out the differences with the setup in [11], where no punctures are dealt with. To define the product \(\mathrm{HF}^{*}(\phi^{m})\otimes\mathrm{HF}^{*}(\phi^{n})\to\mathrm{HF}^{*}(\phi^{m+n})\), let \(B\) be the thrice-punctured sphere with chosen cylindrical coordinates \((s_{0},t_{0})\in[0,\infty)\times S^{1}\) for the positive puncture, and \((s_{i},t_{i})\in(-\infty,0]\times S^{1}\) for the \(i\)-th negative puncture (\(i=1,2\)). To define the product structure, choose a closed one-form \(\beta\) on \(B\) that is equal to \(mdt_{1}\) and \(ndt_{2}\) for the two negative punctures and \((m+n)dt_{0}\) for the positive puncture (likewise, if we want to define the coproduct structure, we let \(\beta\) be \((m+n)dt_{0}\) for the negative puncture and \(mdt_{1}\) and \(ndt_{2}\) for the two positive punctures). We then make the product bundle \(B\times[0,\infty)_{x}\times S^{1}_{y}\) into a symplectic fiber bundle, with the fiberwise symplectic form \(\omega_{\Sigma}=dx\wedge dy+d(H_{0}\beta)\). Remark that near the three punctures of \(B\), the parallel transports along the \(t_{*}\)-circles are precisely the time-1 maps of the Hamiltonians \(mH_{0}\), \(nH_{0}\) and \((m+n)H_{0}\) respectively. We also fix a function \(a:B\to[\min(m,n),m+n]\) that is equal to \(m\) on the first negative puncture, \(n\) on the second negative puncture, and \(m+n\) on the positive puncture (likewise, if we want to define the coproduct structure, \(a\) is assumed to be \(m\) and \(n\) on the two positive punctures, and \(m+n\) on the negative puncture). View the neighborhood of \(p\) in \(\Sigma_{g,1}\) as the Liouville manifold \([0,\infty)_{x}\times S^{1}_{y}\) with the Liouville form \(xdy\), so the Liouville flow \(\psi^{\rho}\) after time \(\log\rho\) is given by \((x,y)\mapsto(\rho x,y)\) near the puncture \(p\).
The almost complex structure \(J\) on the bundle \(X=B\times[0,\infty)\times S^{1}\) should satisfy the following. Fix a small positive number \(\delta\). In a fixed small neighborhood \(B\times[0,\delta)\times S^{1}\) of the slice \(\{x=0\}\), \(J\) is a fibration-compatible almost complex structure induced by the almost complex structure \(J_{0}\) on \([0,\infty)_{x}\times S^{1}_{y}\) which sends \(\partial_{x}\) to \(\partial_{y}\). In the region \(B\times[2\delta,\infty)\times S^{1}\), \(J\) is the fibration-compatible almost complex structure that is induced by the almost complex structure on the vertical distribution, which is equal to \((\psi^{a(z)})^{*}J_{0}\) on the fiber over \(z\in B\). We note that in order to apply the _local energy inequality_ described in [11] to the slice \(\{x=0\}\), we only need the almost complex structure to be the fibration-compatible one that is induced from \(J_{0}\)_in a neighborhood of_ the slice. Consequently, if we make the above requirements for the almost complex structure, the same "no crossing" result as described in [11] holds. In particular, the no-crossing result tells us that \(J\)-holomorphic curves with inputs and outputs both below \(x=0\) remain entirely within that region, and the analogous statement holds for \(J\)-holomorphic curves with inputs and outputs above \(x=0\). Furthermore, via homology considerations (together with "no crossing"), we cannot have \(J\)-holomorphic curves asymptotic to critical points both above \(x=0\) and below \(x=0\). Hence as far as the product is concerned, the product splits into the product of two distinct regions (with tiny overlap around \(x=0\)): we get the Morse-theoretic product for the region where \(x\leq 0\) and away from the Dehn twists, and we get the product for the non-negative sector of the symplectic cohomology of \(T^{*}S^{1}\) for \(x\geq 0\).
Here we use superscripts such as \(u_{i}^{n}\) and \(v_{i}^{n}\) to indicate that we are thinking of the fixed points as living in \(\mathrm{HF}(\phi^{n})\). Similar to the unpunctured case, the fixed point Floer cohomology \(\mathrm{HF}^{*}(\Sigma_{g,k},\phi^{d})\) is isomorphic to \(H^{*}(\Sigma^{\prime}_{g,k})\oplus(\oplus_{j=1}^{d-1}H^{*}(S^{1}))\oplus(\oplus_{i=1}^{\infty}\mathbb{C}\langle u_{i}^{d},v_{i}^{d}\rangle)\). Here we think of \(u_{0}^{d}\) and \(v_{0}^{d}\) as being (the duals of) critical points in \(\Sigma^{\prime}_{g,k}\). In particular, the cohomology class \([f^{d}]\in H^{0}(\Sigma^{\prime}_{g,k})\) is given by \([e_{0}^{d}+e_{d}^{d}+u_{0}^{d}]\), and the cohomology class \([g^{d}]\in H^{1}(\Sigma^{\prime}_{g,k})\) is given by \([h_{0}^{d}+h_{d}^{d}]\). There is another special cohomology class \([h_{0}^{d}-v_{0}^{d}]\in H^{1}(\Sigma^{\prime}_{g,k})\), which we denote by \([\varphi^{d}]\). This new cohomology class comes from the fact that adding a new puncture changes the first homology group of the surface. The new product relations are given by
\[[u_{i}^{m}]\cdot[u_{j}^{n}]=[u_{i+j}^{m+n}],\quad[u_{i}^{m}]\cdot[v_{j}^{n}]=[ v_{i+j}^{m+n}],\quad[v_{i}^{m}]\cdot[v_{j}^{n}]=0,\]
\[[f^{m}]\cdot[u_{i}^{n}]=[u_{i+n}^{m+n}],\quad[f^{m}]\cdot[v_{i}^{n}]=[v_{i}^{m +n}],\]
\[[g^{m}]\cdot[u_{i}^{n}]=[v_{i}^{m+n}],\quad[g^{m}]\cdot[v_{i}^{n}]=0,\]
\[[e_{i}^{m}]\cdot[u_{j}^{n}]=[e_{i}^{m}]\cdot[v_{j}^{n}]=[h_{i}^{m}]\cdot[u_{j }^{n}]=[h_{i}^{m}]\cdot[v_{j}^{n}]=0,\]
\[[f^{m}]\cdot[\varphi^{n}]=[\varphi^{m+n}]+[h_{m}^{m+n}],\quad[e_{i}^{m}]\cdot [\varphi^{n}]=[h_{i}^{m+n}],\quad[h_{i}^{m}]\cdot[\varphi^{n}]=0,\]
\[[u_{i}^{m}]\cdot[\varphi^{n}]=-[v_{i}^{m+n}],\quad[v_{i}^{m}]\cdot[\varphi^{n }]=0,\]
and the product relations between \([f^{d}],[g^{d}]\) and \([e_{i}^{m}],[h_{i}^{m}]\) are the same as in the unpunctured case. These product relations come from the same calculations that were performed in [13]. The same no-crossing lemma tells us that when we think of \(\Sigma_{g,k}\) as the union of the Morse region, the Dehn twist region, and the puncture region, holomorphic curves with inputs/outputs in the puncture region cannot enter the Morse region and vice versa (with the exception of \(u_{0}^{d}\) and \(v_{0}^{d}\), which we can think of as belonging to either the puncture or the Morse region). Carrying out the same computation as in [13] gives us the above results.
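As a small consistency check (using only the displayed relations), associativity holds on triples involving the new class \([\varphi]\); for instance,
\[([u_{i}^{m}]\cdot[u_{j}^{n}])\cdot[\varphi^{p}]=[u_{i+j}^{m+n}]\cdot[\varphi^{p}]=-[v_{i+j}^{m+n+p}],\qquad [u_{i}^{m}]\cdot([u_{j}^{n}]\cdot[\varphi^{p}])=-[u_{i}^{m}]\cdot[v_{j}^{n+p}]=-[v_{i+j}^{m+n+p}],\]
so the two bracketings agree.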
_Remark 15_.: In the case where \(\Sigma_{g,k}\) has punctures, we need \(\phi\) to wrap around the punctures, and in this case there are infinitely many generators in \(\mathrm{HF}_{*}(\Sigma_{g,k},\phi)\). To avoid subtleties with duals of infinite dimensional vector spaces, we simply define the cochain complex \(\mathrm{CF}^{*}(\phi^{d})\) to be generated by the fixed points of \(\phi^{d}\), with the differential \(d\) and the product structure \(\cdot\) defined in the same way as in Remark 13.
_Remark 16_.: It is also possible to compute the fixed point Floer cohomology using a direct enumeration of \(J\)-holomorphic sections by using the standard almost complex structure near each of the punctures; this is more similar to the approach in [13], and similar techniques will give the same cohomology-level product relations. However, we shift our conventions to agree with those of Ganatra [14] in order to use the machinery developed in that paper.
## 6 Computation of the Seidel Class
In this section we give an explicit computation of the Seidel class in the closed setting and in the punctured setting. Throughout this section, we fix a fibration-compatible almost complex structure \(J\) on the Lefschetz fibration, which is induced by the complex structure \(j_{0}\) on the Riemann surface which, inside the neighborhood around the circle, sends \(\partial_{x}\) to \(\partial_{y}\). Note that by Remark 6, to compute the Seidel class for \(\Sigma_{g,k}\), near a puncture \(p_{i}\) where there are cylindrical neighborhoods \(S^{1}\times[0,\infty)\), we pass to the open Riemann surface with boundary by restricting to \(S^{1}\times[0,\varepsilon)\) for each of these punctures. We call the resulting Riemann surface \(\widetilde{\Sigma}_{g,k}\). We then form the exact Lefschetz fibration with boundary, with fiber \(\widetilde{\Sigma}_{g,k}\), and the monodromy near each of these neighborhoods is given by the time-\(1\) map of the Hamiltonian \(H(s,t)=s^{2}/2\).
Theorem 6.1.: _For either \(\operatorname{HF}^{*}(\Sigma_{g},\phi)\) or \(\operatorname{HF}^{*}(\Sigma_{g,k},\phi)\), the Seidel class \(S\in\operatorname{HF}^{*}(\phi)\) is given by \(\pm[f^{1}]\)._
Here is the sketch of the proof of Theorem 6.1. We first make this computation in the exact setting, i.e. where the fiber is \(\widetilde{\Sigma}_{g,k}\). We show by energy estimates that all sections counted by the Seidel class are horizontal. We then construct the horizontal sections by hand and show they are transversely cut out. Finally, using an index argument, we show that the sections counted in the computation of the Seidel class in the non-exact case can be reduced to the exact case.
### The exact case
We first compute the index of the curves counted by the Seidel class in the exact case. Let \(s:\mathbb{C}\to E\) denote a \(J\)-holomorphic section counted by the Seidel element. Let \(pt\in\Sigma\) be a generic point away from the Dehn twist region on \(\Sigma\), and consider the "constant" section \(c:\mathbb{C}\to E\) given by \(c(z)=pt\). For any section \(u\) of \(E\) that is asymptotic to an orbit \(\gamma_{x}\), we have the following definition of the wrapping number (see Definition 4.2 of [1]):
Definition 6.2.: _We define \(\eta(s):=s\cap c\in\mathbb{Z}\) to be the wrapping number of the section \(s\)._
The wrapping number \(\eta(s)\) of a section \(s\) is always non-negative. To see this, notice that in the definition of \(\eta(s)\), the intersection number does not depend on the choice of \(pt\). In particular, we can choose \(pt\) to be a critical point of \(H_{0}\). Under such a choice, the section \(c\) is \(J\)-holomorphic, and hence \(\eta(s)\geq 0\) by positivity of intersections. Moreover, any \(J\)-holomorphic section \(s\) that is not \(c\) with \(\eta(s)=0\) is disjoint from \(c\).
Proposition 6.3.: _Let \(x\in\operatorname{Fix}(\phi)\) denote a fixed point of \(\phi\) and let \(\gamma_{x}\) be the corresponding orbit in the mapping torus. Suppose a section \(u\) is asymptotic to \(\gamma_{x}\) at \(r=\infty\); then the Fredholm index of the section \(u\) is given by_
\[\operatorname{ind}(u)=1+CZ^{\tau}(\gamma_{x})+(4-4g)\eta([u]).\]
_Hence the only fixed points of \(\phi\) that can contribute to the Seidel class are \(e_{0}^{1}\), \(e_{1}^{1}\) or \(u_{0}^{1}\)._
Proof.: Away from the critical locus, the bundle is a product bundle, and we use the trivialization \(\operatorname{Ver}=T((-1,1)_{x}\times S_{y}^{1})\cong\mathbb{R}^{2}\) to trivialize the vertical distribution. The Fredholm index formula reads:
\[\operatorname{ind}(u) =-\chi(u)+2\langle c_{1}^{\tau}(TE),[u]\rangle+CZ^{\tau}(\gamma_ {x})\] \[=-1+2(1+\langle c_{1}^{\tau}(\operatorname{Ver}),[u]\rangle)+CZ^ {\tau}(\gamma_{x})\] \[=1+2\langle c_{1}^{\tau}(\operatorname{Ver}),[u]\rangle+CZ^{ \tau}(\gamma_{x}).\]
The term \(\langle c_{1}^{\tau}(\mathrm{Ver}),[u]\rangle\) is equal to the wrapping number \(\eta([u])\) multiplied by \(2-2g\), whose proof (in a slightly different setting) can be found in [13, Lemma 5.6]. So we have
\[\mathrm{ind}(u)=1+CZ^{\tau}(\gamma_{x})+(4-4g)\eta([u]).\]
Since \(g\geq 2\), \(\eta([u])\geq 0\) and \(CZ^{\tau}(\gamma_{x})\in\{-1,0,1\}\), we see that the only possibility for \(\mathrm{ind}(u)\) to be zero is \(CZ^{\tau}(\gamma_{x})=-1\) and \(\eta([u])=0\). By our setting, the only fixed points that have Conley-Zehnder index \(-1\) are \(e_{0}^{1}\), \(e_{1}^{1}\) and \(u_{0}^{1}\), which proves the proposition.
Proposition 6.4.: _The only sections of \(E\) that are asymptotic to \(e_{0}^{1}\), \(e_{1}^{1}\) or \(u_{0}^{1}\) at \(r=\infty\) are horizontal. The horizontal sections exist and are automatically transversely cut out._
Proof.: We use the same notations from Proposition 3.1, and let \(G(r,x,y)=(g(\mu)-1)\widetilde{R}_{\phi(r)}(\mu)+H_{0}(x,y)\), and let \(\lambda\) be a primitive of \(\omega_{\Sigma}\) that is equal to \(xdy\) in the twist region. We know that there is a one-form \(\alpha\) on \(E\) such that \(\omega=d(\lambda+\alpha)\), and \(\alpha=G\,d\theta\) away from the critical locus. For every section \(u\), we consider the vertical energy
\[E(u)=\frac{1}{2}\int|\partial_{s}u-\partial_{s}^{\#}|_{g_{J}}^{2}+|\partial_{ t}u-\partial_{t}^{\#}|_{g_{J}}^{2}ds\wedge dt,\]
where \((s,t)\) is any local conformal coordinate of the base, \(g_{J}\) is the metric induced by \(\omega\) and the almost complex structure \(J\), and \(\partial_{s}^{\#}\), \(\partial_{t}^{\#}\) are the horizontal lifts of the vector fields. Using the polar coordinate \((r,\theta)\), the above expression can be re-written as
\[E(u)=\int u^{*}\omega-\int\frac{\partial G}{\partial r}dr\wedge d\theta.\]
Now if \(u\) is any section that is asymptotic to the fixed points \(e_{0}^{1}\), \(e_{1}^{1}\) or \(u_{0}^{1}\) at \(r=\infty\), then the term \(\int u^{*}\omega\) is equal to \(u^{*}\lambda\) integrated at the boundary of \(u\), which is zero. Away from the critical locus, the integrand \(\frac{\partial G}{\partial r}\) is non-negative. So we conclude that the second term \(\int\frac{\partial G}{\partial r}dr\wedge d\theta\geq 0\). Thus, for any \(J\)-holomorphic section \(u\) that is asymptotic to \(e_{0}^{1}\), \(e_{1}^{1}\) or \(u_{0}^{1}\) at \(r=\infty\), we have the vertical energy
\[E(u)\leq 0,\]
and this can only happen when \(E(u)=0\) and \(u\) is a horizontal section.
There exist horizontal sections asymptotic to \(e_{0}^{1}\) and \(e_{1}^{1}\) corresponding to the critical points of \(H_{0}\) near the circles \(\{x=\pm\lambda\}\). To see this, locally we can write the fiberwise symplectic two-form \(\omega\) as \(\omega_{\Sigma}+dH_{0}\wedge d\theta\), so the section given by \(\mathbb{C}\times\{p\}\), where \(p\) is a critical point of \(H_{0}\), gives the existence of horizontal sections. A simple energy argument shows that for \(e_{0}^{1}\), \(e_{1}^{1}\) and \(u_{0}^{1}\), such a horizontal section with the desired asymptotics is unique.
That the horizontal sections are transversely cut out follows from the automatic transversality criterion given in [25, Theorem 1]. To be precise, we observe that the Fredholm index is zero, \(c_{N}=-1\), and \(Z(du)=0\). Hence the automatic transversality criterion is satisfied.
_Remark 17_.: Since all of the sections involved in computing the Seidel class are transverse, there is no need to further perturb the almost complex structure \(J\), for example as in Definition 4.4, and hence the above proposition finishes the proof of Theorem 6.1 in the exact case.
_Remark 18_.: Determining whether the end result is \(+[f^{1}]\) or \(-[f^{1}]\) would necessitate working through a coherent choice of orientations. For us we assume the Seidel class is \([f^{1}]\) for the rest of the article, and make the observation that this choice does not in fact affect the resulting ring in Theorem 7.6. The signs and orientations for the Seidel class are the subject of future work by Shaoyun Bai and Paul Seidel [BS].
### The non-exact case
We now deduce the computation of the Seidel class for closed Riemann surfaces from the case of punctured Riemann surfaces.
We consider the standard Lefschetz fibration constructed in Proposition 3.1 and count sections of this fibration satisfying conditions described in Definition 4.4. Recall from Proposition 6.3, for a generic almost complex structure \(J\), we have
\[\operatorname{ind}(s)=1+CZ^{\tau}(\gamma_{x})+(4-4g)\eta([s]),\]
where \(CZ^{\tau}(\gamma_{x})\in\{-1,0,1\}\) and \(\eta(s)\geq 0\).
The next proposition follows from the above observation:
Proposition 6.5.: _Any section \(s\) counted by the Seidel element must satisfy \(\eta(s)=0\) and \(CZ^{\tau}(\gamma_{x})=-1\)._
Hence we see that for generic \(J\), all sections counted by the Seidel element must satisfy \(\eta=0\). If we pick the constant section \(c:\mathbb{C}\to E\) to be \(c(z)=pt\) with \(pt\) a critical point of \(H_{0}\), then \(\eta(u)=0\) implies that \(u\) is disjoint from \(c\), as we observed before. So we can consider the new bundle \(E^{\prime}=E\setminus c\) with the new generic fiber \(\Sigma_{g,1}=\Sigma_{g}\setminus pt\). On \(\Sigma_{g,1}\), we can find a one-form \(\lambda_{\Sigma}\) with \(d\lambda_{\Sigma}=\omega_{\Sigma}\) and \(\lambda_{\Sigma}=xdy\) in the twist region. Now \(E^{\prime}\) becomes an exact Lefschetz fibration with fiberwise symplectic form
\[\omega=d(\lambda_{\Sigma}+\alpha),\]
where \(\alpha\) is the one-form described in Proposition 6.4. So the discussions from the previous section imply Proposition 6.4 in the closed case as well, and this finishes the proof of Theorem 6.1.
## 7 A-model computations of symplectic cohomology
### Single Dehn twist on a closed surface
We first discuss the case of a single Dehn twist on the closed surface \(\Sigma_{g}\), and then point out the extensions to the cases of multiple Dehn twists and of the punctured surface \(\Sigma_{g,k}\). In the following, we use the notation \(R^{d}\) to denote the span
\[\langle[e_{1}^{d}],[e_{2}^{d}],\cdots,[e_{d-1}^{d}],[f^{d}]\rangle,\]
in other words, \(R^{d}=\operatorname{HF}^{0}(\phi^{d})\) for \(d>0\). Notice that when \(d=0\), we have \(\operatorname{HF}^{*}(\operatorname{id})\cong H^{*}(\Sigma_{g})\), and \(\operatorname{HF}^{0}(\operatorname{id})\) is generated by \([f^{0}]\) together with the fundamental class \(K\in H^{2}(\Sigma_{g})\). We grade the elements \([e_{i}^{d}]\), \([h_{i}^{d}]\), \([f^{d}]\) and \([g^{d}]\) with degree \(d\), making \(\bigoplus_{d=0}^{\infty}R^{d}\) a graded algebra. We have the following
Theorem 7.1.: _The algebra \(\bigoplus_{d=0}^{\infty}R^{d}\) is isomorphic to_
\[\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2})\]
_as graded \(\mathbb{C}\)-algebras, where \(|X|=1\), \(|Y|=2\) and \(|Z|=3\). And the \(\mathbb{C}\)-algebra \(\bigoplus_{d=0}^{\infty}\operatorname{HF}^{0}(\phi^{d})\) is isomorphic to_
\[(\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2}))\oplus\mathbb{C}\langle K\rangle,\]
_as a vector space, where \(|K|=0\). The algebra structure is determined by the subalgebra \(\bigoplus_{d=0}^{\infty}R^{d}\cong\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2})\), together with the relations \(K^{2}=KX=KY=KZ=0\)._
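Note that the relation defining this algebra is homogeneous for the stated grading: with \(|X|=1\), \(|Y|=2\), \(|Z|=3\) (matching the degrees of the generators \([f^{1}]\), \([e_{1}^{2}]\), \([e_{1}^{3}]\) identified in Lemmas 7.2 and 7.3 below),
\[\deg(XYZ)=1+2+3=6,\qquad\deg(Y^{3})=3\cdot 2=6,\qquad\deg(Z^{2})=2\cdot 3=6,\]
so \(XYZ-Y^{3}-Z^{2}\) is homogeneous of degree \(6\) and the quotient inherits the grading by \(d\).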
_Remark 19_.: The fact that the class \([f^{0}]\in\operatorname{HF}^{0}(\phi^{0})\) acts as the identity on any other element, and that \(K^{2}=KX=KY=KZ=0\), follow from the module structure of \(\operatorname{HF}^{*}(\Sigma_{g},\phi)\); see the discussions in e.g. [10, 11, 12]. In the following, when deriving the algebra structure, we will often ignore the second factor and only focus on the subalgebra \(\bigoplus_{d=0}^{\infty}R^{d}\) to simplify the notations.
To prove Theorem 7.1, we first prove that the algebra \(\bigoplus_{d=0}^{\infty}R^{d}\) is generated by \([f^{1}]\), \([e_{1}^{2}]\) and \([e_{1}^{3}]\).
Lemma 7.2.: _The algebra \(\bigoplus_{d=0}^{\infty}R^{d}\) is generated by \([f^{1}]\), \([e_{1}^{2}]\) and \([e_{1}^{3}]\)._
Proof.: Notice that \([f^{2}]=[f^{1}][f^{1}]-[e_{1}^{2}]-[e_{1}^{2}]\) and \([e_{2}^{3}]=[f^{1}][e_{1}^{2}]-[e_{1}^{3}]\), so it suffices to show that for each \(d\geq 4\), the classes \([e_{i}^{d}]\) (\(0<i<d\)) and \([f^{d}]\) are generated by elements of lower degree. To begin with, we notice that if \(1<i<d-1\) and \(d\geq 4\) then \([e_{i}^{d}]=[e_{1}^{2}][e_{i-1}^{d-2}]\), so we only need to focus on \([f^{d}]\), \([e_{1}^{d}]\) and \([e_{d-1}^{d}]\).
Next we observe that for each \(d\geq 4\), we have
\[[f^{d}]=[f^{2}][f^{d-2}]-[e_{2}^{d}]-[e_{d-2}^{d}]=[f^{2}][f^{d-2}]-[e_{1}^{d-2 }][e_{1}^{2}]-[e_{d-3}^{d-2}][e_{1}^{2}],\]
so \([f^{d}]\) can be generated by elements of lower degrees.
Finally, we observe that for each \(d\geq 4\),
\[[e_{1}^{d}]=[f^{1}][e_{1}^{d-1}]-[e_{2}^{d}]=[f^{1}][e_{1}^{d-1}]-[e_{1}^{2}][ e_{1}^{d-2}],\]
and
\[[e_{d-1}^{d}]=[f^{1}][e_{d-2}^{d-1}]-[e_{d-2}^{d}]=[f^{1}][e_{d-2}^{d-1}]-[e_{1 }^{2}][e_{d-3}^{d-2}].\]
So \([e_{1}^{d}]\) and \([e_{d-1}^{d}]\) (with \(d\geq 4\)) are also generated by elements of lower degree.
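For instance, unwinding these recursions in the first nontrivial degree \(d=4\), together with \([f^{2}]=[f^{1}]^{2}-2[e_{1}^{2}]\) and \([e_{2}^{3}]=[f^{1}][e_{1}^{2}]-[e_{1}^{3}]\) from the beginning of the proof, expresses every generator of \(R^{4}\) in terms of \([f^{1}]\), \([e_{1}^{2}]\) and \([e_{1}^{3}]\):
\[[e_{2}^{4}]=[e_{1}^{2}]^{2},\qquad[e_{1}^{4}]=[f^{1}][e_{1}^{3}]-[e_{1}^{2}]^{2},\qquad[e_{3}^{4}]=[f^{1}][e_{2}^{3}]-[e_{1}^{2}]^{2},\qquad[f^{4}]=[f^{2}]^{2}-2[e_{1}^{2}]^{2}.\]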
Lemma 7.3.: _The generators \([f^{1}],[e_{1}^{2}],[e_{1}^{3}]\) satisfy the relation_
\[[f^{1}]\cdot[e_{1}^{2}]\cdot[e_{1}^{3}]=[e_{1}^{2}]^{3}+[e_{1}^{3}]^{2} \tag{8}\]
_where the exponents are taken with respect to the product in \(\bigoplus_{d=0}^{\infty}R^{d}\)._
Proof.: Follows directly from the product relations.
We next show that the relation (8) generates all of the relations among the generators \([f^{1}],[e_{1}^{2}],[e_{1}^{3}]\) in the algebra \(\bigoplus_{d=0}^{\infty}R^{d}\). The proof of Theorem 7.1 then follows immediately. Let
\[P(x,y,z):=xyz-y^{3}-z^{2}\in\mathbb{C}[x,y,z].\]
Lemma 7.4.: _Let \(L(x,y,z)\in\mathbb{C}[x,y,z]\) be a nonzero polynomial such that \(L([f^{1}],[e_{1}^{2}],[e_{1}^{3}])=0\); then there is a polynomial \(Q(x,y,z)\in\mathbb{C}[x,y,z]\) so that_
\[L(x,y,z)=Q(x,y,z)\cdot P(x,y,z).\]
Proof.: To set notation, wherever we write a polynomial such as \(P(x,y,z)\) we think of it as a polynomial in \(\mathbb{C}[x,y,z]\). When we write \(P([f^{1}],[e_{1}^{2}],[e_{1}^{3}])\) we think of it as an element in \(\bigoplus_{d=0}^{\infty}R^{d}\). We write \(L(x,y,z)=g_{k}(y,z)x^{k}+g_{k-1}(y,z)x^{k-1}+\cdots+g_{0}(y,z)\). The assumption that \(L\) is nonzero implies \(k>0\), since \(\mathbb{C}[[e_{1}^{2}],[e_{1}^{3}]]\cong\mathbb{C}[y,z]\), which follows from the relations in Section 5.
We next apply the division algorithm in \(\mathbb{C}(y,z)[x]\) to write
\[L(x,y,z)=\left(x-\frac{y^{3}+z^{2}}{yz}\right)\left(r_{k-1}(y,z)x^{k-1}+\cdots+r_{0}(y,z)\right)+h_{0}(y,z)\]
where \(r_{j}(y,z),h_{0}(y,z)\in\mathbb{C}(y,z)\).
It follows from an inductive argument that we can write
\[r_{j}(y,z)=\frac{\phi_{j}(y,z)}{(yz)^{m_{j}}},\quad\phi_{j}(y,z)\in\mathbb{C}[ y,z],\quad m_{j}\in\mathbb{Z}_{\geq 0}\]
and
\[h_{0}(y,z)=\frac{\rho_{0}(y,z)}{(yz)^{M}},\quad\rho_{0}(y,z)\in\mathbb{C}[y,z],\quad M\geq 1\]
In particular this implies there is a large enough \(N\) so that
\[(yz)^{N}L(x,y,z)=(xyz-y^{3}-z^{2})\left(\widetilde{r}_{k-1}(y,z)x^{k-1}+\cdots+\widetilde{r}_{0}(y,z)\right)+\widetilde{h}_{0}(y,z)\]
where \(\widetilde{r}_{j}(y,z),\widetilde{h}_{0}(y,z)\in\mathbb{C}[y,z]\). We next plug \(x=[f^{1}]\), \(y=[e_{1}^{2}]\) and \(z=[e_{1}^{3}]\) into the above expression to get \(\widetilde{h}_{0}([e_{1}^{2}],[e_{1}^{3}])=0\). Using the isomorphism \(\mathbb{C}[y,z]\cong\mathbb{C}[[e_{1}^{2}],[e_{1}^{3}]]\) of subalgebras, we have \(\widetilde{h}_{0}(y,z)=0\).
Using the fact \(\mathbb{C}[x,y,z]\) is a UFD and that the polynomial \(xyz-y^{3}-z^{2}\) is irreducible, we conclude there is another polynomial \(Q(x,y,z)\) so that \(L(x,y,z)=Q(x,y,z)P(x,y,z)\). This concludes the proof of the lemma.
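The irreducibility of \(xyz-y^{3}-z^{2}\) used above can also be seen directly: the polynomial has degree one in \(x\), so a nontrivial factorization would force a nonconstant common divisor of \(yz\) and \(y^{3}+z^{2}\) in \(\mathbb{C}[y,z]\), and no such divisor exists. As an additional sanity check, one can ask a computer algebra system to factor \(P\); the following sketch (assuming SymPy is available; it factors over the rationals) reports a single irreducible factor.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P = x*y*z - y**3 - z**2

# factor_list returns (constant, [(factor, multiplicity), ...]);
# for P it returns the polynomial itself with multiplicity one.
print(sp.factor_list(P))
```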
It follows also from the above computations that
Theorem 7.5.: _We have the following description of \(\bigoplus_{d=0}^{\infty}\mathrm{HF}^{1}(\Sigma_{g},\phi^{d})\) as a module over \(\bigoplus_{d=0}^{\infty}\mathrm{HF}^{0}(\phi^{d})\):_
\[\bigoplus_{d=0}^{\infty}\mathrm{HF}^{1}(\Sigma_{g},\phi^{d})\cong\mathbb{C}[ X,Y,Z]/(XYZ-Y^{3}-Z^{2})\oplus(\mathbb{C}[X])^{2g-2}\oplus\mathbb{C}.\]
_The above expression indicates that \(A\subseteq\mathrm{HF}^{0}(\Sigma_{g},\phi^{d})\) acts on the first factor by multiplication, on the second factor by projection to \(\mathbb{C}[X]\) followed by multiplication, and on the last factor by projection to \(\mathbb{C}\) followed by multiplication._
Proof.: Here is a sketch of the proof. If we denote by \(S\) the submodule generated by the classes \([h^{i}_{j}]\) and \([g^{1}]\), then the argument in the proof of Lemma 7.2 shows that \(S\) is generated by the elements \([g^{1}]\), \([h^{2}_{1}]\), and \([h^{3}_{1}]\). The extra classes come from outside of the twist region, i.e. the classes that correspond to \(H^{1}(\Sigma_{g})\) (when \(d=0\)) and \(H^{1}(\Sigma^{\prime}_{g})\) (when \(d>0\)). The dimensions of the vector spaces spanned by these extra classes are \(2g-1\) (when \(d=0\)) and \(2g-2\) (when \(d>0\)) respectively. Multiplication by the Seidel element \(X=[f^{1}]\) identifies these extra classes from different degrees, except that the Seidel element annihilates one of the extra classes from \(\operatorname{HF}^{1}(\phi^{0})\cong H^{1}(\Sigma_{g})\), which corresponds to the fixed point contained in the twist region.
_Remark 20_.: The main difference from [10] is that we need to deal with the product of \(\operatorname{HF}^{1}(\phi^{0})\) and \(\operatorname{HF}^{0}(\phi^{d})\). The calculations for those products can be derived from the "extrinsic" description of the products in Floer homologies, see [11]. More concretely, we can extend the small Hamiltonian \(H_{0}\) to \(\Sigma_{g}\) in such a way that there is an extra pair of critical points with Morse indices \(1\) and \(2\) inside the twist region \(N\). We then count Floer cylinders that intersect the ascending manifolds of the negative gradient flow \(-\nabla H_{0}\). The only nontrivial such count (besides those calculated by the cup product of \(H^{*}(\Sigma^{\prime}_{g})\)) results in \([g^{0}]\cdot[e^{d}_{i}]=[h^{d}_{i}]\) for \(d\geq 2\) and \(i=1,2,\cdots,d-1\).
_Remark 21_.: Technically speaking there is another multiplication on fixed point Floer cohomology taking pairs of elements in \(\bigoplus_{d=0}^{\infty}\operatorname{HF}^{1}(\Sigma_{g},\phi^{d})\) to an element in \(\bigoplus_{d=0}^{\infty}\operatorname{HF}^{0}(\Sigma_{g},\phi^{d})\). However this product is not very interesting: the only nontrivial piece comes from the product \(\operatorname{HF}^{1}(\Sigma_{g},\phi^{0})\otimes\operatorname{HF}^{1}(\Sigma_{g},\phi^{0})\to\operatorname{HF}^{0}(\Sigma_{g},\phi^{0})\), which is the classical cup product. Henceforth we will not mention this product and focus on the module structure instead.
It is clear from the proof of Theorem 7.1 that
\[R^{d}\cong(\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2}))_{d},\]
the degree \(d\) part of the graded algebra \(\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2})\). If we consider the direct system \(\{R^{d}\}\) with the connecting map \(R^{d}\to R^{d+1}\) given by multiplication by the Seidel element \([f^{1}]\in\operatorname{HF}^{0}(\phi)\), then the direct limit \(\varinjlim_{d}R^{d}\) can be identified with the direct limit
\[\varinjlim_{d}(\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2}))_{d}\]
where the connecting map is given by multiplication by \(X\). With the above observations, we have the following
Theorem 7.6.: \(\varinjlim_{d}R^{d}=\varinjlim_{d}\operatorname{HF}^{0}(\phi^{d})\cong \mathbb{C}[Y,Z]/(YZ-Y^{3}-Z^{2})\)_._
Proof.: When taking the direct limit, we can ignore the second factor in \(\operatorname{HF}^{0}(\phi^{d})\cong\mathbb{C}^{2}\), as its multiplication with the Seidel elements yields zero. Identify \(R^{d}\) with \((\mathbb{C}[X,Y,Z]/(XYZ-Y^{3}-Z^{2}))_{d}\). It is clear from the definition of the direct system that the direct limit is isomorphic to
\[(\mathbb{C}[X,Y,Z,X^{-1}]/(XYZ-Y^{3}-Z^{2}))_{0},\]
the degree \(0\) part of the graded algebra \(\mathbb{C}[X,Y,Z,X^{-1}]/(XYZ-Y^{3}-Z^{2})\). This is isomorphic to \(\mathbb{C}[Y,Z]/(YZ-Y^{3}-Z^{2})\).
This completes the proof of Theorem 1.5.
Similarly, we can consider the direct limit \(\varinjlim_{d}\operatorname{HF}^{1}(\phi^{d})\), where the connecting map is also defined by multiplication by the Seidel element \([f^{1}]\). The exact same calculation as for \(\varinjlim_{d}R^{d}\), together with the cup product structure on \(H^{*}(\Sigma^{0}_{g})\), shows the following
Theorem 7.7.: \(\varinjlim_{d}\operatorname{HF}^{1}(\phi^{d})\cong\mathbb{C}[Y,Z]/(YZ-Y^{3}-Z^ {2})\oplus\mathbb{C}^{2g-2}\) _as a \(\varinjlim_{d}\operatorname{HF}^{0}(\phi^{d})\) module, where \(\varinjlim_{d}\operatorname{HF}^{0}(\phi^{d})\cong\mathbb{C}[Y,Z]/(YZ-Y^{3}-Z ^{2})\) acts on the first factor by multiplication, and on the second factor by projection to \(\mathbb{C}\) followed by diagonal multiplication._
Theorem 1.3 then follows from Theorems 7.6, 7.7, and 2.2.
### The case of multiple Dehn twists
Using the product formula we developed in [10], we can also compute the direct limit in the case we are performing multiple Dehn twists simultaneously.
The same computation as in Section 6 shows that the Seidel element is horizontal and transverse, so each Dehn twist around \(C_{i}\) contributes two elliptic Reeb orbits, as in Theorem 6.1, to the Seidel class. After a choice of coherent orientations (see Remark 14) they all contribute with the same sign. With the same algebraic computation of the direct limit, we can show that
Theorem 7.8.: _Let \(C_{1},...,C_{k}\) be circles on \(\Sigma_{g}\) satisfying the conditions of Remark 3. Let \(\phi\) denote the simultaneous Dehn twists around \(\{C_{1},..,C_{k}\}\). We have_
\[\varinjlim_{d}\operatorname{HF}^{0}(\phi^{d})=A\times_{\mathbb{C}}A\times_{ \mathbb{C}}A\times...\times_{\mathbb{C}}A\]
_as algebras. Here \(\times_{\mathbb{C}}\) denotes the fiber product of rings over their common map \(A\to\mathbb{C}\), and there are \(k\) copies of \(A\) in the fiber product. We also have_
\[\varinjlim_{d}\operatorname{HF}^{1}(\phi^{d})=(A\times_{\mathbb{C}}A\times_{ \mathbb{C}}A\times...\times_{\mathbb{C}}A)\oplus\mathbb{C}^{2g-2}\]
_as \(\varinjlim_{d}\operatorname{HF}^{0}(\phi^{d})\) modules, where the action of \(\times_{\mathbb{C}}^{k}A\) on the second factor is the projection to \(\mathbb{C}\) followed by the diagonal multiplication._
The above result recovers the mirror statement, Theorem 2.3.
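Here, concretely, an element of the \(k\)-fold fiber product is a tuple whose components have the same image under the common map to \(\mathbb{C}\):
\[A\times_{\mathbb{C}}A\times_{\mathbb{C}}\cdots\times_{\mathbb{C}}A=\{(a_{1},\ldots,a_{k})\in A^{k}\ :\ \varepsilon(a_{1})=\cdots=\varepsilon(a_{k})\},\]
where \(\varepsilon:A\to\mathbb{C}\) denotes the common map in the statement (by analogy with Theorems 7.9 and 7.10, this should be evaluation at the origin).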
### Dehn twists on punctured Riemann surfaces
We next explain what the symplectic cohomology looks like for nodal surfaces with punctures. For simplicity we consider \(\Sigma_{g,1}\), a surface with a single puncture at \(p\), and let \(\phi\) be a Dehn twist around a non-separating curve \(C\).
The Seidel element in this setting is now \([e_{0}^{1}+e_{1}^{1}+u_{0}^{1}]\). Then the symplectic cohomology is given by
Theorem 7.9.: _We have_
\[\varinjlim_{d}\operatorname{HF}^{0}(\Sigma_{g,1},\phi^{d})\cong(\mathbb{C}[Y,Z]/(YZ-Y^{3}-Z^{2}))\times_{\mathbb{C}}\mathbb{C}[T],\]
_where \(T^{i}\) is given by the class \([u^{1}_{i}]\), and the fiber product is given by evaluations at 0. We also have_
\[\varinjlim_{d}\operatorname{HF}^{1}(\Sigma_{g,1},\phi^{d})\cong(\mathbb{C}[W] \times_{\mathbb{C}}\mathbb{C}[T])\oplus\mathbb{C}^{2g-2}\]
_as a module over \(\varinjlim_{d}\operatorname{HF}^{0}(\Sigma_{g,1},\phi^{d})\), where the fiber product is given by evaluations \(f(W)\mapsto f(0)-f(1)\) and \(g(T)\mapsto g(0)\). The module structure on the first summand is given by \((Y,0)\mapsto(W-W^{2},0)\), \((Z,0)\mapsto(W^{2}-W^{3},0)\) followed by multiplications, and the module structure on the second summand is projection to \(\mathbb{C}\) followed by scalar multiplication._
Proof.: We note that the proof for \(\varinjlim_{d}\operatorname{HF}^{0}(\Sigma_{g,1},\phi^{d})\) is very similar to the non-punctured case. The key is to realize that the generators near the puncture and the generators in the Dehn twist region do not interact with each other, save for the fact that both \(u^{1}_{0}\) and \(e^{1}_{0}+e^{1}_{1}\) contribute to the Seidel element \([f^{1}]=[e^{1}_{0}+e^{1}_{1}+u^{1}_{0}]\) with respect to which we take the direct limit. This is responsible for the fact that we see the fiber product of \(A\) with \(\mathbb{C}[T]\) instead of the direct product. The main difficulty is working out the module structure of \(\varinjlim_{d}\operatorname{HF}^{1}(\Sigma_{g,1},\phi^{d})\) over \(\varinjlim_{d}\operatorname{HF}^{0}(\Sigma_{g,1},\phi^{d})\), which we describe in more detail.
The extra \(2g-2\) generators of \(\mathbb{C}^{2g-2}\) correspond to the fixed points away from the twist region and the punctured region; they have trivial products with the elements in \(\varinjlim_{d}\operatorname{HF}^{0}(\Sigma_{g,1},\phi^{d})\) except for the unit element (which corresponds to the classes \([f^{d}]\)), so we can ignore these in the following discussion. The main difference from the non-punctured case is that there is an extra generator \([\varphi^{d}]=[h^{d}_{0}-v^{d}_{0}]\). The images of \([\varphi^{d}]\) in the direct limit then correspond to the element \((W^{d},-1)\) in the fiber product \(\mathbb{C}[W]\times_{\mathbb{C}}\mathbb{C}[T]\), the image of \([g^{d}]=[h^{d}_{0}+h^{d}_{d}]\) corresponds to the element \((1,0)\), while the images of the classes \([v^{d}_{i}]\) correspond to the elements \((0,T^{i})\) in the fiber product. The same proof as in Lemma 7.2 shows that the module \(\varinjlim_{d}\operatorname{HF}^{1}(\Sigma_{g,1},\phi^{d})\) is generated by the images of the classes \([\varphi^{d}]\), \([g^{1}]\), \([v^{d}_{i}]\), \([h^{2}_{1}]\) and \([h^{3}_{1}]\). In fact, the last two classes \([h^{2}_{1}]\) and \([h^{3}_{1}]\) can be generated by \([\varphi^{d}]\) and \([g^{1}]\) as well. To see this, notice that
\[[h^{2}_{1}]=[e^{1}_{0}+e^{1}_{1}+u^{1}_{0}]\cdot[h^{1}_{0}-v^{1}_{0}]-[h^{2}_{ 0}-v^{2}_{0}]\]
and
\[[h^{3}_{1}]=[e^{1}_{0}+e^{1}_{1}+u^{1}_{0}]\cdot[h^{2}_{0}-v^{2}_{0}]-[h^{3}_{ 0}-v^{3}_{0}].\]
The images of the classes \([\varphi^{d}]\) and the image of the class \([g^{1}]\) are linearly independent over \(\mathbb{C}\). To see this, simply calculate the images of these classes in the same degree \(D\) by multiplying by appropriate powers of \([f^{1}]\), and we get the following
\[[g^{D}]=[h^{D}_{0}+h^{D}_{D}],\quad[\varphi^{D}]=[h^{D}_{0}-v^{D}_{0}],\quad[ f^{1}]^{D-k}\cdot[\varphi^{k}]=[h^{D}_{0}+h^{D}_{D-k}-v^{D}_{0}]\]
which are easily seen to be linearly independent over \(\mathbb{C}\) in \(\operatorname{HF}^{1}(\Sigma_{g,1},\phi^{D})\).
The actions of \((Y,0)=[e^{2}_{1}]\) and \((Z,0)=[e^{3}_{1}]\) are \((Y,0)\cdot(W^{d},-1)=(W^{d+1},-1)-(W^{d+2},-1)\) and \((Z,0)\cdot(W^{d},-1)=(W^{d+2},-1)-(W^{d+3},-1)\), which come from the relations
\[[e^{2}_{1}]\cdot[h^{d}_{0}-v^{d}_{0}]=[e^{1}_{0}+e^{1}_{1}+u^{1}_{0}]\cdot[h^{ d+1}_{0}-v^{d+1}_{0}]-[h^{d+2}_{0}-v^{d+2}_{0}]\]
and
\[[e^{3}_{1}]\cdot[h^{d}_{0}-v^{d}_{0}]=[e^{1}_{0}+e^{1}_{1}+u^{1}_{0}]\cdot[h^{ d+2}_{0}-v^{d+2}_{0}]-[h^{d+3}_{0}-v^{d+3}_{0}].\]
We now present the case for multiple punctures.
Theorem 7.10.: _We have_
\[\varinjlim_{d}\mathrm{HF}^{0}(\Sigma_{g,k},\phi^{d})\cong(\mathbb{C}[Y,Z]/(YZ-Y^ {3}-Z^{2}))\times_{\mathbb{C}}\mathbb{C}[T_{1}]\times_{\mathbb{C}}\mathbb{C}[T_{ 2}]\times_{\mathbb{C}}\cdots\times_{\mathbb{C}}\mathbb{C}[T_{k}],\]
_where the fiber product is taken by evaluations at \(0\). For the module structure, we have_
\[\varinjlim_{d}\mathrm{HF}^{1}(\Sigma_{g,k},\phi^{d})\cong(\mathbb{C}[W]\times_{\mathbb{C}}(\mathbb{C}[T_{1}]\times\mathbb{C}[T_{2}]\times\cdots\times\mathbb{C}[T_{k}]))\oplus\mathbb{C}^{2g-2}\]
_where the fiber product is given as follows. Let \(f(W)\in\mathbb{C}[W]\) and let \(g_{j}(T_{j})\in\mathbb{C}[T_{j}]\), then we require_
\[f(0)-f(1)=g_{1}(0)+\cdots+g_{k}(0).\]
_The module action is given by the following. Let \((f(Y,Z),g_{1}(T_{1}),\cdots,g_{k}(T_{k}))\in\varinjlim_{d}\mathrm{HF}^{0}(\Sigma_{g,k},\phi^{d})\). The factor \(Y\) acts by multiplication via \(W-W^{2}\) on the \(\mathbb{C}[W]\) component. The factor \(Z\) acts by multiplication via \(W^{2}-W^{3}\) on the \(\mathbb{C}[W]\) factor. We extend this multiplicatively to obtain an action of \(f(Y,Z)\). The element \(g_{j}(T_{j})\) acts by multiplication on \(\mathbb{C}[T_{j}]\). Finally \((f(Y,Z),g_{1}(T_{1}),\cdots,g_{k}(T_{k}))\in\varinjlim_{d}\mathrm{HF}^{0}(\Sigma_{g,k},\phi^{d})\) acts by multiplication by \(f(0,0)\) in the \(\mathbb{C}^{2g-2}\) factor._
Proof.: The case with multiple punctures is almost the same as the case with only one puncture. For each degree \(d\) and \(j=1,2,\cdots,k\), we have fixed points \(u^{d}_{i,j}\) and \(v^{d}_{i,j}\) that correspond to the \(k\) punctures. The Seidel element is \([f^{1}]=[e_{0}^{1}+e_{1}^{1}+u_{0,1}^{1}+\cdots+u_{0,k}^{1}]\), and the elements \(T_{j}^{i}\in\varinjlim_{d}\mathrm{HF}^{0}(\Sigma_{g,k},\phi^{d})\) correspond to the classes \([u^{d}_{i,j}]\). To describe the module structure, the only difference from the once punctured case is that there are \(k\) extra special cohomology classes \([\varphi^{d}_{j}]\) in each degree, corresponding to \([h^{d}_{0}-v^{d}_{0,j}]\). These classes are identified with \((W^{d},0,\cdots,0,-1,0,\cdots,0)\) in our description of the module, where \(-1\) is in the \(j\)-th summand \(\mathbb{C}[T_{j}]\). The fact that \((1,0,\cdots,0)\), \((W,-1,0,\cdots,0)\), \((W^{2},-1,0,\cdots,0),\cdots,(W^{D},-1,0,\cdots,0)\), \((W^{D},0,-1,0,\cdots,0),\cdots,(W^{D},0,\cdots,0,-1)\) are linearly independent over \(\mathbb{C}\) is reflected by the fact that in \(\mathrm{HF}^{1}(\Sigma_{g,k},\phi^{D})\), the classes
\[[g^{D}]=[h^{D}_{0}+h^{D}_{D}],[h^{D}_{0}+h^{D}_{D-1}-v^{D}_{0,1}],\cdots,[h^{ D}_{0}+h^{D}_{1}-v^{D}_{0,1}],\]
\[[h^{D}_{0}-v^{D}_{0,1}],[h^{D}_{0}-v^{D}_{0,2}],\cdots,[h^{D}_{0}-v^{D}_{0,k}]\]
are linearly independent over \(\mathbb{C}\).
After we take Spec of \(\varinjlim_{d}\mathrm{HF}^{0}\), adding punctures to \(\Sigma_{g}\) corresponds to a nodal degeneration of the mirror. Comparing with the B-side computation in Theorem 2.4, we recover the mirror statement in Theorem 1.3.
### Homogeneous Coordinate Ring for \(\phi^{2}\)
In this subsection we compute the homogeneous coordinate ring \(\bigoplus_{d\geq 0}\mathrm{HF}^{0}(\phi^{2d})\) and its module action on \(\bigoplus_{d\geq 0}\mathrm{HF}^{1}(\phi^{2d})\).
Theorem 7.11.: _Let \(\phi\) denote the Dehn twist along a non-separating circle \(C\subseteq\Sigma_{g}\). Then we have_
\[\bigoplus_{d\geq 0}\operatorname{HF}^{0}(\phi^{2d})\cong(\mathbb{C}[X,Y,Z]/( XYZ-Y^{4}-Z^{2}))\oplus\mathbb{C}\]
_as a graded algebra, where \(|X|=|Y|=2\) and \(|Z|=4\), and the multiplication of the second \(\mathbb{C}\) factor with any element of positive degree is trivial. And_
\[\bigoplus_{d\geq 0}\operatorname{HF}^{1}(\phi^{2d})\cong\mathbb{C}[X,Y,Z]/( XYZ-Y^{4}-Z^{2})\oplus(\mathbb{C}[X])^{2g-2}\oplus\mathbb{C}\]
_as a \(\bigoplus_{d\geq 0}\operatorname{HF}^{0}(\phi^{2d})\) module._
Proof.: The proof is more or less analogous to the proof of Theorem 7.1. The elements \(X=[e_{0}^{0}+e_{0}^{2}]\), \(Y=[e_{1}^{2}]\) and \(Z=[e_{1}^{4}]\), together with the fundamental class in \(H^{2}(\Sigma_{g}^{\prime})\), generate the ring \(\bigoplus_{d\geq 0}\operatorname{HF}^{0}(\phi^{2d})\), and satisfy the equation \(XYZ=Y^{4}+Z^{2}\). The same proof as in Lemma 7.4 shows that this equation generates all of the relations among \([e_{0}^{0}+e_{0}^{2}]\), \([e_{1}^{2}]\) and \([e_{1}^{4}]\). The proof of the module action of \(\bigoplus_{d\geq 0}\operatorname{HF}^{0}(\phi^{2d})\) also proceeds as before.
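As a quick consistency check on the grading, with \(|X|=|Y|=2\) and \(|Z|=4\) all three monomials in the relation have the same degree,
\[\deg(XYZ)=2+2+4=8,\qquad\deg(Y^{4})=4\cdot 2=8,\qquad\deg(Z^{2})=2\cdot 4=8,\]
so \(XYZ-Y^{4}-Z^{2}\) is indeed a homogeneous relation in the graded algebra.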
Combined with the B-side computation in Proposition 2.7, this produces the mirror symmetry statement in Theorem 1.5.
|
2308.00359 | The asymptotic stability of solitons in the focusing Hirota equation on
the line | In this paper, the $\overline\partial$-steepest descent method and B\"acklund
transformation are used to study the asymptotic stability of solitons to the
Cauchy problem of focusing Hirota equation. The solution of the RH problem is
further decomposed into pure radiation solution and solitons solution obtained
by using $\overline\partial$-techniques and B\"acklund transformation
respectively. As a direct consequence, the asymptotic stability of solitons
for the Hirota equation is obtained. | Ruihong Ma, Engui Fan | 2023-08-01T08:04:26Z | http://arxiv.org/abs/2308.00359v2 | # The asymptotic stability of solitons in the focusing Hirota equation on the line
###### Abstract
In this paper, the \(\overline{\partial}\)-steepest descent method and Backlund transformation are used to study the asymptotic stability of solitons to the Cauchy problem of focusing Hirota equation
\[iq_{t}+\alpha(2|q|^{2}q+q_{xx})+i\beta(q_{xxx}+6|q|^{2}q_{x})=0,\] \[q(x,0)=q_{0}(x),\]
where \(q_{0}\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R}),s\in(\frac{1}{2},1].\) It is shown that the solution of the Cauchy problem can be expressed in terms of the solution of a Riemann-Hilbert (RH) problem. The solution of the RH problem is further decomposed into a pure radiation solution and a soliton solution, obtained by using \(\overline{\partial}\)-techniques and the Backlund transformation respectively. As a direct consequence, the asymptotic stability of solitons for the Hirota equation is obtained.
keywords: Hirota equation; Riemann-Hilbert problem; \(\overline{\partial}\)-steepest descent method, Backlund transformation; asymptotic stability.
_Mathematics Subject Classification:_ 35P25; 35Q51; 35Q15; 35A01; 35G25.
###### Contents
* 1 Introduction and main results
* 2 Direct and inverse scattering transforms
* 2.1 Notations
* 2.2 Jost functions and scattering data
* 2.3 A Riemann-Hilbert problem
* 3 Dispersion for pure radiation solutions
* 3.1 A regular RH problem
* 3.2 A mixed RH Problem and its decomposition
* 3.3 A solvable model near \(z_{1}\) and \(z_{2}\)
* 3.4 The \(\overline{\partial}\) argument
* 4 The asymptotic stability of the solitons
* 4.1 The Backlund transformation
* 4.2 The proof of main result
* A A parabolic cylinder model
* B RH problem under BT
## 1 Introduction and main results
In this paper, we apply the \(\overline{\partial}\)-steepest descent method and the Backlund transformation (BT) to obtain the asymptotic stability of solitons for the Cauchy problem of the Hirota equation,
\[iq_{t}+\alpha(2|q|^{2}q+q_{xx})+i\beta(q_{xxx}+6|q|^{2}q_{x})=0, \tag{1.1}\] \[q(x,0)=q_{0}(x),\ \ (x,t)\in\mathbb{R}\times\mathbb{R}^{+}, \tag{1.2}\]
where \(q_{0}\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R}),s\in(\frac{1}{2},1]\), and the real constants \(\alpha\) and \(\beta\) stand for the second-order and third-order dispersions. The Hirota equation (1.1) is a typical mathematical physics model, which encompasses the well-known NLS equation and derivative NLS equation [1; 2]. It is a more accurate approximation than the NLS equation in describing wave propagation in the ocean and optical fibers [3; 4; 5].
In recent years, much work has been done to study the various mathematical properties of the Hirota equation. For example, exact solutions such as multisoliton solutions, breather solutions, rational solutions and rogue wave solutions for the Hirota equation were widely studied [6; 7; 8; 9]. The \(N\)-soliton solutions for the Hirota equation with non-zero boundary condition were constructed by using the Riemann-Hilbert (RH) method [10]. The initial boundary value problem for the Hirota equation on the half line was analyzed by using the Fokas unified method [11]. The long time asymptotics for the Hirota equation with decaying initial data was investigated via the nonlinear steepest descent method [12]. We further found the Painleve asymptotics for the Hirota equation with Schwartz Cauchy data in the transition region [13]. It was shown that the Cauchy problem for the Hirota
equation is globally well-posed in the space \(H^{s}(\mathbb{R}),s\geq 1\)[14], and admits the \(L^{2}\)-conservation law \(||q(t)||_{L^{2}}=||q_{0}||_{L^{2}},\ t\in\mathbb{R}\). Further the Cauchy problem for the Hirota equation is well-posed in the space \(H^{s}(\mathbb{R}),s>1/2\)[15]. The orbital stability of solitons for the Hirota equation in Sobolev space \(H^{1}(\mathbb{R})\) was shown [16].
In this paper, we use the \(\overline{\partial}\)-steepest descent method and the Backlund transformation to obtain the asymptotic stability of solitons for the Hirota equation in the Sobolev space \(H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\). The asymptotic analysis of solitons for the focusing Hirota equation is necessarily more detailed than in the defocusing case due to the presence of solitons, which correspond to the discrete spectrum of the non-self-adjoint operator associated with (1.1).
We are interested in the asymptotic stability of solitons of the Hirota equation (1.1), given by the explicit expressions [17]
\[q_{(\eta,\xi,\gamma,\beta,\alpha)}(x,t) =2\eta e^{2i(-\xi x-4\beta\xi^{3}t-2\alpha\xi^{2}t+12\beta\xi\eta ^{2}t+2\alpha\eta^{2}t)+i\gamma}\] \[\times\mathrm{sech}(-2\eta x-24\beta\eta\xi^{2}t+8\beta\eta^{3}t- 8\alpha\eta\xi t), \tag{1.3}\]
where \(\lambda=\xi+i\eta\) is a nonzero complex parameter that determines the soliton. We consider here the question of their asymptotic stability, that is, the behavior of solutions when \(q_{0}\) is close to \(q_{(\eta,\xi,\gamma)}\) for a particular \((\eta,\xi,\gamma)\).
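As a sanity check, one can verify symbolically that (1.3) solves (1.1). The following sketch (assuming SymPy is available) substitutes (1.3) into (1.1), using that \(|q|^{2}=4\eta^{2}\mathrm{sech}^{2}(\cdot)\) since the exponential prefactor has modulus one; the residual is expected to simplify to zero.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
al, be, eta, xi, ga = sp.symbols('alpha beta eta xi gamma', real=True)

# phase and argument of the soliton (1.3)
phase = 2*(-xi*x - 4*be*xi**3*t - 2*al*xi**2*t
           + 12*be*xi*eta**2*t + 2*al*eta**2*t) + ga
arg = -2*eta*x - 24*be*eta*xi**2*t + 8*be*eta**3*t - 8*al*eta*xi*t

q = 2*eta*sp.exp(sp.I*phase)*sp.sech(arg)
absq2 = 4*eta**2*sp.sech(arg)**2      # |q|^2: the exponential factor is unimodular

# left-hand side of the Hirota equation (1.1)
lhs = (sp.I*sp.diff(q, t)
       + al*(2*absq2*q + sp.diff(q, x, 2))
       + sp.I*be*(sp.diff(q, x, 3) + 6*absq2*sp.diff(q, x)))

# strip the common exponential factor and rewrite sech in exponentials
residual = sp.simplify((lhs/sp.exp(sp.I*phase)).rewrite(sp.exp))
print(residual)   # expected to be 0
```

Our principal result is now stated as follows.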
**Theorem 1.1**.: _Let \(q_{0}\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\) for fixed \(s\in(1/2,1]\), and let \(q_{(\eta_{0},\xi_{0},\gamma_{0})}(x,0)\) be a soliton of the Hirota equation. Then there exist positive constants \(\epsilon_{0}=\epsilon_{0}(\eta_{0},\xi_{0})\), \(T=T(\eta_{0},\xi_{0})\), \(C=C(\eta_{0},\xi_{0})\) such that if_
\[\epsilon:=||q_{(\eta_{0},\xi_{0},\gamma_{0})}(\cdot,0)-q_{0}||_{H^{1}(\mathbb{ R})\,\cap\,L^{2,s}(\mathbb{R})}<\epsilon_{0}. \tag{1.4}\]
_then there exists a soliton \(q_{(\eta_{1},\xi_{1},\gamma_{1})}(x,t)\) such that, for the solution \(q(x,t)\) of the Cauchy problem (1.1)-(1.2), we have_
\[|(\eta_{1},\xi_{1},\gamma_{1},x_{1})-(\eta_{0},\xi_{0},\gamma_{0},x_{0})|<C\epsilon, \tag{1.5}\]
_and for all \(|t|\geq T\),_
\[||q(\cdot,t)-q_{(\eta_{1},\xi_{1},\gamma_{1})}(\cdot,t)||_{L^{\infty}}<C \epsilon|t|^{-\frac{1}{2}}. \tag{1.6}\]
The organization of the paper is as follows. In Section 2, we describe the inverse scattering transform to formulate the Cauchy problem (1.1)-(1.2) into a RH problem 2.1 (see below). In Section 3, with the \(\overline{\partial}\) analysis, we obtain a RH problem with pure radiation solution. Removing this component of the solution results in a \(\overline{\partial}\) problem which is analyzed in Subsection 3.4. In Section 4, a Backlund transformation is constructed to establish the relation between soliton and soliton-free solutions. We estimate the norm of the transformation in Lemma 4.6, and further give the proof of Theorem 1.1.
## 2 Direct and inverse scattering transforms
In this section, we will analyze the spectral problem of the focusing Hirota equation (1.1) to obtain Jost functions and scattering matrix related to initial value \(q_{0}\).
### Notations
With regard to complex variables, given a variable \(z\) or a function \(f(z)\), we denote by \(z^{*}\) and \(f^{*}(z)\) their respective complex conjugates. The symbol \(\overline{\partial}\) denotes the derivative with respect to \(z^{*}\), i.e. if \(z=x+iy\), then
\[\overline{\partial}f=\frac{1}{2}(f_{x}+if_{y}).\]
We introduce the Japanese bracket \(\langle x\rangle=\sqrt{1+x^{2}}\) and the following normed spaces: A weighted space \(L^{p,s}(\mathbb{R})\) defined with the norm
\[\|q\|_{L^{p,s}(\mathbb{R})}:=\|\langle x\rangle^{s}q\|_{L^{p}(\mathbb{R})};\]
A Sobolev space \(W^{k,p}(\mathbb{R})\) defined with the norm \(\|q\|_{W^{k,p}(\mathbb{R})}:=\sum_{j=0}^{k}\|\partial^{j}q\|_{L^{p}(\mathbb{R })}\), where \(\partial^{j}q\) is the \(j^{th}\) weak derivative of \(q\);
A Sobolev space \(H^{k}(\mathbb{R})\) defined with the norm \(\|q\|_{H^{k}(\mathbb{R})}:=\|\langle x\rangle^{k}\mathcal{F}(q)\|_{L^{2}( \mathbb{R})}\), where \(\mathcal{F}(q)\) is the Fourier transform of \(q\);
Recall that \(L^{2,s}(\mathbb{R})\) is embedded into \(L^{1}(\mathbb{R})\) for any \(s>\frac{1}{2}\). Based on this fact we consider the potential function \(q(x)\in L^{1}(\mathbb{R})\) for simplicity.
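This embedding is simply the Cauchy-Schwarz inequality: for \(s>\frac{1}{2}\),
\[\|q\|_{L^{1}(\mathbb{R})}=\int_{\mathbb{R}}\langle x\rangle^{-s}\langle x\rangle^{s}|q(x)|dx\leq\Big{(}\int_{\mathbb{R}}\langle x\rangle^{-2s}dx\Big{)}^{1/2}\|q\|_{L^{2,s}(\mathbb{R})},\]
and the first factor on the right is finite precisely because \(2s>1\).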
### Jost functions and scattering data
The focusing Hirota equation (1.1) is just a compatibility condition of the system of linear partial differential equation
\[\Psi_{x}=M(x,t;z)\Psi,\quad M(x,t;z)=-iz\sigma_{3}+Q, \tag{2.1}\] \[\Psi_{t}=N(x,t;z)\Psi,\quad N(x,t;z)=-i(4\beta z^{3}+2\alpha z^{2} )\sigma_{3}+V, \tag{2.2}\]
where \(\Psi=\Psi(x,t;z)\) is a 2\(\times\)2 matrix-valued eigenfunction, \(z\in\mathbb{C}\) is a spectral parameter, and
\[Q=\begin{pmatrix}0&q(x,t)\\ -q^{*}(x,t)&0\end{pmatrix},\ \ \sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\]
\[V=4\beta z^{2}Q+zV_{1}+V_{0},\quad V_{1}=2\alpha Q-2i\beta(Q_{x}+Q^{2})\sigma_{3},\]
\[V_{0}=-i\alpha(Q_{x}+Q^{2}+q_{0}^{2})\sigma_{3}+\beta[Q_{x},Q]+\beta(2Q^{3}-Q_{ xx}).\]
The Lax pair (2.1)-(2.2) admits a Jost function with asymptotics
\[\Psi(x,t;z)\thicksim e^{-it\theta(z)\sigma_{3}},\quad|x|\to\infty, \tag{2.3}\]
where
\[\theta=\theta(x,t;z)=z\frac{x}{t}+2\alpha z^{2}+4\beta z^{3}. \tag{2.4}\]
Define the eigenfunctions \(\Phi(x,t;z)\)
\[\Phi(x,t;z)=\Psi(x,t;z)e^{it\theta(z)\sigma_{3}}, \tag{2.5}\]
then \(\Phi(x,t;z)\to I,\;\;|x|\to\infty\), and we obtain a equivalent Lax pair
\[\Phi_{x}+iz[\sigma_{3},\Phi]=Q\Phi, \tag{2.6}\] \[\Phi_{t}+i(2\alpha z^{2}+4\beta z^{3})[\sigma_{3},\Phi]=V\Phi, \tag{2.7}\]
whose solutions can be expressed as Volterra type integrals
\[\Phi_{-}(x,t;z) =I+\int_{-\infty}^{x}e^{-iz(x-y)\hat{\sigma_{3}}}Q(y,t)\Phi_{-}(y, t;z)dy, \tag{2.8}\] \[\Phi_{+}(x,t;z) =I-\int_{x}^{+\infty}e^{-iz(x-y)\hat{\sigma_{3}}}Q(y,t)\Phi_{+}(y,t;z)dy, \tag{2.9}\]
where \(e^{\hat{\sigma}_{3}}A=e^{\sigma_{3}}Ae^{-\sigma_{3}}\) for a \(2\times 2\) matrix \(A\).
Let \(\Phi_{\pm,j}(x,t;z)(j=1,2)\) denote the \(j\)-th column of \(\Phi_{\pm}(x,t;z)\), then it can be shown that \(\Phi_{-,1}(x,t;z)\) and \(\Phi_{+,2}(x,t;z)\) are analytic in \(\mathbb{C}^{+}=\{z\,|\,\mbox{Im}z>0\}\) and continuous in \(\mathbb{C}^{+}\cup\mathbb{R}=\{z\,|\,\mbox{Im}z\geq 0\}\), and \(\Phi_{+,1}(x,t;z)\) and \(\Phi_{-,2}(x,t;z)\) are analytic in \(\mathbb{C}^{-}=\{z\,|\,\mbox{Im}z<0\}\) and continuous in \(\mathbb{C}^{-}\cup\mathbb{R}=\{z\,|\,\mbox{Im}z\leq 0\}\).
Since \(\Phi_{+}(x,t;z)e^{-it\theta(z)\sigma_{3}}\) and \(\Phi_{-}(x,t;z)e^{-it\theta(z)\sigma_{3}}\) are fundamental matrix solution of the Lax pair (2.1)-(2.2), there exists a scattering matrix \(S(z)=(s_{ij}(z))_{2\times 2}\) satisfying
\[\Phi_{-}(x,t;z)=\Phi_{+}(x,t;z)e^{-it\theta(z)\hat{\sigma}_{3}}S(z),\quad z\in \mathbb{R}. \tag{2.10}\]
The scattering coefficients can be expressed by using Wronskians
\[s_{11}(z) =\det(\Phi_{-,1}(x,t;z)\quad\Phi_{+,2}(x,t;z)),\] \[s_{12}(z) =e^{-2it\theta(z)}\det(\Phi_{+,2}(x,t;z)\quad\Phi_{-,2}(x,t;z)),\] \[s_{21}(z) =e^{2it\theta(z)}\det(\Phi_{+,1}(x,t;z)\quad\Phi_{-,1}(x,t;z)),\] \[s_{22}(z) =\det(\Phi_{-,2}(x,t;z)\quad\Phi_{+,1}(x,t;z)),\]
where the scattering coefficient \(s_{11}(z)\) is analytic in \(\mathbb{C}^{+}\) and continuous on \(\mathbb{C}^{+}\cup\mathbb{R}\), and \(s_{22}(z)\) is analytic in \(\mathbb{C}^{-}\) and continuous on \(\mathbb{C}^{-}\cup\mathbb{R}\). In addition, \(s_{12}(z)\) and \(s_{21}(z)\) are continuous in \(\mathbb{R}\). The eigenfunctions and the scattering matrix satisfy the symmetry relations
\[\Phi(x,t;z)=\sigma_{2}\Phi^{*}(x,t;z^{*})\sigma_{2},\quad S(z)=\sigma_{2}S^{*}( z^{*})\sigma_{2}.\]
We can obtain \(s_{11}(z)=s_{22}^{*}(z^{*})\) and \(s_{12}(z)=-s_{21}^{*}(z^{*})\). Define reflection coefficient
\[r(z):=\frac{s_{21}(z)}{s_{11}(z)},\ z\in\mathbb{R}, \tag{2.11}\]
then it can be shown that [18, 19].
**Lemma 2.1**.: _There exists an open dense set \(\mathcal{G}\subset L^{1}(\mathbb{R})\) such that, for \(q\in\mathcal{G}\), the scattering function \(s_{11}(z)\) has at most a finite number of zeros forming a set \(\mathcal{Z}_{+}=\{z_{1},\ldots,z_{N}\}\) in \(\mathbb{C}_{+}\), with \(s_{11}(z)\neq 0\) for all \(z\in\mathbb{R}\) and \(s^{\prime}_{11}(z_{k})\neq 0\) for all \(k\). Denote \(\mathcal{Z}_{-}=\{z_{1}^{*},\ldots,z_{N}^{*}\}\). The cardinality map \(q\mapsto\sharp\mathcal{Z}\) is locally constant near \(q\) in \(\mathcal{G}\), and the map \(\mathcal{G}\ni q\to(z_{1},\ldots,z_{N})\in\mathbb{C}_{+}^{N}\) is locally Lipschitz._
We denote by \(\mathcal{G}_{N}\) the open subset of \(\mathcal{G}\) formed by the elements such that \(\sharp\mathcal{Z}=N\). We call the potentials in \(\mathcal{G}\) _generic_. \(\mathcal{G}_{1}\) simply contains the soliton (1.3). It is shown in [20] that solitons remain in \(\mathcal{G}_{1}\) under small \(L^{1}\)-perturbations.
**Lemma 2.2**.: _Let \(s\in(\frac{1}{2},1]\). For \(q\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\cap\mathcal{G}\) we have \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\). Furthermore, the map \(H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\cap\mathcal{G}\ni q\to r\in H^{s}( \mathbb{R})\cap L^{2,1}(\mathbb{R})\) is locally Lipschitz._
Proof.: For any fixed \(\kappa_{0}>0\) there exists a positive constant \(C\) such that if \(||q||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\leq\kappa_{0}\), then the following estimates hold:
\[||\Phi_{-,j}(x,\cdot)-e_{j}||_{H_{x}^{s}(\mathbb{R})} \leq C||q||_{L^{2,s}(\mathbb{R})},\quad\forall\ x\leq 0,j=1,2,\] \[||\Phi_{+,j}(x,\cdot)-e_{j}||_{H_{x}^{s}(\mathbb{R})} \leq C||q||_{L^{2,s}(\mathbb{R})},\quad\forall\ x\geq 0,j=1,2,\] \[||\Phi_{-,j}(x,\cdot)-e_{j}||_{L_{x}^{2,1}(\mathbb{R})} \leq C||q||_{H^{1}(\mathbb{R})},\quad\forall\ x\leq 0,j=1,2,\] \[||\Phi_{+,j}(x,\cdot)-e_{j}||_{L_{x}^{2,1}(\mathbb{R})} \leq C||q||_{H^{1}(\mathbb{R})},\quad\forall\ x\geq 0,j=1,2. \tag{2.12}\]
To prove (2.12) we consider only the case \(j=1\) with the minus sign. We rely on the fact that if there is \(s\in(0,1]\) such that for an \(f\in L^{2}(\mathbb{R})\) we have
\[||f(\cdot+h)-f(\cdot)||_{L^{2}(\mathbb{R})}\leq C|h|^{s},\qquad\forall h\in \mathbb{R}, \tag{2.13}\]
then \(f\in H^{s}(\mathbb{R})=\dot{H}^{s}(\mathbb{R})\cap L^{2}(\mathbb{R})\) and there is a positive constant \(c\) independent of \(f\) such that \(||f||_{\dot{H}^{s}(\mathbb{R})}\leq cC\), where \(\dot{H}^{s}(\mathbb{R})\) is defined with the norm \(||q||_{\dot{H}^{s}(\mathbb{R})}:=|||x|^{s}\mathcal{F}(q)||_{L^{2}(\mathbb{R})}\).
Define
\[Kf(x,z):=\int_{-\infty}^{x}\begin{pmatrix}e^{-2i(x-y)z}&0\\ 0&e^{2i(x-y)z}\end{pmatrix}Q(q(y))f(y,z)dy. \tag{2.14}\]
According to [21, 22] we obtain
\[||Ke_{2}||_{L^{\infty}_{x}(\mathbb{R},L^{2}_{z}(\mathbb{R}))}\leq||q||_{L^{2}( \mathbb{R})},\]
hence we acquire that
\[||(1-K)^{-1}||_{L^{\infty}_{x}((-\infty,x_{0}),L^{2}_{z}(\mathbb{R})\to L^{ \infty}_{x}(-\infty,x_{0}),L^{2}_{z}(\mathbb{R}))}\leq e^{||q||_{L^{1}}},\quad \forall x_{0}\leq+\infty.\]
We consider the case \(x\leq 0\), for which we obtain
\[(1-K)(\Phi_{-,1}(x,z)-e_{1})=Ke_{1}=\int_{-\infty}^{x}\begin{pmatrix}0\\ -q^{*}e^{2i(x-y)z}\end{pmatrix}dy,\]
and
\[||\Phi_{-,1}(x,z)-e_{1}||_{L^{2}_{z}(\mathbb{R})} \leq e^{||q||_{L^{1}}}\Big{|}\Big{|}\int_{-\infty}^{x}e^{-2i(x-y) z}q^{*}(y)dy\Big{|}\Big{|}_{L^{2}_{z}(\mathbb{R})}\] \[\leq e^{||q||_{L^{1}}}\Big{(}\int_{-\infty}^{x}\langle y\rangle^ {2s}|q(y)|^{2}dy\Big{)}^{1/2}\langle x\rangle^{-s}\] \[\leq e^{||q||_{L^{1}}}||q||_{L^{2,s}(\mathbb{R})}\langle x \rangle^{-s}, \tag{2.15}\]
and
\[||\Phi_{-,1}(x,z)-e_{1}||_{L^{2,1}_{z}(\mathbb{R})} \leq e^{||q||_{L^{1}}}\Big{|}\Big{|}\int_{-\infty}^{x}ze^{2i(x-y)z}q^{*}(y)dy\Big{|}\Big{|}_{L^{2}_{z}(\mathbb{R})}\] \[\leq e^{||q||_{L^{1}}}\Big{|}\Big{|}\int_{-\infty}^{x}\big{(}ze^{2i(x-y)z}q^{*}(y)+q^{*}(y)\big{)}dy\Big{|}\Big{|}_{L^{2}_{z}(\mathbb{R})}\] \[\leq e^{||q||_{L^{1}}}\sqrt{\pi}||q||_{H^{1}(\mathbb{R})}. \tag{2.16}\]
Next, to obtain an estimate of the form (2.13) with \(C\lesssim||q||_{L^{2,s}(\mathbb{R})}\), we define \(\mathcal{N}(x,z):=\Phi_{-,1}(x,z+h)-\Phi_{-,1}(x,z)\) for \(h\in\mathbb{R}\). We have
\[(1-K)\mathcal{N}(x,z)=\int_{-\infty}^{x}\begin{pmatrix}e^{-2i(x-y) (z+h)}-e^{-2i(x-y)z}&0\\ 0&e^{2i(x-y)(z+h)}-e^{2i(x-y)z}\end{pmatrix}\] \[Q(q(y))(\Phi_{-,1}(y,z)-e_{1})dy+\int_{-\infty}^{x}\begin{pmatrix} 0\\ (e^{2i(x-y)(z+h)}-e^{2i(x-y)z})q^{*}(y)\end{pmatrix}dy. \tag{2.17}\]
Using the Fourier transform \(\mathcal{F}\), for \(x\leq 0\) the second term on the r.h.s. of (2.17) satisfies
\[\bigg{\|}\int_{-\infty}^{x} (e^{2i(x-y)(z+h)}-e^{2i(x-y)z})q^{*}(y)dy\bigg{\|}_{L^{2}_{z}}\] \[=\|\mathcal{F}^{*}[q(\cdot+x)\chi_{\mathbb{R}_{-}}](z+h)- \mathcal{F}^{*}[q(\cdot+x)\chi_{\mathbb{R}_{-}}](z)||_{L^{2}_{z}}\] \[\leq C||\mathcal{F}^{*}[q(\cdot+x)\chi_{\mathbb{R}_{-}}](z)||_{H^ {s}_{z}(\mathbb{R})}|h|^{s}\] \[=||q(y+x)||_{L^{2,s}_{y}(\mathbb{R}_{-})}|h|^{s}\leq||q||_{L^{2,s} (\mathbb{R})}|h|^{s}, \tag{2.18}\]
while the first term on the r.h.s. of (2.17) satisfies
\[\Big{|}\Big{|}\int_{-\infty}^{x}\begin{pmatrix}e^{-2i(x-y)(z+h)}-e^{-2i(x-y)z}&0\\ 0&e^{2i(x-y)(z+h)}-e^{2i(x-y)z}\end{pmatrix}Q(q(y))(\Phi_{-,1}(y,z)-e_{1})dy\Big{|}\Big{|}_{L^{2}_{z}(\mathbb{R})}\] \[\leq 2^{(1-s)}|h|^{s}\int_{-\infty}^{x}|y|^{s}|q(y)|\,||\Phi_{-,1}(y,z)-e_{1}||_{L^{2}_{z}(\mathbb{R})}dy\] \[\leq 2^{(1-s)}|h|^{s}e^{||q||_{L^{1}}}||q||_{L^{2,s}(\mathbb{R})}\int_{-\infty}^{x}|y|^{s}\langle y\rangle^{-s}|q(y)|dy\] \[\leq 2^{(1-s)}|h|^{s}e^{||q||_{L^{1}}}||q||_{L^{2,s}(\mathbb{R})}||q||_{L^{1}}.\]
Then, we obtain
\[||\Phi_{-,1}(x,z+h)-\Phi_{-,1}(x,z)||_{L^{2}_{z}(\mathbb{R})}\leq C|h|^{s}||q|| _{L^{2,s}(\mathbb{R})},\ x\leq 0,\]
where \(C\) is a fixed constant for \(||q||_{H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})}\leq\kappa_{0}\), for a preassigned bound \(\kappa_{0}\). This implies that for all \(x\leq 0\) we have
\[||m_{-,1}(x,z)-e_{1}||_{\dot{H}^{s}_{z}(\mathbb{R})}\leq C||q||_{L^{2,s}( \mathbb{R})},\]
for some positive constant \(C\) and above the assume have been proved. The other cases similar. Now we conclude that (2.12) be established. Then \(s_{12}(z)\in H^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})\) because
\[s_{12}(z) =\det(\Phi_{+,2}(0,z),\Phi_{-,2}(0,z))\] \[=\det(\Phi_{+,2}(0,z)-e_{2},\Phi_{-,2}(0,z)-e_{2})\] \[+\det(\Phi_{+,2}(0,z)-e_{2},e_{2})+\det(e_{2},\Phi_{-,2}(0,z)-e_{2 }), \tag{2.20}\]
and we notice that \(H^{s}(\mathbb{R})\) is a Banach algebra with respect to pointwise multiplication for any \(s>\frac{1}{2}\). Similarly, \((s_{11}-1)\in H^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})\)
because
\[s_{11}(z) =\det(\Phi_{-,1}(0,z),\Phi_{+,2}(0,z))\] \[=\det(\Phi_{-,1}(0,z)-e_{1},\Phi_{+,2}(0,z))+\det(e_{1},\Phi_{+,2}( 0,z)-e_{2})+\det(e_{1},e_{2})\] \[=1+\det(\Phi_{-,1}(0,z)-e_{1},\Phi_{+,2}-e_{2})\] \[+\det(e_{1},\Phi_{+,2}-e_{2})+\det(\Phi_{-,1}(0,z)-e_{1},e_{2}). \tag{2.21}\]
We conclude that if \(q\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\cap\,\mathcal{G}\) then \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\), and this shows that we obtain a map \(H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\cap\,\mathcal{G}\ni q\to r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\). We omit the proof that this map is locally Lipschitz.
### A Riemann-Hilbert problem
The collection
\[\mathcal{S}(s,n):= \{r(z)\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R}),\quad(z_{1}, \ldots,z_{n})\in\mathbb{C}^{n}_{+},\] \[(c_{1},\ldots,c_{n})\in\mathbb{C}^{n}_{*}:=\mathbb{C}^{n}\setminus \{0\}\}, \tag{2.22}\]
is called the scattering data for the initial data \(q_{0}\), and the map \(\mathcal{P}:q_{0}\mapsto\mathcal{S}\) is called the (forward) scattering map.
**Proposition 2.3**.: _If \(r(z)\in H^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})\), then for every \(t\in\mathbb{R}\), we have \(r(z,t)=r(z)e^{i(4\beta z^{3}+2\alpha z^{2})t}\in H^{s}(\mathbb{R})\cap L^{2,1} (\mathbb{R})\)._
Proof.: By (2.11), we obtain
\[\|r(\cdot,t)\|_{L^{2,1}(\mathbb{R})} =\|zr(z)e^{i(4\beta z^{3}+2\alpha z^{2})t}\|_{L^{2}(\mathbb{R})}=\|zr(z)\|_{L^{2}(\mathbb{R})}=||r(z)||_{L^{2,1}(\mathbb{R})},\]
and
\[\|r(\cdot,t)\|_{H^{s}(\mathbb{R})} =\|z^{s}\mathcal{F}(r(z)e^{i(4\beta z^{3}+2\alpha z^{2})t})\|_{L^{2}(\mathbb{R})}=(2\pi)^{-1}\|z^{s}\mathcal{F}(r(z))*\mathcal{F}(e^{i(4\beta z^{3}+2\alpha z^{2})t})||_{L^{2}(\mathbb{R})}=(2\pi)^{-1}\|r(z)\|_{H^{s}(\mathbb{R})}.\]
Therefore, we completed the proof of Proposition 2.3.
The essential fact of integrability is that if the potential \(q_{0}\) evolves according to (1.1), then the time evolution of the scattering data is trivial, and the scattering data at time \(t\) is given by the collection
\[\mathcal{D}(t):= \{r(z,t)\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R}),\quad(z_{1},\ldots,z_{n})\in\mathbb{C}^{n}_{+},\] \[(c_{1}e^{i(4\beta z_{1}^{3}+2\alpha z_{1}^{2})t},\ldots,c_{n}e^{i (4\beta z_{n}^{3}+2\alpha z_{n}^{2})t})\in\mathbb{C}^{n}\setminus\{0\}\}. \tag{2.23}\]
The inverse scattering map \(\mathcal{P}^{-1}:\mathcal{D}(t)\mapsto q(x,t)\) seeks to recover the solution of (1.1) from its scattering data.
We construct the function
\[M(z):=\begin{cases}\big{(}\frac{\Phi_{-,1}(x,t;z)}{s_{11}(z)},\Phi_{+,2}(x,t;z) \big{)},\quad z\in\mathbb{C}^{+},\\ \\ (\Phi_{+,1}(x,t;z),\frac{\Phi_{-,2}(x,t;z)}{s_{22}(z)}\big{)},\quad z\in\mathbb{ C}^{-},\end{cases} \tag{2.24}\]
which satisfies the following RH problem.
**RHP 2.1**.: _Find an analytic function \(M(z):\mathbb{C}\setminus(\mathbb{R}\,\cup\,\mathcal{Z}\,\cup\,\mathcal{Z}^{*} )\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. \(M(z)\) _takes continuous boundary values_ \(M_{\pm}(z)\) _which satisfy the jump relation_ \[M_{+}(z)=M_{-}(z)V(z),\] _where_ \[V(z)=\begin{pmatrix}1+|r(z)|^{2}&r^{*}(z)e^{-2it\theta(z)}\\ r(z)e^{2it\theta(z)}&1\end{pmatrix}.\] (2.25)
3. \(M(z)\) _has simple poles at each_ \(\mathcal{Z}:=\mathcal{Z}_{+}\cup\mathcal{Z}_{-}\) _at which_ \[\underset{z=z_{k}}{\text{Res}}M(z)=\underset{z\to z_{k}}{\text{lim}}M(z) \begin{pmatrix}0&0\\ c_{k}e^{-2it\theta(z)}&0\end{pmatrix},\] (2.26) \[\underset{z=z_{k}^{*}}{\text{Res}}M(z)=\underset{z\to z_{k}^{*}}{ \text{lim}}M(z)\begin{pmatrix}0&-c_{k}^{*}e^{2it\theta(z^{*})}\\ 0&0\end{pmatrix}.\] (2.27)
It's a simple consequence of Liouville's theorem that if a solution exists, it is unique. The existence of solutions of RHP 2.1 for any \((x,t)\in\mathbb{R}\times\mathbb{R}\) follows by means of Zhou's vanishing lemma argument [23]. Expanding this solution as
\[M(z)=I+\frac{M^{(1)}(x,t)}{z}+\mathcal{O}(z^{-2}),\ z\to\infty,\]
and one can find the solution
\[M(x,z)=I-\sum_{\zeta\in\mathcal{Z}}\frac{M_{x}(\zeta)V(\zeta)}{\zeta-z}+\frac {1}{2\pi i}\int_{\mathbb{R}}\frac{M_{x}(V(\zeta)-I)}{\zeta-z}d\zeta, \tag{2.28}\]
where \(M_{x}(z)\) is defined for \(z\in\mathbb{R}\cup\mathcal{Z}\) in the space \(M_{2\times 2}(\mathbb{C})\) of complex \(2\times 2\) matrices and satisfies the system (2.30) and (2.31) written below. It follows that the solution of (1.1) is given by
\[q(x,t)=2i\lim_{z\to\infty}(zM(x,t;z))_{12}=2i(M^{(1)})_{12}. \tag{2.29}\]
**Lemma 2.4**.: _Fix \(s\in(\frac{1}{2},1]\) and suppose that \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\). Then, for any \(x\in\mathbb{R}\) there exists and unique a solution \(M_{x}:\mathbb{R}\cup\mathcal{Z}\to M_{2\times 2}(\mathbb{C})\) of the following system of integral and algebraic equation:_
\[M_{x}(z)=I-\sum_{\zeta\in\mathcal{Z}}\!\frac{M_{x}(\zeta)V(\zeta)}{\zeta-z}+ \underset{\epsilon\to 0}{\text{lim}}\int_{\mathbb{R}}\frac{M_{x}(\zeta)(V( \zeta)-I)}{\zeta-(z-i\epsilon)}d\zeta,\quad z\in\mathbb{R}, \tag{2.30}\]
_and_
\[M_{x}(z)=I-\sum_{\zeta\in\mathcal{Z}\setminus\{z\}}\!\frac{M_{x}(\zeta)V( \zeta)}{\zeta-z}+\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{M_{x}(\zeta)(V(\zeta)- I)}{\zeta-z}d\zeta,\quad z\in\mathcal{Z}, \tag{2.31}\]
_such that \((M_{x}(z)-I)\in L^{2}_{z}(\mathbb{R})\)._
Proof.:
The result of Lemma 2.4 implies that the map \(\mathcal{G}_{N}\cap H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\to\mathcal{S}(s,n)\) is one-to-one. The result is due to Zhou [24]; we state it here for completeness. Now we consider the case of pure radiation solutions of the Hirota equation with \(N=0\). Lemma 2.5 establishes the fact that the map \(\mathcal{G}_{0}\cap H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\to\mathcal{S}(s,0)\) is not only one-to-one but also onto.
**Lemma 2.5**.: _Let \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\), \(\mathcal{Z}=\varnothing\) and consider the potential \(q\) defined by the reconstructing formula (2.29). Then \(q\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\). Furthermore, for any positive \(\kappa_{0}\), there is a constant \(C\) such that for \(||r||_{L^{\infty}(\mathbb{R})}\leq\kappa_{0}\), we have_
\[||q||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\leq C||r||_{H^{s}(\mathbb{R })\cap L^{2,1}(\mathbb{R})}.\]
Proof.: We now construct the inverse scattering transform from \(H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\) to \(H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\). Let \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\) be given. The inverse problem is formulated as RH Problem 2.1.
We factorize the jump matrix \(V(z)\) on the line \(z\in(-\infty,z_{1})\cup(z_{2},+\infty)\) where
\[V(z)=V_{-}^{-1}(z)V_{+}(z)=\begin{pmatrix}1&r^{*}(z)e^{-2it\theta(z)}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ r(z)e^{2it\theta}&1\end{pmatrix}. \tag{2.32}\]
This factorization is adapted to studying the decay behavior of \(q\) as \(x\to-\infty\), which we prove next; the decay behavior of \(q\) as \(x\to\infty\) can be obtained in a similar manner.
Let
\[C_{w_{x}}h:=C^{+}(hw_{x-})+C^{-}(hw_{x+}),\]
where \(w_{x\pm}:=\pm(V_{\pm}-I).\) Then we consider a function \(\mu\in I+L^{2}(\mathbb{R})\) such that
\[(I-C_{w})(\mu_{x})(z)=I. \tag{2.33}\]
In the case \(c_{j}=0\), setting \(w(\zeta):=V_{+}(\zeta)-V_{-}(\zeta)\), we recover the potential \(q(x,t)\) from (2.29), where the solution can also be expressed as
\[M(x,z)=I+C^{\pm}(\mu_{x}(\zeta)w(\zeta))=I+\frac{1}{2\pi i}\int_{\mathbb{R}} \frac{\mu_{x}(\zeta)w(\zeta)}{\zeta-z}d\zeta. \tag{2.34}\]
Then differentiating the jump relation \(M_{+}(z)=M_{-}(z)V(z)\), we obtain
\[\frac{d}{dx}M_{+}+iz[M_{+},\sigma_{3}]=(\frac{d}{dx}M_{-}+iz[M_{-},\sigma_{3}] )V.\]
A simple calculation shows that
\[iz[M_{\pm},\sigma_{3}]=Q+C^{\pm}(i[\mu_{x}w,\sigma_{3}]).\]
Thus \(M_{\pm}\) solve the differential equation \(\frac{d}{dx}M_{\pm}=iz[\sigma_{3},M_{\pm}]+QM_{\pm}\). Set \(\mathcal{I}(r)=q\). The following results show that \(\mathcal{I}\) maps \(H^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})\) to \(H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\). For a fixed \(c_{s}\) and \(x\leq 0\), using the argument in Lemma 3.4 [21] we obtain
\[||C^{\pm}w(z)||_{L^{2}_{z}(\mathbb{R})}\leq c_{s}\langle x\rangle^{-s}||r||_{H ^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})}, \tag{2.35}\]
which follows from the definition of the operator \(C_{w}\); from this we get
\[||C_{w}I||_{L^{2}(\mathbb{R})}\leq 2c_{s}\langle x\rangle^{-s}||r||_{H^{s}(\mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})}. \tag{2.36}\]
Then we take into account the inequality
\[\mu_{x}-I=(I-C_{w})^{-1}C_{w}I, \tag{2.37}\]
and correspondingly
\[||\mu_{x}-I||_{L^{2}_{z}}\leq||(I-C_{w})^{-1}||_{L^{2}_{z}\to L^{2}_{z}}||C_{w }I||_{L^{2}_{z}}. \tag{2.38}\]
For a fixed \(C\), from Lemma 5.1 of [25] we obtain \(||(I-C_{w})^{-1}||_{L^{2}_{z}\to L^{2}_{z}}\leq C\langle\rho\rangle^{2}\) where \(\rho:=||r||_{L^{\infty}(\mathbb{R})}\). We conclude that for \(x\leq 0\) and for any \(\kappa_{0}\) there is a constant \(C\) such that
\[||\mu_{x}-I||_{L^{2}_{z}}\leq C\langle x\rangle^{-s}||r||_{H^{s}(\mathbb{R})\, \cap\,L^{2,s}(\mathbb{R})}, \tag{2.39}\]
for \(\rho\leq\kappa_{0}\). As above, we write
\[M-I=\int_{\mathbb{R}}((I-C_{w})^{-1}(V_{+}-V_{-}))dz=\int_{1}+\int_{2}+\int_{3},\]
where
\[\int_{1}=\int(V_{+}-V_{-}),\qquad\qquad\int_{2}=\int(C_{w}I)(V_{+}-V_{-}),\]
\[\int_{3}=\int(C_{w}(I-C_{w})^{-1}C_{w}I)(V_{+}-V_{-})=\int(C_{w}(\mu-I))(V_{+}-V_{-}).\]
We remark that for calculating \(q\), the estimate of \(\int_{2}\) is not needed because it is diagonal and \(\hat{\sigma_{3}}\int_{2}=0\). But the estimate is useful for other problems. Clearly, \(\int(V_{+}-V_{-})\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\) by the Fourier transform. Using the triangularity of \(V_{\pm}\), the fact that \(C^{+}-C^{-}=1\), Cauchy's theorem, and Lemma 3.4 of [21], we obtain for some \(c>0\)
\[|\int_{2}| =\Big{|}\int[(C^{+}(I-V_{-}))(V_{+}-I)+(C^{-}(V_{+}-I))(I-V_{-})]\Big{|}\] \[=\Big{|}\int[(C^{+}(I-V_{-}))(C^{-}(I-V_{+}))+(C^{-}(V_{+}-I))(C^{+}(I-V_{-}))]\Big{|}\] \[\leq c(1+x^{2})^{-1},\]
and
\[||\mu-I||_{L^{2}}=||(I-C_{w})^{-1}C_{w}I||_{L^{2}}\leq c(1+x^{2})^{-1/2},\]
we have
\[|\int_{3}| =\Big{|}\int[(C^{+}((\mu-I)(I-V_{-})))(V_{+}-I)+(C^{-}((\mu-I)(V_{+}-I)))(I-V_{-})]\Big{|}\] \[\leq c(1+x^{2})^{-1}.\]
According to Theorem 3.6 [21], we have \(q\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\), moreover,
\[||q||_{H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})}\leq C||r||_{H^{s}( \mathbb{R})\,\cap\,L^{2,1}(\mathbb{R})}.\]
For the case when \(z\in(z_{1},z_{2})\), we consider the second decomposition
\[V=\begin{pmatrix}1&0\\ \frac{r(z)e^{2it\theta}}{1+|r(z)|^{2}}&1\end{pmatrix}\begin{pmatrix}1+|r(z)|^ {2}&0\\ 0&\frac{1}{1+|r(z)|^{2}}\end{pmatrix}\begin{pmatrix}1&\frac{r^{*}(z)e^{-2it \theta}}{1+|r(z)|^{2}}\\ 0&1\end{pmatrix}.\]
Further, the RH problem 2.1 can be changed into the RH problem 3.1 by using the transformation \(M^{(1)}=M\delta^{-\sigma_{3}}\). Correspondingly we acquire the estimates
\[||\tilde{q}||_{H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})}\leq C||\tilde{r}|| _{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\leq c||r||_{H^{s}(\mathbb{R})\cap L ^{2,1}(\mathbb{R})},\]
for the function \(\tilde{q}=\mathcal{I}(\tilde{r})\) with \(\tilde{r}:=r\delta_{+}\delta_{-}\), and for a fixed \(c\) when \(\rho\leq\kappa_{0}\), by proceeding as above. Finally, \(\tilde{q}=q\); for more details see [21].
## 3 Dispersion for pure radiation solutions
In this section, we consider the elements of \(\mathcal{G}\) with \(\mathcal{Z}=\varnothing\), which generate pure radiation solutions of the Hirota equation.
### A regular RH problem
From the function (2.4), we get two stationary points
\[z_{1} =\frac{-\alpha-\sqrt{\alpha^{2}-3\beta x/t}}{6\beta}, \tag{3.1}\] \[z_{2} =\frac{-\alpha+\sqrt{\alpha^{2}-3\beta x/t}}{6\beta}. \tag{3.2}\]
The signature table of \(\mathrm{Re}(it\theta(z))\) is given in Figure 3.1.
First we consider the scalar factorization problem
\[\begin{cases}\delta_{+}(z)=\delta_{-}(z)(1+|r(z)|^{2}),&z\in(z_{1},z_{2}),\\ \delta_{+}(z)=\delta_{-}(z),&\mathbb{R}\setminus(z_{1},z_{2}),\\ \delta(z)\to 1,&z\to\infty,\end{cases} \tag{3.3}\]
which admits a solution by the Plemelj formula
\[\delta(z)=\exp\left[i\int_{z_{1}}^{z_{2}}\frac{\nu(s)}{s-z}ds\right], \tag{3.4}\]
where
\[\nu(s)=-\frac{1}{2\pi}\log(1+|r(s)|^{2}). \tag{3.5}\]
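One can check directly that (3.4) solves (3.3): by the Plemelj formula, for \(z\in(z_{1},z_{2})\) the boundary values of the exponent satisfy
\[\log\delta_{\pm}(z)=i\,\mathrm{P.V.}\int_{z_{1}}^{z_{2}}\frac{\nu(s)}{s-z}ds\mp\pi\nu(z),\qquad\text{so}\qquad\frac{\delta_{+}(z)}{\delta_{-}(z)}=e^{-2\pi\nu(z)}=1+|r(z)|^{2}\]
by the choice (3.5) of \(\nu\), while for \(z\in\mathbb{R}\setminus[z_{1},z_{2}]\) the integrand is continuous across the contour and \(\delta_{+}=\delta_{-}\).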
Figure 3.1: The function \(e^{2it\theta}\) decays in the yellow domains and grows in the white domains.
Define \(z_{0}=\frac{|z_{2}-z_{1}|}{2}\); then the function \(\delta(z)\) has the expansion
\[\delta(z) =\exp\Big{(}i\int_{z_{1}}^{z_{2}}\frac{\nu(s)-\chi(s)\nu(z_{2})(s-z_{2}+1)}{s-z}ds+i\nu(z_{2})\int_{z_{0}}^{z_{2}}\frac{s-z_{2}+1}{s-z}ds\Big{)}\] \[=\exp\{i\beta(z,z_{2})+i\nu(z_{2})+i\nu(z_{2})[(z-z_{2})\log(z-z_{2})-(z-z_{2}+1)\log(z-z_{2}+1)]+i\nu(z_{2})\log(z-z_{2})\}\] \[=e^{i\nu(z_{2})+i\beta(z,z_{2})}(z-z_{2})^{i\nu(z_{2})}e^{i\nu(z_{2})[(z-z_{2})\log(z-z_{2})-(z-z_{2}+1)\log(z-z_{2}+1)]},\]
where \(\chi(s)\) is the characteristic function of \((z_{0},z_{2})\) and
\[\beta(z,z_{2})=\int_{z_{1}}^{z_{2}}\frac{\nu(s)-\nu(z_{2})\chi(s)(s-z_{2}+1)}{ s-z}ds.\]
Let \(\rho:=||r||_{L^{\infty}(\mathbb{R})}\). The function \(\delta(z)\) satisfies the following properties:
* when \(z\notin[z_{1},z_{2}]\), we obtain \(\delta(z)\delta^{*}(z^{*})=1\) and \(\langle\rho\rangle^{-1}\leq|\delta(z)|\leq\langle\rho\rangle\);
* For \(\mp\mathrm{Im}z>0\) we have \(|\delta^{\pm}(z)|\leq 1\).
Next we just consider the stationary point \(z=z_{2}\), and the point \(z=z_{1}\) is similar.
**Lemma 3.1**.: _Define \(L_{\phi}=\tilde{z}+e^{-i\phi}\mathbb{R}=\{z=\tilde{z}+e^{-i\phi}q:q\in\mathbb{R}\}\). For \(s\in(\frac{1}{2},1]\) there is a fixed \(C(\rho,s)\) such that for any \(\tilde{z}\in\mathbb{R}\) and any \(\phi\in(0,\pi)\)_
\[||\beta(e^{-i\phi}\cdot,\tilde{z})||_{H^{s}(\mathbb{R})}\leq C( \rho,s)||r||_{H^{s}(\mathbb{R})}, \tag{3.6}\] \[|\beta(z,\tilde{z})-\beta(\tilde{z},\tilde{z})|\leq C(\rho,s)||r|| _{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}|z-\tilde{z}|^{s-\frac{1}{2}},\ z \in L_{\phi}. \tag{3.7}\]
Proof.: For \(\tau=0,1\) we have
\[||C_{\mathbb{R}}f||_{H^{\tau}(L_{\phi})}\leq C_{\tau}||f||_{H^{\tau}(\mathbb{ R})},\ \tau=0,1\]
which are proved in Lemma 23.3 of [27]. By interpolation we get the case \(\tau=s\) with \(s\in(0,1)\). The estimate (3.7) can be obtained from (3.6) and the following elementary estimate for \(s\in(1/2,1]\):
\[|f(x)-f(y)|\leq C_{s}||f||_{H^{s}(\mathbb{R})}|x-y|^{s-\frac{1}{2}}, \tag{3.8}\]
for all \(x,y\in\mathbb{R}\) and \(f\in H^{s}(\mathbb{R})\), for a fixed \(C_{s}\). Note that
\[f(x+h)-f(x)=\frac{1}{\sqrt{2\pi}}\int e^{ix\xi}(e^{ih\xi}-1)\hat{f}(\xi)d\xi,\]
Then for any \(\kappa>0\) we have for a fixed \(C_{s}\)
\[|f(x+h)-f(x)| \leq\frac{||f||_{H^{s}}}{\sqrt{2\pi}}\Big{[}\big{(}|h|\int_{|\xi| \leq\kappa}|\xi|^{2-2s}d\xi\big{)}^{\frac{1}{2}}+\big{(}\int_{|\xi|\geq\kappa}| \xi|^{-2s}d\xi\big{)}^{\frac{1}{2}}\Big{]}\] \[\leq C_{s}(|h|\kappa^{\frac{3-2s}{2}}+\kappa^{\frac{1-2s}{2}})||f ||_{H^{s}}.\]
By the Plancherel formula we have
\[|f(x+h)-f(x)| \leq C_{s}\frac{||f||_{L^{2,1}}}{\sqrt{2\pi}}\Big{[}\big{(}|h|\int _{|\xi|\leq\kappa}|\xi|^{1-2s}d\xi\big{)}^{\frac{1}{2}}+\big{(}\int_{|\xi|\geq \kappa}|\xi|^{-2s}d\xi\big{)}^{\frac{1}{2}}\Big{]}\] \[\leq C_{s}\kappa^{\frac{1-2s}{2}}||f||_{L^{2,1}}, \tag{3.9}\]
which equals \(2C_{s}|h|^{s-\frac{1}{2}}||f||_{H^{s}}\) for \(\kappa=|h|^{-1}\).
We define a new unknown function
\[M^{(1)}(z)=M(z)\delta^{-\sigma_{3}}(z), \tag{3.10}\]
which satisfies the following RH problem.
**RHP 3.1**.: _Find an analytic function \(M^{(1)}(z):\mathbb{C}\setminus\mathbb{R}\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M^{(1)}(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. \(M^{(1)}(z)\) _takes continuous boundary values_ \(M^{(1)}_{\pm}(z)\) _which satisfy the jump relation_ \[M^{(1)}_{+}(z)=M^{(1)}_{-}(z)V^{(1)}(z),\] _where_ \[V^{(1)}(z)=\delta^{\sigma_{3}}_{-}V(z)\delta^{-\sigma_{3}}_{+}\] \[=\begin{cases}\begin{pmatrix}1&r^{*}(z)\delta^{2}(z)e^{-2it\theta(z)}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ r(z)\delta^{-2}(z)e^{2it\theta}&1\end{pmatrix},&z\in(-\infty,z_{1})\cup(z_{2},\infty),\\ \\ \begin{pmatrix}1&0\\ \frac{r(z)\delta^{-2}e^{2it\theta}}{1+|r(z)|^{2}}&1\end{pmatrix}\begin{pmatrix}1&\frac{r^{*}(z)\delta^{2}e^{-2it\theta}}{1+|r(z)|^{2}}\\ 0&1\end{pmatrix},&z\in(z_{1},z_{2}).\end{cases}\]
### A mixed RH Problem and its decomposition
In this section we construct a mixed RH problem by following closely the arguments of [26, 21, 29].
Fix a smooth cut-off function of compact support, with \(\chi(x)\geq 0\) for any \(x\) and \(\int\chi dx=1\). Let \(\chi_{\epsilon}(x)=\epsilon^{-1}\chi(\epsilon^{-1}x)\) for \(\epsilon\neq 0\). We define \(\mathbf{r}(z)\) as follows
\[\mathbf{r}(z)=\begin{cases}r(\mathrm{Re}z),&\text{for }\mathrm{Im}z=0,\\ \chi_{\mathrm{Im}z}*r(\mathrm{Re}z),&\text{for }\mathrm{Im}z\neq 0.\end{cases} \tag{3.11}\]
For convenience of expression, for \(j=1,2\) we define rays
\[L_{j} =\{z_{j}+\mathbb{R}^{+}e^{i\frac{\pi}{4}}\}\cup\{z_{j}+\mathbb{R} ^{+}e^{i\frac{5\pi}{4}}\}=\Sigma_{j2}\cup\Sigma_{j3}, \tag{3.12}\] \[\overline{L}_{j} =\{z_{j}+\mathbb{R}^{+}e^{-i\frac{\pi}{4}}\}\cup\{z_{j}+\mathbb{R }^{+}e^{i\frac{3\pi}{4}}\}=\Sigma_{j1}\cup\Sigma_{j4},\] (3.13) \[I_{12} =(z_{1},0),\quad I_{22}=(0,z_{2}),\ \ I_{11}=(-\infty,z_{1}),\ \ I_{21}=(z_{2},\infty).\]
These rays divide the complex plane \(\mathbb{C}\) into ten domains \(\Omega_{ij},i=1,2;j=1,3,4,6;\Omega_{2},\Omega_{5}\). See Figure 3.2.
We define functions \(R_{ji}:\overline{\Omega}_{ji}\to\mathbb{C}\) (\(j=1,2;\ i=1,3,4,6\)) with boundary values satisfying
\[R_{j1}(z) =\begin{cases}r(z),&z\in I_{j1},\\ \hat{r}_{0}(z-z_{j})^{-2i\nu(z_{j})}\delta^{2},&z\in\Sigma_{j1},\end{cases} \tag{3.14}\] \[R_{j3}(z) =\begin{cases}\frac{r^{*}(z)}{1+|r(z)|^{2}},&z\in I_{j2},\\ \frac{\hat{r}_{0}^{*}(z)}{1+|r(z_{j})|^{2}}(z-z_{j})^{2i\nu(z_{j})}\delta^{-2},&z\in\Sigma_{j3},\end{cases}\] (3.15) \[R_{j4}(z) =\begin{cases}\frac{r(z)}{1+|r(z)|^{2}},&z\in I_{j4},\\ \frac{\hat{r}_{0}(z)}{1+|r(z)|^{2}}(z-z_{j})^{-2i\nu(z_{j})}\delta^{2},&z\in \Sigma_{j4},\end{cases}\] (3.16) \[R_{j6}(z) =\begin{cases}r^{*}(z),&z\in I_{j6},\\ \hat{r}_{0}^{*}(z-z_{j})^{2i\nu(z_{j})}\delta^{-2},&z\in\Sigma_{j6},\end{cases} \tag{3.17}\]
where \(\hat{r}_{0}=r(z_{j})e^{-2i\nu(z_{j})-2\beta(z_{j},z_{j})}\).
**Proposition 3.2**.: _Fix \(\lambda_{0}>0\) and assume \(||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}<\lambda_{0}\) for a preassigned \(s\in(1/2,1]\). Then for \(j=1,2\) and \(i=1,3,4,6\) there exist functions \(R_{ji}:\overline{\Omega_{ji}}\to\mathbb{C}\) such that for all \(z\in\Omega_{ji}\), with \(\psi(x)=-\chi(x)-x\chi^{\prime}(x)\), we have for a fixed \(c\)_
\[|\overline{\partial}R_{ji}(z)| \leq c||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}|z-z_{j}|^ {s-\frac{3}{2}}\] \[+c|\partial_{Rez}\mathbf{r}(z)|+c|(\text{Im }z)^{-1}\psi_{\text{Im}z}*r( \text{Re z})|. \tag{3.18}\]
Proof.: We consider the stationary point at \(z_{2}\); the case at \(z_{1}\) is similar. For \(\zeta\in\Omega_{13}\) we use \(0<\arg(\zeta-z_{1})<\pi/4\) and set \(G(\zeta):=g(\arg(\zeta-z_{1}))\), where \(g:[0,\pi/4]\to[0,1]\) is a smooth function such that \(g(\psi)=1\) for all \(\psi\in[0,\pi/6]\) and \(g(\pi/4)=0\). The function \(G\) is continuous in \(\Omega_{13}\setminus\{z_{1}\}\) and fulfills
\[G(\zeta)=\begin{cases}1,&\text{for}\,\zeta\in I_{12},\\ 0,&\text{for}\,\zeta\in\Sigma_{13},\end{cases} \tag{3.19}\]
Moreover, we can find a constant \(c\) such that
\[|\overline{\partial}G(\zeta)|\leq c|\zeta-z_{1}|^{-1}.\]
Consider the functions \(R_{2i}(z)\), which can be defined explicitly by setting \(z-z_{2}=u+iv\). For \(i=1,3\) we obtain
\[R_{21}(z) =G(\zeta)\mathbf{r}(z)+(1-G(\zeta))f_{21}(u+iv),\] \[R_{23}(z) =\cos(2(\arg(z-z_{0})-\pi))\frac{\mathbf{r}^{*}(z)}{1+|\mathbf{r}(z)|^{2}}+(1-\cos(2(\arg(z-z_{0})-\pi)))f_{23}(u+iv). \tag{3.20}\]
The functions for \(i=4,6\) can be defined similarly. We now consider the boundary values, and treat the case \(i=1\) only. We have
\[\overline{\partial}R_{21}=(\mathbf{r}-f_{21})\overline{\partial}G+\frac{G}{2}(\chi_{\mathrm{Im}z}*r^{\prime}(\mathrm{Re}z)+i(\mathrm{Im}z)^{-1}\psi_{\mathrm{Im}z}*r(\mathrm{Re}z)),\]
with \(\psi(x)=-\chi(x)-x\chi^{\prime}(x)\) and \(f_{21}=\frac{\hat{r}_{0}^{*}(z)}{1+|r(z_{1})|^{2}}(z-z_{1})^{2i\nu(z_{1})}\delta^{-2}(1-\Xi(\zeta))\). Notice that \(\hat{\psi}(0)=0\). Then we have the bound
\[|\overline{\partial}R_{21}| \leq|\chi_{\mathrm{Im}z}*r^{\prime}(\mathrm{Re}z)|+|(\mathrm{Im}z)^{-1}\psi_{\mathrm{Im}z}*r(\mathrm{Re}z)|+\frac{c}{|z-z_{2}|}\big{(}|r(z)-r(z_{2})|+|f_{21}(z)-r(z_{2})|\big{)}. \tag{3.21}\]
By (3.8) we acquire that
\[|r(z)-r(z_{2})|\leq C|z-z_{2}|^{s-\frac{1}{2}}||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\]
To get the desired estimate for \(|\overline{\partial}R_{21}|\) we need to bound the last term. We have
\[f_{21}(z)-r(z_{2})=r(z_{2})\times[\exp(2i\nu(z_{2})((z-z_{2})\log(z-z_{2})-(z-z_{2}+1)\log(z-z_{2}+1))+2(\beta(z,z_{2})-\beta(z_{2},z_{2})))-1]. \tag{3.22}\]
According to Lemma 3.1 we have
\[|\beta(z,z_{2})-\beta(z_{2},z_{2})|\leq C(\rho,s)||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}|z-z_{2}|^{s-\frac{1}{2}},\]
and notice that both \((z-z_{2})\log(z-z_{2})\) and \((z-z_{2}+1)\log(z-z_{2}+1)\) are \(\mathcal{O}(|z-z_{2}|^{s-\frac{1}{2}})\) when \(z\to z_{2}\). We get the desired estimate for \(|\overline{\partial}R_{21}|\).
Based on Proposition 3.2, we define a new unknown function
\[\mathcal{R}^{(2)}=\begin{cases}\begin{pmatrix}1&0\\ R_{j1}e^{2it\theta}\delta^{-2}&1\end{pmatrix}^{-1},&z\in\Omega_{j1},\\ \begin{pmatrix}1&R_{j3}e^{-2it\theta}\delta^{2}\\ 0&1\end{pmatrix}^{-1},&z\in\Omega_{j3},\\ \begin{pmatrix}1&0\\ R_{j4}e^{2it\theta}\delta^{-2}&1\end{pmatrix},&z\in\Omega_{j4},\\ \begin{pmatrix}1&R_{j6}e^{-2it\theta}\delta^{2}\\ 0&1\end{pmatrix},&z\in\Omega_{j6},\\ \begin{pmatrix}1&0\\ 0&1\end{pmatrix},&z\in\Omega_{2}\cup\Omega_{5}.\end{cases} \tag{3.23}\]
We introduce jump contour
\[\Sigma^{(2)}=L_{1}\cup\overline{L}_{1}\cup L_{2}\cup\overline{L}_{2}\cup l_{1} \cup l_{2},\]
then it can be shown that
\[M^{(2)}(z)=M^{(1)}(z)\mathcal{R}^{(2)} \tag{3.24}\]
satisfies the following RH problem.
**RHP 3.2**.: _Find a continuous function \(M^{(2)}(z):\mathbb{C}\setminus\Sigma^{(2)}\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M^{(2)}(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. _For_ \(z\in\Sigma^{(2)}\)_, the boundary values satisfy the jump relation_ \[M^{(2)}_{+}(z)=M^{(2)}_{-}(z)V^{(2)}(z),\] _where_ \[V^{(2)}(z)=\begin{cases}\begin{pmatrix}1&0\\ R_{j1}(z)\delta^{-2}(z)e^{2it\theta}&1\end{pmatrix},&z\in\Sigma_{j1},\\ \begin{pmatrix}1&-R_{j3}(z)\delta^{2}e^{-2it\theta}\\ 0&1\end{pmatrix},&z\in\Sigma_{j3},\\ \begin{pmatrix}1&-R_{j6}(z)\delta^{2}e^{-2it\theta}\\ 0&1\end{pmatrix},&z\in\Sigma_{j6},\\ \begin{pmatrix}1&(R_{23}-R_{13})e^{-2it\theta}\\ 0&1\end{pmatrix},&z\in l_{1},\\ \begin{pmatrix}1&0\\ (R_{14}-R_{24})e^{2it\theta}\delta^{-2}&1\end{pmatrix},&z\in l_{2}.\end{cases} \tag{3.25}\]
3. _For_ \(\mathbb{C}\setminus\Sigma^{(2)}\) _we have_ \[\overline{\partial}M^{(2)}(z)=M^{(2)}(z)\overline{\partial}\mathcal{R}^{(2)}(z),\] _where_ \[\overline{\partial}\mathcal{R}^{(2)}=\begin{cases}\begin{pmatrix}1&0\\ -\overline{\partial}R_{j1}(z)e^{2it\theta}\delta^{-2}&1\end{pmatrix},&z\in \Omega_{j1},\\ \\ \begin{pmatrix}1&-\overline{\partial}R_{j3}(z)e^{-2it\theta}\delta^{2}\\ 0&1\end{pmatrix},&z\in\Omega_{j3},\\ \\ \begin{pmatrix}1&\overline{\partial}R_{j6}(z)e^{-2it\theta}\delta^{2}\\ 0&1\end{pmatrix},&z\in\Omega_{j6},\\ \\ 0,&z\in\Omega_{j2}\cup\Omega_{j5}.\end{cases}\] (3.26)
Next we decompose \(M^{(2)}\) in the form
\[M^{(2)}(z)=M^{(3)}(z)M^{rhp}(z), \tag{3.27}\]
where \(M^{rhp}(z)\) is the solution of the pure RH problem obtained from \(M^{(2)}(z)\) by setting \(\overline{\partial}\mathcal{R}^{(2)}\equiv 0,\ z\in\mathbb{C}\setminus\Sigma^{(2)}\); it satisfies the following RH problem.
**RHP 3.3**.: _Find an analytic function \(M^{rhp}(z):\mathbb{C}\setminus\Sigma^{(2)}\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M^{rhp}(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. _For_ \(z\in\Sigma^{(2)}\)_, the boundary values satisfy the jump relation_ \[M^{rhp}_{+}(z)=M^{rhp}_{-}(z)V^{(2)}(z),\] _where_ \(V^{(2)}(z)\) _is given by (_3.25_)._
While the \(M^{(3)}(z)\) defined by (3.27) satisfies the following RH Problem
**RHP 3.4**.: _Find a continuous matrix-valued function \(M^{(3)}(z):\mathbb{C}\to SL_{2}(\mathbb{C})\) with the following properties:_
1. \(M^{(3)}(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. _For_ \(z\in\mathbb{C}\)_, we have_ \[\overline{\partial}M^{(3)}(z)=M^{(3)}(z)W^{(3)}(z),\] _where_ \(W^{(3)}(z):=M^{rhp}(z)\overline{\partial}\mathcal{R}^{(2)}(z)M^{rhp}(z)^{-1}\) _and_ \(\overline{\partial}\mathcal{R}^{(2)}(z)\) _is defined by (_3.26_)._
### A solvable model near \(z_{1}\) and \(z_{2}\)
We introduce jump contour
\[\Sigma^{(3)}=L_{1}\cup\overline{L}_{1}\cup L_{2}\cup\overline{L}_{2},\]
and consider the following RH problem
**RHP 3.5**.: _Find an analytic function \(M^{loc}(z):\mathbb{C}\setminus\Sigma^{(3)}\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M^{loc}(z)=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. _For_ \(z\in\Sigma^{(3)}\)_, the boundary values satisfy the jump relation_ \[M^{loc}_{+}(z)=M^{loc}_{-}(z)V^{(2)}(z),\] _where_ \(V^{(2)}(z)\) _is given by (_3.25_)._
This RH problem is solvable and its solution is given by
\[M^{loc}(z)=M^{pc}(\zeta_{1})+M^{pc}(\zeta_{2}),\]
where \(M^{pc}(\zeta_{j})\) is the solution of the parabolic cylinder model in Appendix A, and \(\zeta_{j}=\sqrt{(-1)^{j}8t(\alpha+6\beta z_{j})}(z-z_{j}),\ j=1,2\).
Direct calculation shows that there is constant \(c>0\) such that
\[\|V_{l}-I\|_{L^{\infty}(l_{1}\cup l_{2})}\lesssim e^{-ct},\quad t\to\infty.\]
Further, by a small-norm RH problem argument, it can be shown that
**Proposition 3.3**.: \[M^{rhp}(z)=M^{loc}(z)(I+O(e^{-ct})).\]
### The \(\overline{\partial}\) argument
It is well understood that the solution to the \(\overline{\partial}\) problem in RHP 3.4 can be given by the following Cauchy integral
\[M^{(3)}(z)=I+\frac{1}{\pi}\int_{\mathbb{C}}\frac{M^{(3)}(\varsigma)W^{(3)}(\varsigma)}{\varsigma-z}dA(\varsigma). \tag{3.28}\]
Let \(||r||_{H^{s}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\leq\lambda_{0}\), define the following operator
\[JH(z):=\frac{1}{\pi}\int_{\mathbb{C}}\frac{H(\varsigma)W(\varsigma)}{\varsigma -z}dA(\varsigma), \tag{3.29}\]
then it can be shown that
**Lemma 3.4**.: _The operator \(J:L^{\infty}(\mathbb{C})\to L^{\infty}(\mathbb{C})\cap C^{0}(\mathbb{C})\) is bounded, and there exists a \(C=C(\lambda_{0})\) such that_
\[||J||_{L^{\infty}(\mathbb{C})\to L^{\infty}(\mathbb{C})}\leq Ct^{\frac{1-2s}{ 4}},\,\,\,t>0. \tag{3.30}\]
Proof.: For convenience, the proof below only considers the case \(z=z_{2}\). By the definition of the operator \(J\), for \(H\in L^{\infty}(\Omega_{21})\) we have
\[|JH(z)|\leq||H||_{L^{\infty}(\Omega_{21})}||\delta^{-2}||_{L^{\infty}(\Omega_ {21})}\int_{\Omega_{21}}\frac{|\overline{\partial}R_{21}(\varsigma)e^{2it \theta}|}{|\varsigma-z|}dA(\varsigma), \tag{3.31}\]
and from the properties of \(\delta(z)\) we have \(||\delta^{-2}||_{L^{\infty}(\Omega_{21})}\leq 1\). For \(j=1,2,3\) we bound the integrals \(I_{j}\) corresponding to the three terms in (3.18), where
\[I_{j}=\int_{\Omega_{21}}\frac{|X_{j}(\zeta)e^{2it\theta}|}{| \varsigma-z|}dA(\varsigma),\qquad X_{1}(z):=\partial_{\mathbbm{R}\mathrm{e}z} \mathbf{r}(z),\\ X_{2}(z):=||r||_{H^{s}(\mathbb{R})}|z-z_{2}|^{s-\frac{3}{2}}, \quad X_{3}(z):=(\mathrm{Im}z)^{-1}\psi_{\mathbbm{I}\mathrm{m}z}*r(\mathrm{Re} z). \tag{3.32}\]
Following the proof in Section 2.4 of [29] and recalling the expression of \(z_{2}\), we set \(\varsigma-z_{2}=u+iv\) and \(z-z_{2}=\alpha+i\beta\); the region \(\Omega_{21}\) corresponds to \(u\geq v\geq 0\), and we have
\[I_{1} =\int_{\Omega_{21}}\frac{|\partial_{u}\mathbf{r}(\varsigma)|e^{-8t(\alpha+6\beta z_{2})uv}}{|\varsigma-z|}dudv\lesssim\int_{0}^{\infty}dv\int_{v}^{\infty}\frac{|\partial_{u}\mathbf{r}(\varsigma)|e^{-8tuv}e^{-tz_{2}uv}}{|\varsigma-z|}du\] \[\lesssim\int_{0}^{\infty}dve^{-8tv^{2}}e^{-tz_{2}v^{2}}||\partial_{u}\mathbf{r}(u,v)||_{L^{2}_{u}(v,\infty)}||((u-\alpha)^{2}+(v-\beta)^{2})^{-1/2}||_{L^{2}_{u}(v,\infty)}. \tag{3.33}\]
It is easy to check that \(||((u-\alpha)^{2}+(v-\beta)^{2})^{-1/2}||_{L^{2}_{u}(v,\infty)}\leq C|v-\beta|^{-\frac{1}{2}}\) holds for a fixed \(C\). By the Plancherel theorem we obtain
\[||\partial_{u}\mathbf{r}(u,v)||_{L^{2}_{u}} =||\partial_{u}\int_{\mathbb{R}}v^{-1}\chi(v^{-1}(u-t))r(t)dt||_{L ^{2}_{u}}=||\xi\hat{\chi}(v\xi)\hat{r}(\xi)||_{L^{2}}\] \[\leq v^{s-1}||\xi^{1-s}\hat{\chi}(\xi)||_{L^{\infty}}||r||_{H^{s} }\leq Cv^{s-1}||r||_{H^{s}}. \tag{3.34}\]
Direct calculation yields
\[I_{1} \lesssim||r||_{H^{s}}\int_{\mathbb{R}}dve^{-8tv^{2}}e^{-tz_{2}v^{2 }}|v|^{s-1}|v-\beta|^{-\frac{1}{2}}\] \[\lesssim||r||_{H^{s}}\int_{\mathbb{R}}dve^{-t(1+z_{2})v^{2}}|v|^ {s-1}|v-\beta|^{-\frac{1}{2}}\] \[\lesssim((1+z_{2})t)^{\frac{1-2s}{4}}||r||_{H^{s}}\int_{\mathbb{ R}}dve^{-v^{2}}(|v|^{s-\frac{3}{2}}+|v-\sqrt{t}\beta|^{s-\frac{3}{2}})\] \[\lesssim(\int_{\mathbb{R}}e^{-v^{2}}|v|^{s-\frac{3}{2}}dv)||r||_{ H^{s}}t^{\frac{1-2s}{4}}. \tag{3.35}\]
For the last inequality we used the fact that for any \(c\in\mathbb{R}\)
\[\int_{\mathbb{R}}e^{-v^{2}}|v-c|^{s-\frac{3}{2}}dv\leq\int_{|v| \leq|v-c|}e^{-v^{2}}|v|^{s-\frac{3}{2}}dv\] \[+\int_{|v|\geq|v-c|}e^{-(v-c)^{2}}|v-c|^{s-\frac{3}{2}}dv\leq 2 \int_{\mathbb{R}}e^{-v^{2}}|v|^{s-\frac{3}{2}}dv. \tag{3.36}\]
The estimate for \(I_{3}\) is similar after replacing (3.34) with
\[||v^{-2}\int \psi(v^{-1}(u-t))r(t)dt||_{L^{2}_{u}}=||v^{-1}\xi^{-s}\hat{\psi}( v\xi)\xi^{s}\hat{r}(\xi)||_{L^{2}}\] \[\leq v^{s-1}||\xi^{-s}\hat{\psi}(\xi)||_{L^{\infty}}||r||_{H^{s}} \leq Cv^{s-1}||r||_{H^{s}}, \tag{3.37}\]
The Schwartz function \(\hat{\psi}\) with \(\hat{\psi}(0)=0\) leads to the latter bound in the above inequality. In a similar way we estimate \(I_{2}\) as
\[I_{2}\lesssim\int_{0}^{\infty}e^{-8tv^{2}}e^{-tz_{2}v^{2}}dv||| \varsigma-z_{2}|^{s-\frac{3}{2}}||_{L^{p}(v,\infty)}|||\varsigma-z|^{-1}||_{L^ {q}(v,\infty)}, \tag{3.38}\]
where\(\frac{1}{p}+\frac{1}{q}=1\). By [16] we get
\[|||\varsigma-z|^{-1}||_{L^{q}(v,\infty)}\leq C|v-\beta|^{\frac{1}{q}-1}, \tag{3.39}\]
and
\[|||\zeta-z_{2}|^{s-\frac{3}{2}}||_{L^{p}(v,\infty)}=\big{(}\int_{v}^{ \infty}|u+iv|^{p(s-\frac{3}{2})}du\big{)}^{\frac{1}{p}}\] \[=\big{(}\int_{v}^{\infty}(u^{2}+v^{2})^{p\frac{2s-3}{4}}du\big{)}^{ \frac{1}{p}}=v^{\frac{2s-3}{2}+\frac{1}{p}}\big{(}\int_{v}^{\infty}(u^{2}+1)^{p \frac{2s-3}{4}}du\big{)}^{\frac{1}{p}}. \tag{3.40}\]
So by (3.38), and using again (3.36), we obtain
\[I_{2} \lesssim\int_{0}^{\infty}e^{-8tv^{2}}e^{-tz_{2}v^{2}}v^{\frac{2s- 3}{2}+\frac{1}{p}}|v-\beta|^{\frac{1}{q}-1}dv\] \[\lesssim\int_{0}^{\infty}e^{-8tv^{2}}e^{-tz_{2}v^{2}}v^{\frac{2s- 3}{2}}dv\leq Ct^{\frac{1-2s}{4}}. \tag{3.41}\]
By the above estimates and standard facts such as dominated convergence, one can verify that \(J(L^{\infty})\subset C^{0}\); we leave the details to the reader.
This lemma implies that the integral equation (3.28) has a unique solution, and we further obtain the following result.
**Proposition 3.5**.: _There exists \(\epsilon_{0}>0\) such that for \(||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}<\epsilon_{0}\) there exist constants \(T\) and \(c\) such that for \(t\geq T\) and for \(z\in\Omega_{j2}\cup\Omega_{j5}\)_
\[M^{(3)}(z)=I+M^{(3)}_{1}z^{-1}+\mathcal{O}(z^{-2}), \tag{3.42}\]
_where_
\[|M^{(3)}_{1}|\leq c||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}( \mathbb{R})}t^{-\frac{2s+1}{4}},\quad\text{for}\quad t\geq T. \tag{3.43}\]
Proof.: Without loss of generality we consider only the case \(j=2\), \(i=1\). Since \(M^{(3)}_{1}=\frac{1}{\pi}\int_{\mathbb{C}}M^{(3)}WdA\), we obtain
\[|M^{(3)}_{1}|\leq\frac{||M^{(3)}||_{\infty}}{\pi}\int_{\Omega_{ 21}}|W|dA.\]
By using
\[||e^{-8t(\alpha+6\beta z_{2})uv}||_{L^{q}_{u}(v,\infty)}=(8qt(\alpha+6\beta z_{2})v)^{-\frac{1}{q}}e^{-8t(\alpha+6\beta z_{2})v^{2}},\]
we obtain
\[\int_{\Omega_{21}}|X_{1}(\zeta)e^{2it\theta}|dA\leq||r||_{H^{s} \cap L^{2,1}}\int_{0}^{\infty}v^{s-1}||e^{-8t(\alpha+6\beta z_{2})uv}||_{L^{2} _{u}(v,\infty)}dv\] \[\qquad\lesssim t^{-\frac{1}{2}}\int_{0}^{\infty}v^{s-\frac{3}{2}} e^{-t(\alpha+6\beta z_{2})v^{2}}dv||r||_{H^{s}\cap L^{2,1}}=C_{s}t^{-\frac{2s+1}{4}} ||r||_{H^{s}\cap L^{2,1}}.\]
For \(l=2\), we get
\[\int_{\Omega_{21}}|X_{2}e^{2it\theta}|dA\leq||r||_{H^{s}\cap L^{2,1}} \int_{0}^{\infty}|||\varsigma-z_{2}|^{s-\frac{3}{2}}||_{L^{p}(v,\infty)}||e^{-8t (\alpha+6\beta z_{2})uv}||_{L^{q}_{u}(v,\infty)}dv\] \[\leq C||r||_{H^{s}\cap L^{2,1}}t^{-\frac{1}{q}}\int_{0}^{\infty}v ^{\frac{2s-3}{2}+\frac{1}{p}-\frac{1}{q}}e^{-t(\alpha+6\beta z_{2})v^{2}}dv \leq C_{s}t^{-\frac{2s+1}{4}}||r||_{H^{s}\cap L^{2,1}}.\]
Now we obtain (3.42) by the Lipschitz continuity stated in Lemma 2.2.
**Theorem 3.6**.: _Fix \(s\in(\frac{1}{2},1]\) and let \(q_{0}\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\cap\mathcal{G}_{0}\). Then there exist constants \(C(q_{0})>0\) and \(T(q_{0})>0\) such that the solution of the Hirota equation (1.1) satisfies_
\[||q(t,\cdot)||_{L^{\infty}(\mathbb{R})}\leq C(q_{0})t^{-\frac{1}{2}},\text{ for all }|t|\geq T(q_{0}). \tag{3.44}\]
_There are further constants \(C_{0}>0,T_{0}>0\) and small \(\epsilon>0\) such that for \(||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}<\epsilon\), we can take \(C(q_{0})=C_{0}||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\) and \(T(q_{0})=T_{0}\)._
Proof.: Recalling a series of transformations (3.10), (3.24) and (3.27), we have
\[M(z)=M^{(3)}(z)M^{rhp}(z)R^{(2)}(z)^{-1}\delta^{\sigma_{3}}.\]
Taking the limit \(z\to\infty\) in the regions where \(R^{(2)}(z)=I\) leads to
\[M_{1}=M_{1}^{(3)}+\sum_{j=1}^{2}\frac{M_{1}^{pc}}{\sqrt{(-1)^{j}8t(\alpha+6 \beta z_{j})}}+\delta^{\sigma_{3}}.\]
By using reconstruction formula (2.29), (3.43) for \(t\geq T(s,\lambda_{0})\) and a fixed \(C=C(s,\lambda_{0})\), we have
\[|q(t,x)|\leq Ct^{-\frac{1}{2}},\]
which proves Theorem 3.6 for \(q_{0}\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\cap\mathcal{G}_{0}\).
## 4 The asymptotic stability of the solitons
### The Backlund transformation
In this section we consider scattering data \(\{r\equiv 0,\{(z_{k},c_{k})\}_{k=1}^{N}\}\) for which the reflection coefficient vanishes identically; these correspond to the \(N\)-soliton solutions of (1.1). In particular, when \(N=1\), the single soliton is given by (1.3), which is a localized pulse with speed \(v=-(12\beta\xi^{2}-4\beta\eta^{2}+4\alpha\eta^{2})\) and maximum amplitude \(2\eta\). Since \(\mathcal{G}_{1}\) is an open subset of \(L^{1}(\mathbb{R})\) and the soliton (1.3) belongs to it, if the value of \(\epsilon_{0}>0\) in the bound (1.4) is small enough, then the initial datum \(q_{0}\) belongs to \(\mathcal{G}_{1}\). Notice also that the positive constant \(\epsilon_{0}\) can be taken independent of \((\eta_{0},x_{0})\).
In this section we consider an initial datum \(q_{0}\) satisfying the bound (1.4). The scattering data associated with it belong to the space \(\mathcal{S}(1,1)\) defined in (2.22) and are close to those of the soliton \(q_{(\eta_{0},\xi_{0},\gamma_{0})}(0,x)\) by Lemma 2.1 and Lemma 2.2; moreover, \(q_{0}\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\) implies \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\). Furthermore, by the Lipschitz continuity of the map \(q_{0}\mapsto r\) and the fact that the soliton has \(r\equiv 0\), we have \(||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\leq C\epsilon\) with \(C=C(\eta_{0},\xi_{0},\gamma_{0})\), where the value of \(\epsilon\) is given in (1.4).
We define now a map
\[\mathcal{G}_{1}\times\mathbb{C}_{+}\times\mathbb{C}_{*}\ni(q_{0},z_{s},c_{1}) \mapsto\tilde{q}_{0}\in\mathcal{G}_{0}, \tag{4.1}\]
via the transformation
\[\tilde{r}(z):=r(z)\frac{z-z_{s}}{z-z_{s}^{*}}. \tag{4.2}\]
From its definition, we see that \(\tilde{r}\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\) if \(r\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\), and there exists a constant \(C>0\) such that \(||\tilde{r}||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\leq C||r||_{H^{s}( \mathbb{R})\cap L^{2,1}(\mathbb{R})}\). We then define \(\tilde{q}_{0}\in\mathcal{G}_{0}\cap H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\) by the reconstruction formula (2.29), where the corresponding RH problem is solved for the scattering datum in \(\mathcal{S}(1,0)=\{\tilde{r}\in H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})\}\). By Lemma 2.5 we know that \(\tilde{q}_{0}\in\mathcal{G}_{0}\cap H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\) with norm \(||\tilde{q}_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\leq C||\tilde{r}||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\leq C\epsilon\).
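As an illustrative aside (not part of the analysis above), the norm control under the map (4.2) is easy to see numerically: the Blaschke-type factor \((z-z_{s})/(z-z_{s}^{*})\) has unit modulus for real \(z\), so \(|\tilde{r}(z)|=|r(z)|\) pointwise on \(\mathbb{R}\). The short Python sketch below checks this; the test reflection coefficient and the eigenvalue \(z_{s}\) are arbitrary choices made only for the demonstration.

```python
import numpy as np

# Hypothetical smooth, decaying test reflection coefficient on the real line.
def r(z):
    return 0.3 * np.exp(-z**2) / (1 + z**2)

z_s = 0.7 + 1.2j                      # an assumed eigenvalue in the upper half plane
z = np.linspace(-20, 20, 4001)        # sample points on the real line

blaschke = (z - z_s) / (z - np.conj(z_s))
r_tilde = r(z) * blaschke

# |(z - z_s)/(z - z_s^*)| = 1 for real z, hence |r_tilde| = |r| pointwise on R.
assert np.allclose(np.abs(blaschke), 1.0)
assert np.allclose(np.abs(r_tilde), np.abs(r(z)))
print("max deviation of |r_tilde| from |r|:",
      np.max(np.abs(np.abs(r_tilde) - np.abs(r(z)))))
```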
The Bäcklund transformation (BT) is a purely algebraic procedure that produces multisoliton solutions of an integrable equation starting from a trivial seed solution. The main feature of the BT is that the Lax pair associated with the integrable equation remains covariant under the corresponding gauge transformation.
**Lemma 4.1**.: _Define the BT that generates a new solution of the Hirota equation as_
\[\tilde{q}=q-\mathbf{B},\quad\mathbf{B}:=2i(z_{s}-z_{s}^{*})\frac{f_{1}f_{2}^{ *}}{|f_{1}|^{2}+|f_{2}|^{2}}, \tag{4.3}\]
_where_
\[f_{1}:=e^{-ixz_{s}}m_{11}(t,x,z_{s})-\frac{c_{1}m_{12}(t,x;z_{s})e^{ixz_{s}+i( 4\beta z_{s}^{3}+2\alpha z_{s}^{2})t}}{2i\text{Im}(z_{s})},\]
\[f_{2}:=e^{-ixz_{s}}m_{21}(t,x,z_{s})-\frac{c_{1}m_{22}(t,x,z_{s})e^{ixz_{s}+i( 4\beta z_{s}^{3}+2\alpha z_{s}^{2})t}}{2i\text{Im}(z_{s})}.\]
_The proof is given in Appendix B._
**Remark 4.2**.: _The soliton (1.3) can be recovered with the BT (4.3) by taking_
\[r=0,\quad z_{s}=\xi+i\eta,\quad\tilde{q}=0,\] \[f_{1}=e^{-ixz_{s}},\quad\text{and}\quad f_{2}=-\frac{c_{1}}{2i\eta }e^{ixz_{s}+i(4\beta z_{s}^{3}+2\alpha z_{s}^{2})t}.\]
**Lemma 4.3**.: _Let \(z_{s}=\alpha_{1}+i\beta_{1}\) with \(\beta_{1}>0\). There is \(\epsilon_{0}\) sufficiently small such that for \(||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}<\epsilon_{0}\), there is a constant \(C\) such that_
\[|I-V_{+}(z_{s})|\leq Ce^{-t8\beta_{1}^{2}}||q_{0}||_{H^{1,s}( \mathbb{R})},\text{ if }z_{s}\in\Omega_{j1}+z_{j},\] \[|I-U_{R}^{-1}(z_{s})|\leq Ce^{-t8\beta_{1}^{2}}||q_{0}||_{H^{1,s }(\mathbb{R})},\text{ if }z_{s}\in\Omega_{j3}+z_{j}. \tag{4.4}\]
Proof.: For \(j=2\) and \(i=1,3\), from (3.20) we obtain that
\[||R_{2i}||_{L^{\infty}(\Omega_{2i}+z_{2})}\leq C^{\prime}||r||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}\leq C||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}.\]
If \(z_{s}\in\Omega_{21}+z_{2}\) we have \(\alpha_{1}-z_{2}\geq\beta_{1}\) and so
\[|e^{-2it\theta}|\lesssim e^{-8t(\alpha-z_{2})\beta_{1}}\lesssim e^{-t8\beta_{1 }^{2}}.\]
If \(z_{s}\in\Omega_{23}+z_{2}\), we have similarly \(|e^{2it\theta}|\leq e^{-t8\beta_{1}^{2}}\) which yield (4.4).
By Theorem 3.6, we know that there exist constants \(C_{0}\) and \(T>0\) such that for all \(t>T\), we have
\[||\tilde{q}(t,\cdot)||_{L^{\infty}(\mathbb{R})}\leq C_{0}\epsilon|t|^{-\frac{ 1}{2}}.\]
since there is a constant \(C>0\) such that \(||\tilde{q}_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}\leq C\epsilon\). Hence, in order to prove Theorem 1.1 we only need to analyze the term \(\mathbf{B}\). Focusing on \(t\gg 1\), we have the following.
**Lemma 4.4**.: _Suppose that \(z_{s}\in\mathbb{C}_{+}\). Then there are an \(\epsilon_{0}>0\), a \(c>0\) and a \(T>0\) such that, if \(||q_{0}||_{H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})}<\epsilon_{0}\), then_
\[|I-M^{(3)}(z_{s})|\leq ct^{-\frac{2s+1}{4}}||q_{0}||_{H^{1}(\mathbb{R})\cap L^ {2,s}(\mathbb{R})},\quad\text{for}\quad t\geq T. \tag{4.5}\]
Proof.: Notice that the function \(M^{(3)}(z)\) satisfies the inequality
\[|M^{(3)}-I|\leq\frac{||M^{(3)}||_{\infty}}{\pi}\sum_{i}\int_{\Omega_{2i}}\frac {|W|}{|\zeta-z_{s}|}dA,\]
and, as in Proposition 3.5, we consider only the case \(j=2\), \(i=1\). We set \(z_{s}=\alpha_{1}+i\beta_{1}\); for \(l=1,3\), arguing as in (3.32) and (3.33), we obtain
\[\int_{\Omega_{21}}\frac{|X_{l}(\varsigma)e^{2it\theta}|}{|\varsigma-z_{s}|}dA \leq||r||_{H^{s}(\mathbb{R})}[A_{1}+A_{2}], \tag{4.6}\]
where
\[A_{1}:=\int_{0}^{\frac{\beta_{1}}{2}}v^{s-1}\Big{|}\Big{|}\frac{e^{-8t(\alpha+6\beta z_{2})uv}}{|\varsigma-z_{s}|}\Big{|}\Big{|}_{L^{2}_{u}(v,\infty)}dv,\]
and
\[A_{2}:=\int_{\frac{\beta_{1}}{2}}^{\infty}v^{s-1}\Big{|}\Big{|}\frac{e^{-8t(\alpha+6\beta z_{2})uv}}{|\varsigma-z_{s}|}\Big{|}\Big{|}_{L^{2}_{u}(v,\infty)}dv.\]
We get
\[A_{1} =\int_{0}^{\frac{\beta_{1}}{2}}v^{s-1}\Big{|}\Big{|}\frac{e^{-8t(\alpha+6\beta z_{2})uv}}{\sqrt{(u+z_{2}-\alpha_{1})^{2}+(v-\beta_{1})^{2}}}\Big{|}\Big{|}_{L^{2}_{u}(v,\infty)}dv\] \[\leq C^{\prime}(\beta_{1})\int_{0}^{\frac{\beta_{1}}{2}}v^{s-1}\Big{|}\Big{|}e^{-8t(\alpha+6\beta z_{2})uv}\Big{|}\Big{|}_{L^{2}_{u}(v,\infty)}dv\leq C(s,\beta_{1})t^{-\frac{4s+1}{8}}. \tag{4.7}\]
By using (4.4), and considering the case \(t\geq 1\) together with \(e^{-8tv^{2}}\leq e^{-t\beta_{1}^{2}}e^{-4v^{2}}\) for \(v\geq\frac{\beta_{1}}{2}\), we obtain the bound for \(A_{2}\):
\[A_{2} \leq\int_{\frac{\beta_{1}}{2}}^{\infty}e^{-8t(\alpha+6\beta z_{2})v^{2}}v^{s-1}|||\varsigma-z_{s}|^{-1}||_{L^{2}_{u}(v,\infty)}dv\] \[\leq C\int_{\frac{\beta_{1}}{2}}^{\infty}e^{-8t(\alpha+6\beta z_{2})v^{2}}v^{s-1}|v-\beta_{1}|^{-\frac{1}{2}}dv\] \[\leq Ce^{-t\beta_{1}^{2}}\int_{0}^{\infty}e^{-4(\alpha+6\beta z_{2})v^{2}}v^{s-1}|v-\beta_{1}|^{-\frac{1}{2}}dv\leq C^{\prime}e^{-t\beta_{1}^{2}}. \tag{4.8}\]
For \(l=2\) we similarly have
\[\int_{\Omega_{21}}\frac{|X_{2}(\varsigma)e^{2it\theta}|}{|\varsigma-z_{s}|}dA \leq||r||_{H^{s}(\mathbb{R})}[B_{1}+B_{2}], \tag{4.9}\]
where
\[B_{1}:=\int_{0}^{\frac{\beta_{1}}{2}}\int_{v}^{\infty}|\varsigma-z_{2}|^{s-\frac{3}{2}}\frac{e^{-8t(\alpha+6\beta z_{2})uv}}{|\varsigma-z_{s}|}du\,dv,\]
and
\[B_{2}:=\int_{\frac{\beta_{1}}{2}}^{\infty}\int_{v}^{\infty}|\varsigma-z_{2}|^{s-\frac{3}{2}}\frac{e^{-8t(\alpha+6\beta z_{2})uv}}{|\varsigma-z_{s}|}du\,dv.\]
Then we obtain \(B_{1}\leq C(\beta_{1},s)t^{-\frac{2s+1}{8}}\), since \(|\varsigma-z_{s}|\geq\beta_{1}/2\) on the domain of \(B_{1}\). When \(t\geq 1\) we have
\[B_{2} \leq\int_{\frac{\beta_{1}}{2}}^{\infty}e^{-8t(\alpha+6\beta z_{2})v^{2}}|||\varsigma-z_{2}|^{s-\frac{3}{2}}||_{L^{p}(v,\infty)}|||\varsigma-z_{s}|^{-1}||_{L^{q}(v,\infty)}dv\] \[\leq Ce^{-t\beta_{1}^{2}/2}\int_{0}^{\infty}e^{-4(\alpha+6\beta z_{2})v^{2}}v^{\frac{2s-3}{2}+\frac{1}{p}}|v-\beta_{1}|^{\frac{1}{q}-1}dv\leq C_{s}e^{-t\beta_{1}^{2}/2}. \tag{4.10}\]
**Lemma 4.5**.: _Fix \(\lambda_{0}>0\). Then there is a \(C>0\) and a \(T>0\) such that for \(||\tilde{r}||_{H^{s}(\mathbb{R})}<\lambda_{0}\) we have for \(t\geq T\)_
\[\big{|}m_{11}(t,x,z_{s})-\delta(z_{s})\big{|}+\big{|}m_{22}(t,x,z_{s})-\delta^{-1}(z_{s})\big{|}\] \[\leq C||\tilde{r}||_{H^{s}(\mathbb{R})}t^{-\frac{1}{2}}(||\tilde{r}||_{H^{s}(\mathbb{R})}+t^{-\frac{2s-1}{8}}), \tag{4.11}\] \[\big{|}m_{12}(t,x,z_{s})-\frac{\delta^{-1}(z_{s})\beta_{12}}{\sqrt{(-1)^{j}8t(\alpha+6\beta z_{j})}(z_{s}-z_{j})}\big{|}\] \[\qquad\qquad\qquad+|m_{21}(t,x,z_{s})-\frac{\delta(z_{s})\beta_{21}}{\sqrt{(-1)^{j}8t(\alpha+6\beta z_{j})}(z_{s}-z_{j})}\big{|}\] \[\qquad\qquad\qquad\leq C(||\tilde{r}||_{H^{s}(\mathbb{R})}t^{-\frac{2s+1}{8}}). \tag{4.12}\]
Proof.: We focus on \(j=2\). Recalling Lemma 4.4 we have the expansion \(M^{(3)}=I+\mathcal{O}(||\tilde{r}||_{H^{s}(\mathbb{R})}t^{-\frac{2s+1}{8}})\), and by Lemma 4.3 we get similar expansions for \(V_{+}(z_{s})\) and \(U_{R}(z_{s})\). Furthermore, we know that \(|\delta^{\pm 1}|\leq 1+\rho^{2}\) with \(\rho=||\tilde{r}||_{L^{\infty}(\mathbb{R})}\), and \(|\beta_{12}|+|\beta_{21}|<C||\tilde{r}||_{L^{\infty}(\mathbb{R})}\). From Appendix A we have the expansion
\[M^{pc}(\zeta_{2})=I+\frac{M_{1}^{pc}}{\sqrt{8t(\alpha+6\beta z_{2})}(z_{s}-z_{ 2})}+\mathcal{O}(||\tilde{r}||_{H^{s}(\mathbb{R})}t^{-1}).\]
These observations yield Lemma 4.5.
### The proof of main result
Now we start to analyze the term \(\mathbf{B}\) in (4.3). Consider the following inequalities:
\[\big{|}e^{-ixz_{s}}m_{11}(t,x,z_{s})\big{|}>10\big{|}\frac{c_{1}m_{12}(t,x;z_{s})e^{ixz_{s}+i(2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}}{2i\text{Im}(z_{s})}\big{|}, \tag{4.13}\] \[10\big{|}e^{-ixz_{s}}m_{21}(t,x;z_{s})\big{|}<\big{|}\frac{c_{1}m_{22}(t,x,z_{s})e^{ixz_{s}+i(2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}}{2i\text{Im}(z_{s})}\big{|}. \tag{4.14}\]
**Lemma 4.6**.: _Given \(\epsilon_{0}>0\) small, there exist \(T(\epsilon_{0})>0\) and \(C>0\) such that if \(||\tilde{r}||_{H^{s}(\mathbb{R})\cap L^{2,1}(\mathbb{R})}<\epsilon_{0}\) and if \((t,x)\) is such that at least one of (4.13) and (4.14) is false, then we have \(|\mathbf{B}|<Ct^{-\frac{1}{2}}\epsilon\) for \(t\geq T(\epsilon_{0})\)._
Proof.: Let \(\epsilon\) be as in (1.4) and \(\rho=||\tilde{r}||_{L^{\infty}(\mathbb{R})}\). We only consider the case when \(t\) is large. Assuming that inequality (4.13) is false at \((t,x)\), Lemma 4.5 implies that for \(t\geq T\)
\[|m_{12}| \leq(1+\rho^{2})|k_{1}|t^{-\frac{1}{2}}+C\epsilon t^{-\frac{2s+1} {8}}\] \[\leq t^{-\frac{1}{2}}\epsilon K\big{(}\frac{1}{2}(1+\rho^{2})^{- 1}-Ct^{\frac{1-2s}{8}}-C\epsilon\big{)}\] \[\leq t^{-\frac{1}{2}}\epsilon K|m_{22}|, \tag{4.15}\]
for a fixed and sufficiently large constant \(K\). Then, if (4.13) is false and \(t\geq T\), both terms in (4.13) are bounded from above by
\[\Big{|}\frac{c_{1}m_{22}(t,x;z_{s})e^{ixz_{s}+i(2\alpha z_{s}^{2}+4\beta z_{s} ^{3})t}}{2i\text{Im}(z_{s})}\Big{|}.\]
For \(t\geq T\) by the same argument of (4.15) we have also
\[\big{|}e^{-ixz_{1}}m_{21}(x,t;z_{s})\big{|}\leq t^{-\frac{1}{2}}\epsilon K|e^{ -ixz_{s}}m_{11}(x,t;z_{s})\big{|}. \tag{4.16}\]
We conclude that for \(t\geq T\), if \((t,x)\) is in the domain where (4.13) is false, we have for some fixed \(K\)
\[|\mathbf{B}|\leq K\frac{\big{|}m_{12}e^{ixz_{s}-i(2\alpha z_{s}^{2}+4\beta z_{ s}^{3})t}m_{22}^{*}e^{-ixz_{s}^{*}-i(2\alpha z_{s}^{*2}+4\beta z_{s}^{*3})t} \big{|}}{|m_{22}e^{ixz_{s}+i(2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}|^{2}}=K \frac{|m_{12}|}{|m_{22}|}\leq\frac{CK}{\sqrt{t}}\epsilon. \tag{4.17}\]
Now we assume that \((t,x)\) is such that (4.13) is true. Notice that by (4.13) and (4.16) we have for a fixed \(K\)
\[\frac{|f_{1}e^{ixz_{s}^{*}}m_{21}^{*}|}{||f||^{2}}\leq K\frac{\big{|}e^{-ixz_{s }}m_{11}e^{-ixz_{s}^{*}}m_{21}^{*}|}{|e^{-ixz_{s}}m_{11}|^{2}}=K\frac{|m_{21}|} {|m_{11}|}\leq\frac{CK}{\sqrt{t}}\epsilon. \tag{4.18}\]
Suppose now that \((t,x)\) is such that (4.13) and (4.14) are not both true; we assume that (4.13) is true and (4.14) is false. Then by (4.18), for a fixed \(K\),
\[|\mathbf{B}|\leq 4|\text{Im}z_{s}|\frac{f_{1}f_{2}}{||f||^{2}}\leq K\frac{|f_{ 1}e^{ixz_{s}^{*}}m_{21}^{*}|}{||f||^{2}}\leq\frac{CK}{\sqrt{t}}\epsilon. \tag{4.19}\]
This proves Lemma 4.6 for all values of \((t,x)\) for which (4.13) and (4.14) are not both true.
**Lemma 4.7**.: _Fix \(\rho_{0}>0\), let \(\rho:=||r||_{L^{\infty}(\mathbb{R})}\) and assume \(\rho<\rho_{0}\). Then for fixed \(z_{s}=\alpha_{1}+i\beta_{1}\) with \(\beta_{1}>0\) there exists a constant \(C\) independent of \(z_{j}\) such that_
\[|\delta(z_{s})-\nu(z_{s})|\leq C||r||_{L^{2}}^{2},\] \[\text{where}\ \ \nu(z_{s}):=\exp\Big{(}\frac{1}{2\pi i}\int_{-\infty}^{\alpha_{1}}\frac{\log(1+|r(\varsigma)|^{2})}{\varsigma-z_{s}}d\varsigma\Big{)}, \tag{4.20}\]
_and fix \(K>0\) then for \(|z_{j}-\alpha_{1}|\leq K/\sqrt{t}\) there exists a constant \(C\) such that_
\[|\delta(z_{s})-\nu(z_{s})|\leq\frac{C}{\sqrt{t}\beta_{1}}\log(1+\rho^{2}). \tag{4.21}\]
Proof.: According to the Proposition 3.5 and for a fixed \(c\) and for
\[\gamma(z):=\frac{1}{2\pi i}\int_{z_{1}}^{z_{2}}\frac{\log(1+|r(s)|^{2})}{s-z}ds,\]
we may write \(\delta(z)=e^{\gamma(z)}\), and we obtain
\[\Big{|}\gamma(z_{s})-\frac{1}{2\pi i}\int_{z_{1}}^{\alpha_{1}}\frac{\log(1+|r(s)|^{2})}{s-z_{s}}ds\Big{|}=\frac{1}{2\pi}\Big{|}\int_{\alpha_{1}}^{z_{2}}\frac{\log(1+|r(s)|^{2})}{s-\alpha_{1}-i\beta_{1}}ds\Big{|}\leq\frac{c}{\beta_{1}}||r||_{L^{2}}^{2}. \tag{4.22}\]
This yields (4.20), since the bound \(|\delta(z)|\leq 1+\rho^{2}\) is independent of \(z_{2}\). Similarly, (4.21) follows from
\[\Big{|}\gamma(z_{s})-\frac{1}{2\pi i}\int_{z_{1}}^{\alpha_{1}}\frac{\log(1+|r(s)|^{2})}{s-z_{s}}ds\Big{|}=\frac{1}{2\pi}\Big{|}\int_{\alpha_{1}}^{z_{2}}\frac{\log(1+|r(s)|^{2})}{s-\alpha_{1}-i\beta_{1}}ds\Big{|}. \tag{4.23}\]
This yields Lemma 4.7.
We now assume that both (4.13) and (4.14) are true. By the inequalities (4.17) and (4.18), up to terms bounded by \(Ct^{-\frac{1}{2}}\epsilon\), what is left is the analysis of
\[-2i\frac{e^{-ixz_{s}}m_{11}c_{1}^{*}m_{22}^{*}e^{-ixz_{s}^{*}-(2\alpha z_{s}^{ *2}+4\beta z_{s}^{*3})t}}{||f||^{2}}. \tag{4.24}\]
Set now
\[b^{2}=\big{|}e^{-ixz_{s}}m_{11}\big{|}^{2}+\big{|}\frac{c_{1}m_{22}e^{ixz_{s}+( 2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}}{2i\mathrm{Im}(z_{s})}\big{|}^{2},\]
and expand
\[||f||^{2}=b^{2}\big{(}1+\mathcal{O}(b^{-1}\big{|}c_{1}m_{12}e^{ixz_{s}+(2 \alpha z_{s}^{2}+4\beta z_{s}^{3})t}\big{|})+\mathcal{O}(b^{-1}|m_{21}(t,x;z_{1 })e^{-ixz_{1}})\big{)}.\]
Then the quantity in (4.24) is of the form
\[-2ie^{-ixz_{s}}m_{11}\frac{c_{1}^{*}m_{22}^{*}e^{-ixz_{s}^{*}-(2 \alpha z_{s}^{*2}+4\beta z_{s}^{*3})t}}{b^{2}}\\ \times\big{(}1+\mathcal{O}(b^{-1}\big{|}m_{12}e^{ixz_{s}+(2\alpha z _{s}^{2}+4\beta z_{s}^{3})t}\big{|}\big{)}+\mathcal{O}(b^{-1}|m_{21}e^{-ixz_{s} })\big{)}. \tag{4.25}\]
We claim that the quantity in (4.24) equals
\[-2i\frac{e^{-ixz_{1}}\delta(z_{s})(\delta(z_{s}))^{-1}c_{1}^{*}e^{-ixz_{s}^{*}-( 2\alpha z_{s}^{*2}+4\beta z_{s}^{*3})t}}{b^{2}}(1+\mathcal{O}(\epsilon t^{- \frac{1}{2}})). \tag{4.26}\]
To prove this claim, since \(m_{ii}=\delta^{-(-1)^{i}}(z_{s})+\mathcal{O}(\epsilon t^{-\frac{1}{2}})\) and \(|\delta^{\pm 1}(z_{s})|\geq\langle\rho\rangle^{-2}\), we have
\[b^{2}=(\big{|}\delta(z_{s})e^{-ixz_{s}}\big{|}^{2}+\big{|}\frac{c_{1}\delta(z _{s})^{-1}e^{ixz_{s}+(2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}}{2i\text{Im}(z_{s })}\big{|}^{2})(1+\mathcal{O}(\epsilon t^{-\frac{1}{2}})).\]
We have \(\mathcal{O}(b^{-1}\big{|}c_{1}m_{12}e^{ixz_{s}+(2\alpha z_{s}^{2}+4\beta z_{s }^{3})t}\big{|})=\mathcal{O}(\epsilon t^{-\frac{1}{2}})\) by
\[b^{-1}\big{|}m_{21}e^{ixz_{s}}\big{|}\leq\frac{|m_{21}e^{-ixz_{s}}|}{|m_{11}e^{-ixz_{s}}|}=\frac{|m_{21}|}{|m_{11}|}\leq C\epsilon t^{-\frac{1}{2}}.\]
hence (4.26) is proved. Consider the term in (4.26), now for \(z_{s}=\xi+i\eta\) and \(\nu(z_{s})\) defined in (3.5) and inserting trivial factors \(\nu/\nu=1\) and \(\nu^{*}/\nu^{*}=1\), the expression in (4.26) equals
\[\frac{4\eta e^{-ixz_{s}-i(2\alpha z_{s}^{2}+4\beta z_{s}^{3})t}e^{-ixz_{s}^{*} -i(2\alpha z_{s}^{*2}+4\beta z_{s}^{*3})t}\frac{\delta(z_{s})}{\nu(z_{s})} \frac{\nu^{*}(z_{s})}{\delta^{*}(z_{s})}\frac{\nu(z_{s})}{\nu(z_{s})}}{\tilde {b}^{2}}, \tag{4.27}\]
where
\[\tilde{b}^{2}=\big{|}e^{-2\eta x-24\beta\xi^{2}t+8\beta\eta^{3}t- 8\alpha\eta\xi t}\big{|}^{2}|\frac{\delta(z_{s})}{\nu(z_{s})}||\nu(z_{s})|\\ +\big{|}e^{2\eta x+24\beta\xi^{2}t-8\beta\eta^{3}t+8\alpha\eta\xi t }\big{|}^{2}|\frac{\nu(z_{s})}{\delta(z_{s})}||\nu(z_{s})|^{-1}.\]
Now fix a constant \(\kappa>0\). Then (4.27) differs from the soliton solution
\[2\eta e^{2i(-\xi x-4\beta\xi^{3}t-2\alpha\xi^{2}t+12\beta\xi\eta^{2} t+2\alpha\eta^{2}t)+i\gamma}\\ \times\operatorname{sech}(-2\eta x-24\beta\xi^{2}t+8\beta\eta^{3}t -8\alpha\eta\xi t+\log(|\nu(z_{s})|)), \tag{4.28}\]
by less than \(c\kappa t^{-\frac{1}{2}}\epsilon\). Indeed, the difference between (4.27) and (4.28) can be bounded, up to a constant factor \(C=C(\xi,\eta,\alpha,\beta)\), by the sum of the following two error terms:
\[\frac{\big{|}\frac{\delta(z_{s})}{\nu(z_{s})}\frac{\nu^{*}(z_{s})}{\delta^{*}( z_{s})}-1\big{|}}{e^{8(3\beta|z_{2}-\xi|^{2}t-\beta\eta^{3}t+\alpha\eta|z_{2}-\xi|t) }(1+||\tilde{r}||^{2}_{L^{\infty}(\mathbb{R})})^{-1}}, \tag{4.29}\]
and
\[\big{|}\operatorname{sech}(-8(3\beta|z_{2}-\xi|^{2}t-\beta\eta^{ 3}t+\alpha\eta|z_{2}-\xi|t+\log(|\nu(z_{s})|))\\ -\operatorname{sech}(-8(3\beta|z_{2}-\xi|^{2}t-\beta\eta^{3}t+ \alpha\eta|z_{2}-\xi|t+\log(|\nu(z_{s})|)+\log\big{(}\frac{|\delta(z_{s})|}{| \nu(z_{s})|}\big{)})\big{|}. \tag{4.30}\]
We first bound (4.29). For \(|z_{2}-\xi|\geq\kappa t^{-\frac{1}{2}}\), formula (4.29) is bounded by \(Ce^{8(3\beta\kappa\sqrt{t}-\beta\eta^{3}t+\alpha\eta\kappa\sqrt{t})}\). For \(|z_{2}-\xi|\leq\kappa t^{-\frac{1}{2}}\) we bound (4.29), using (4.21), by
\[(1+||\tilde{r}||^{2}_{L^{\infty}(\mathbb{R})})\Big{|}\frac{\delta(z_{s})}{\nu(z_{s})}\frac{\nu^{*}(z_{s})}{\delta^{*}(z_{s})}-1\Big{|}\leq 4\frac{C}{\sqrt{t}}(1+||\tilde{r}||^{2}_{L^{\infty}(\mathbb{R})})\leq Kt^{-\frac{1}{2}}\epsilon^{2}.\]
According to the mean value theorem (Lagrange), (4.30) is bounded by
\[\operatorname{sech}(-8(3\beta|z_{2}-\xi|^{2}t-\beta\eta^{3}t+ \alpha\eta|z_{2}-\xi|t+\log(|\nu(z_{s})|)\\ +\log\big{(}\frac{|\delta(z_{s})|}{|\nu(z_{s})|})\big{)}\Big{|} \log\Big{(}\frac{|\delta(z_{s})|}{|\nu(z_{s})|}\Big{)}\Big{|},\]
for some \(c\in(0,1)\). This satisfies bounds similar to those satisfied by (4.29).
We consider an initial potential \(q_{0}\in H^{1}(\mathbb{R})\cap L^{2,s}(\mathbb{R})\). To complete the proof of Theorem 1.1 we need to show that when one of (4.13) and (4.14) is false, the function in (4.28) is \(\mathcal{O}(\epsilon t^{-\frac{1}{2}})\). By Lemma 4.5, the fact that (4.13) or (4.14) is false means that for a fixed \(C=C(\rho_{0})>0\) we have
\[|e^{-2ixz_{s}-2i(\alpha z_{s}^{*2}+2\beta z_{s}^{*3})t}|\leq C\epsilon t^{- \frac{1}{2}},\]
and
\[|e^{2ixz_{s}^{*}+2i(\alpha z_{s}^{2}+2\beta z_{s}^{3})t}|\leq C\epsilon t^{- \frac{1}{2}}.\]
Either of these yields our claim that the function in (4.28) is \(\mathcal{O}(\epsilon t^{-\frac{1}{2}})\). This completes the proof of Theorem 1.1 for \(q_{0}\in H^{1}(\mathbb{R})\,\cap\,L^{2,s}(\mathbb{R})\); we note that for \(|t|\geq T(\epsilon_{0})\) the soliton in formula (1.3) is given by formula (4.28).
At the end of this section we explain why the ground states \(q_{\xi,\eta,\gamma_{\pm},\alpha,\beta}(t,x-x_{\pm})\) in the statement of Theorem 1.1 are in general distinct. The "+" ground state has been computed explicitly in (4.28).
**Lemma 4.8**.: _For \(t<-T(\lambda_{0})\) the ground state is given by formula (4.28), but with \(\nu(z_{s})\) replaced by_
\[\Lambda(z_{s})=\exp\Big{(}\frac{1}{2\pi i}\int_{\alpha_{1}}^{\infty}\frac{\log(1+|r(s)|^{2})}{s-z_{s}}ds\Big{)}. \tag{4.31}\]
Proof.: Notice that if \(q(t,x)\) solves (1.1) then \(u(t,x):=q^{*}(-t,x)\) solves the Hirota equation with initial value \(q_{0}^{*}(x)\), and if \((r(z),z_{s},c_{1})\) are the spectral data of \(q_{0}\in\mathcal{G}_{1}\), then we have \(q_{0}^{*}\in\mathcal{G}_{1}\) with spectral data \((r^{*}(-z),-z_{s}^{*},-c_{1}^{*})\). According to (4.28) we obtain when \(t\to-\infty\)
\[u(-t,x) \sim-2\eta e^{2i(\xi x+4\beta\xi^{3}t+2\alpha\xi^{2}t-12\beta\xi \eta^{2}t-2\alpha\eta^{2}t)-i\gamma}\] \[\times\operatorname{sech}(-2\eta x-24\beta\xi^{2}t+8\beta\eta^{3} t-8\alpha\eta\xi t+\log(|\Lambda(z_{s})|)).\]
And the complex conjugate of \(\Lambda(z_{s})\) can be written as
\[\Lambda^{*}(z_{s})=\exp\Big{(}\frac{1}{2\pi i}\int_{-\infty}^{\xi}\frac{(\log (1+|r(s)|^{2})}{s+\xi-i\eta}ds)\Big{)}.\]
Then (4.31) holds; using \(q(t,x)=u^{*}(-t,x)\) and taking the complex conjugate of the above formula, we obtain for \(t\to-\infty\)
\[q(t,x) \sim 2\eta e^{2i(-\xi x-4\beta\xi^{3}t-2\alpha\xi^{2}t+12\beta\xi \eta^{2}t+2\alpha\eta^{2}t)+i\gamma}\] \[\times\operatorname{sech}(-2\eta x-24\beta\xi^{2}t+8\beta\eta^{3} t-8\alpha\eta\xi t+\log(|\Lambda(z_{s})|)).\]
thus completing the proof of Lemma 4.8.
## Appendix A A parabolic cylinder model
Here we describe the solution of the parabolic cylinder model problem [19]. Let \(\Sigma^{pc}=\cup_{j=1}^{4}\Sigma_{j}\), where \(\Sigma_{j}\) denotes the complex contour
\[\Sigma_{j}=\{\zeta\in\mathbb{C}|\arg\zeta=\frac{2j-1}{4}\pi\},\quad j=1,\ldots,4. \tag{A.1}\]
Let \(\Omega_{j},j=1,\ldots,6\), denote the six maximally connected open sectors in \(\mathbb{C}\setminus(\Sigma^{pc}\cup\mathbb{R})\), labelled sequentially as one encircles the origin in a counterclockwise fashion. Finally, fix \(r_{0}\in\mathbb{C}\) and let
\[\kappa:=-\frac{1}{2\pi}\log(1+|r_{0}|^{2}). \tag{A.2}\]
Then consider the following RH problem.
**RHP A.1**.: _Fix \(r_{0}\in\mathbb{C}\); find an analytic function \(M^{pc}(\cdot):\mathbb{C}\setminus\Sigma^{pc}\to SL_{2}(\mathbb{C})\) with the following properties_
1. \(M^{pc}(\zeta)=I+\zeta^{-1}M_{1}^{pc}+\mathcal{O}(\zeta^{-2})\) _uniformly as_ \(\zeta\to\infty\)_._
2. _For_ \(\zeta\in\Sigma^{pc}\)_, the continuous values_ \(M_{\pm}^{pc}(\zeta)\) _satisfy the jump relation_ \[M_{+}^{pc}(\zeta)=M_{-}^{pc}(\zeta)V^{pc}(\zeta),\] _where_ \[V^{pc}(\zeta)=\begin{cases}\begin{pmatrix}1&0\\ r_{0}\zeta^{-2i\kappa}e^{i\zeta^{2}/2}&1\end{pmatrix},&\text{arg}\zeta=\frac{\pi}{4},\\ \\ \begin{pmatrix}1&r_{0}^{*}\zeta^{2i\kappa}e^{-i\zeta^{2}/2}\\ 0&1\end{pmatrix},&\text{arg}\zeta=-\frac{\pi}{4},\\ \\ \begin{pmatrix}1&\frac{r_{0}^{*}}{1+|r_{0}|^{2}}\zeta^{2i\kappa}e^{-i\zeta^{2}/2}\\ 0&1\end{pmatrix},&\text{arg}\zeta=\frac{3\pi}{4},\\ \\ \begin{pmatrix}1&0\\ \frac{r_{0}}{1+|r_{0}|^{2}}\zeta^{-2i\kappa}e^{i\zeta^{2}/2}&1\end{pmatrix},&\text{arg}\zeta=-\frac{3\pi}{4},\end{cases}\] (A.3)
_See Figure A.1._
The solution of the RH problem A.1 can be given by
\[M^{pc}(\zeta,r)=I+\frac{1}{\zeta}\begin{pmatrix}0&-i\beta_{12}\\ i\beta_{21}&0\end{pmatrix}+\mathcal{O}(\zeta^{-2}),\] (A.4)
where \(\beta_{12}\) and \(\beta_{21}\) are the complex constants
\[\beta_{12} =\frac{-i\sqrt{2\pi}e^{i\pi/4}e^{-\pi\kappa/2}}{r_{0}\Gamma(-i \kappa)},\] \[\beta_{21} =\frac{i\sqrt{2\pi}e^{-i\pi/4}e^{-\pi\kappa/2}}{r_{0}^{*}\Gamma( i\kappa)}=\frac{\kappa}{\beta_{12}}.\] (A.5)
It can be shown that
**Lemma A.1**.: _Let \(\rho=||r||_{L^{\infty}(\mathbb{R})}\), there exists a \(C\) such that_
\[|M^{pc}(\zeta)|\leq C,\quad\text{for all }\zeta\notin\mathbb{R},\] (A.6) \[\left|M^{pc}(\zeta)-I-\zeta^{-1}{M^{pc}}_{1}\right|\leq C\rho| \zeta|^{-2}\quad\text{for }\ |\zeta|\geq 1.\] (A.7)
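As an illustrative aside, the moduli of the constants in (A.5) are governed by the classical identity \(|\Gamma(i\kappa)|^{2}=\pi/(\kappa\sinh(\pi\kappa))\) for real \(\kappa\neq 0\). The following short Python check (added here for illustration only; the test values of \(r_{0}\) are arbitrary choices) confirms this identity numerically and evaluates \(\beta_{12}\) directly from (A.5).

```python
import numpy as np
from scipy.special import gamma   # Gamma function for complex arguments

for r0 in (0.5, 1.0 + 0.3j):                              # arbitrary test values of r_0
    kappa = -np.log(1 + abs(r0) ** 2) / (2 * np.pi)       # kappa as in (A.2)
    lhs = abs(gamma(1j * kappa)) ** 2
    rhs = np.pi / (kappa * np.sinh(np.pi * kappa))
    # beta_12 evaluated directly from the explicit formula (A.5)
    beta12 = (-1j * np.sqrt(2 * np.pi) * np.exp(1j * np.pi / 4)
              * np.exp(-np.pi * kappa / 2) / (r0 * gamma(-1j * kappa)))
    print("kappa =", kappa)
    print("  |Gamma(i kappa)|^2 =", lhs, "   pi/(kappa sinh(pi kappa)) =", rhs)
    print("  beta_12 =", complex(beta12))
```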
## Appendix B RH problem under BT
The following calculations are standard and can be found in [25]. Let \(q(x)\in H^{1,1}(\mathbb{R})\) be given and consider the associated ZS-AKNS operator and its reflection coefficient function \(r(z)\in H^{1,1}(\mathbb{R})\). Suppose that for each \(x\in\mathbb{R}\), the \(2\times 2\) matrix \(\Phi(x,t;z)=\Psi(x,t;z)e^{it\theta(z)\sigma_{3}}\) solves the corresponding RH problem with a finite number of simple bound states at
\[\mathcal{Z}=\{z\,|\,z=z_{1},\ldots,z_{n}\in\mathbb{C}^{+}\},\ \mathcal{Z}^{*}=\{z|z=z_{1}^{*},\ldots,z_{n}^{*}\in\mathbb{C}_{-}\}.\]
Figure A.1: The contours \(\Sigma_{j}\) and sectors \(\Omega_{j}\) in the \(\zeta\)-plane defining RHP A.1
**RHP Appendix B.1**.: _Find an analytic function \(\Psi:\mathbb{C}\setminus(\mathbb{R}\cup\mathcal{Z}\cup\mathcal{Z}^{*})\to SL_{2}( \mathbb{C})\) with the following properties_
1. \(\Psi(x,z)e^{it\theta\sigma_{3}}=I+\mathcal{O}(z^{-1})\) _as_ \(z\to\infty\)_._
2. \(\Psi(x,z)e^{it\theta\sigma_{3}}\) _takes continuous boundary values_ \(\Psi_{\pm}(x,z)e^{it\theta\sigma_{3}}\) _which satisfy the jump relation_ \(\Psi_{+}(x,z)e^{it\theta\sigma_{3}}=\Psi_{-}(x,z)e^{it\theta\sigma_{3}}V(z)\) _where_ \[V(z)=\begin{pmatrix}1+|r(z)|^{2}&r^{*}(z)\\ r(z)&1\end{pmatrix}.\] (B.1)
3. \(\Psi(z)\) _has simple poles at each_ \(z_{k}\in\mathcal{Z}\) _and_ \(z_{k}^{*}\in\mathcal{Z}^{*}\) _(_\(1\leq k\leq n\)_) at which_ \[\underset{z=z_{k}}{\text{Res}}\Psi(x,z)e^{it\theta\sigma_{3}}=\underset{z\to z _{k}}{\text{lim}}\Psi(x,z)e^{it\theta\sigma_{3}}\begin{pmatrix}0&0\\ c_{k}&0\end{pmatrix},\] (B.2) \[\underset{z=z_{k}^{*}}{\text{Res}}\Psi(x,z)e^{it\theta\sigma_{3}}=\underset{z \to z_{k}^{*}}{\text{lim}}\Psi(x,z)e^{it\theta\sigma_{3}}\begin{pmatrix}0&-c_ {k}^{*}\\ 0&0\end{pmatrix}.\] (B.3)
The goal is to add in another simple bound state at \(z=\xi\in\mathbb{C}^{+}\setminus\{z_{1},\ldots,z_{n}\}\) and simultaneously at \(z=\xi^{*}\in\mathbb{C}^{-}\setminus\{z_{1}^{*},\ldots,z_{n}^{*}\}.\) We use a Darboux transformation \((z+P)(\partial_{x}-L)=(\partial_{x}-\overline{L})(z+P)\), where \(P\) can be chosen in the form \(P=\mathfrak{b}(x)P_{0}\mathfrak{b}^{-1}(x)\) with \(P_{0}\) a constant matrix, and \(\mathfrak{b}=\mathfrak{b}(x)\) solves the equation \(\mathfrak{b}^{\prime}=Q\mathfrak{b}-i\frac{\sigma_{3}}{2}\mathfrak{b}P_{0}\); the appropriate choice here is \(P_{0}=-\begin{pmatrix}\xi&0\\ 0&\overline{\xi}\end{pmatrix}\), where \(\mathfrak{b}\) is determined below. Set
\[\tilde{\Psi}(x,z)=\mathfrak{b}(x)\mu(z)\mathfrak{b}^{-1}(x)\Psi(x,z)\mu^{-1}( z),\] (B.4)
where \(\mu(z)=z+P_{0}=\begin{pmatrix}z-\xi&0\\ 0&z-\xi^{*}\end{pmatrix}\). Note that \(\tilde{\Psi}(x,z)e^{it\theta\sigma_{3}}\to I\) as \(z\to\infty\). Let \(\tilde{c}(\xi)\) be any nonzero constant. We want to choose \(\mathfrak{b}(x)\) so that \(\tilde{\Psi}\) has a simple pole in the first column at \(z=\xi\) and a simple pole in the second column at \(z=\xi^{*}\) such that for \(x\in\mathbb{R}\),
\[\underset{z=\xi}{\text{Res}}\tilde{\Psi}e^{it\theta\sigma_{3}}=\underset{z\to \xi}{\text{lim}}\tilde{\Psi}e^{it\theta\sigma_{3}}\begin{pmatrix}0&0\\ \tilde{c}(\xi)&0\end{pmatrix},\]
\[\underset{z=\xi^{*}}{\text{Res}}\tilde{\Psi}e^{it\theta\sigma_{3}}=\underset{z \to\xi^{*}}{\text{lim}}\tilde{\Psi}e^{it\theta\sigma_{3}}\begin{pmatrix}0&-c^ {*}(\xi)\\ 0&0\end{pmatrix}.\]
Since
\[\mathfrak{b}^{-1}\tilde{\Psi}=\begin{pmatrix}(\mathfrak{b}^{-1}\Psi)_{11}&( \mathfrak{b}^{-1}\Psi)_{12}\frac{z-\xi}{z-\xi^{*}}\\ (\mathfrak{b}^{-1}\Psi)_{21}\frac{z-\xi^{*}}{z-\xi}&(\mathfrak{b}^{-1}\Psi)_{2 1}\end{pmatrix},\]
we have
\[\operatorname*{Res}_{z_{\xi}}\mathfrak{b}^{-1}\tilde{\Psi}=\begin{pmatrix}0&0\\ (\mathfrak{b}^{-1}\Psi)_{21}(x,\xi)(\xi-\xi^{*})&0\end{pmatrix},\]
but
\[\lim_{z\to\xi}\mathfrak{b}^{-1}\tilde{\Psi}\begin{pmatrix}0&0\\ \tilde{c}(\xi)&0\end{pmatrix}=\begin{pmatrix}0&0\\ \tilde{c}(\xi)(\mathfrak{b}^{-1}\Psi)_{22}(x,\xi)&0\end{pmatrix},\]
and hence we must have
\[(\xi-\xi^{*})(e_{2},\mathfrak{b}^{-1}\Psi(x,\xi)e_{1})=\tilde{c}(\xi)(e_{2}, \mathfrak{b}^{-1}\Psi(x,\xi)e_{2}).\]
Therefore, it follows necessarily that
\[\mathfrak{b}(x)e_{1}=c_{1}(x)\big{(}\Psi(x,\xi)e_{1}-\frac{\tilde{c}(\xi)}{ \xi-\xi^{*}}\Psi(x,\xi)e_{2}\big{)},\]
for some nonzero function \(c_{1}(x)\). Similarly for \(z=\xi^{*}\), we see that
\[\mathfrak{b}(x)e_{2}=c_{2}(x)\big{(}-\frac{\tilde{c}^{*}(\xi)}{\xi-\xi^{*}} \Psi(x,\xi^{*})e_{1}+\Psi(x,\xi^{*})e_{2}\big{)},\]
for some nonzero function \(c_{2}(x)\). Observe that \(c_{1}(x)\) and \(c_{2}(x)\) factor out in the formula (B.4) for \(\tilde{\Psi}\). Set
\[\mathfrak{b}=\big{(}\Psi(x,\xi)\begin{pmatrix}1\\ -\frac{\tilde{c}(\xi)}{\xi-\xi^{*}}\end{pmatrix}\quad\Psi(x,\xi^{*})\begin{pmatrix} -\frac{\tilde{c}^{*}(\xi)}{\xi-\xi^{*}}\\ 1\end{pmatrix}\big{)}.\] (B.5)
From the symmetry we see that \(\mathfrak{b}_{2}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\mathfrak{b}_{1}^{*}\), where \(\mathfrak{b}=(\mathfrak{b}_{1},\mathfrak{b}_{2})\). Thus, \(\det\mathfrak{b}(x)=|(\mathfrak{b}_{1})_{1}(x)|^{2}+|(\mathfrak{b}_{1})_{2}(x)|^{2}>0\) and hence \(\mathfrak{b}(x)\) is invertible for all \(x\in\mathbb{R}\). The jump matrix \(\tilde{V}\) for \(\tilde{\Psi}(x,z)\) is given by
\[\tilde{V}(z) =\tilde{\Psi}_{-}^{-1}(x,z)\tilde{\Psi}_{+}(x,z)=\mu(z)V(z)\mu^{ -1}(z)\] (B.6) \[=\begin{pmatrix}1+|\tilde{r}(z)|^{2}&\tilde{r}^{*}(z)\\ \tilde{r}(z)&1\end{pmatrix},\qquad z\in\mathbb{R},\] (B.7)
where
\[\tilde{r}(z)=r(z)\frac{z-\xi}{z-\xi^{*}}.\]
A straightforward calculation shows that for \(1\leq k\leq n\),
\[\operatorname*{Res}_{z_{k}}\tilde{\Psi} =\lim_{z\to z_{k}}\tilde{\Psi}\begin{pmatrix}0&0\\ \tilde{c}_{k}(z_{k})&0\end{pmatrix},\] \[\operatorname*{Res}_{z_{k}^{*}}\tilde{\Psi} =\lim_{z\to z_{k}^{*}}\tilde{\Psi}\begin{pmatrix}0&-\tilde{c}_{k}(z _{k})^{*}\\ 0&0\end{pmatrix},\] (B.8)
where
\[\tilde{c}(z_{k})=c(z_{k})\frac{z_{k}-\xi^{*}}{z_{k}-\xi}.\]
The above calculations show that \(\tilde{M}(x,z)=\tilde{\Psi}(x,z)e^{-it\theta\sigma_{3}}\) solves the RH Problem 2.1. Note that
\[\tilde{a}(z)=\frac{z-\xi}{z-\xi^{*}}a(z),\]
where \(a(z)\) and \(\tilde{a}(z)\) are the scattering functions for \(\Psi(x,z)\) and \(\tilde{\Psi}(x,z)\), respectively. From the fact that \(\Psi_{x}+iz[\sigma_{3},\Psi]=Q\Psi,\ Q=\begin{pmatrix}0&q\\ -q^{*}&0\end{pmatrix}\), we have
\[Q=-i[\sigma_{3},\Phi_{1}],\quad\Psi=I+\frac{\Phi_{1}}{z}+\mathcal{O}(z^{-2}),\]
as \(z\to\infty\) in any cone \(|\mathrm{Im}z|>c|\mathrm{Re}z|\), \(c>0\). Let \(\mu_{1}=\begin{pmatrix}\xi&0\\ 0&\xi^{*}\end{pmatrix}.\) For \(\tilde{\Psi}(x,z)=\tilde{\Phi}(x,z)e^{-it\theta\sigma_{3}}\)
\[\tilde{\Phi} =\mathfrak{b}\big{(}I-\frac{\mu_{1}}{z}\big{)}\mathfrak{b}^{-1}\big{(}I+\frac{\Phi_{1}}{z}+\mathcal{O}(z^{-2})\big{)}\big{(}I-\frac{\mu_{1}}{z}\big{)}^{-1}\] (B.9) \[=I+\frac{\Phi_{1}-\mathfrak{b}\mu_{1}\mathfrak{b}^{-1}+\mu_{1}}{z}+\mathcal{O}(z^{-2}),\] (B.10)
and hence
\[\tilde{q}(x) =-i[\sigma_{3},\Phi_{1}-\mathfrak{b}\mu_{1}\mathfrak{b}^{-1}+\mu_{1}]_{12}\] (B.11) \[=q(x)+i(\xi-\xi^{*})\frac{(\mathfrak{b}_{1})_{1}(\mathfrak{b}_{1})_{2}^{*}}{|(\mathfrak{b}_{1})_{1}|^{2}+|(\mathfrak{b}_{1})_{2}|^{2}}.\] (B.12)
One can also use Darboux transformations similar to (B.4) to remove eigenvalues. We do not provide any further details, except to note that at each step, if the poles at \(z=z_{k},z_{k}^{*}\) are removed, then \(r(z)\to\tilde{r}(z)\frac{z-z_{k}^{*}}{z-z_{k}}\), etc.
**Acknowledgements**
This work is supported by the National Natural Science Foundation of China (Grant No. 12271104, 51879045).
**Data Availability Statements**
The data that support the findings of this study are available within the article.
**Conflict of Interest**
The authors have no conflicts to disclose. |
2310.15278 | Phase Transitions In An Implicit Solvent Minimal Model Of Lipids: Role
Of Head-Tail Size Ratio | We present Monte Carlo simulations under constant NVT conditions on a minimal
three beads coarse grained implicit solvent model of lipid molecules, with the
hydrophilic head represented by one bead and the hydrophobic tail represented
by two beads. We consider two lipids, one with the head and tail bead sizes
equal and the other with the tail beads smaller than the head. When cooled to
the ambient temperature from an initial isotropic phase at high temperature,
the first lipid transforms spontaneously to a lamellar phase while the second
lipid transforms to a micellar phase, showing the crucial role of the head and
tail size ratio on lipid phases. | Biplab Bawali, Jayashree Saha, Alokmay Datta | 2023-10-08T12:31:25Z | http://arxiv.org/abs/2310.15278v1 | # Phase Transitions In An Implicit Solvent Minimal Model Of Lipids: Role Of Head-Tail Size Ratio
###### Abstract
We present Monte Carlo simulations under constant NVT conditions on a minimal 'three beads' coarse grained implicit solvent model of lipid molecules, with the hydrophilic head represented by one bead and the hydrophobic tail represented by two beads. We consider two lipids, one with the head and tail bead sizes equal and the other with the tail beads smaller than the head. When cooled to the ambient temperature from an initial isotropic phase at high temperature, the first lipid transforms spontaneously to a lamellar phase while the second lipid transforms to a micellar phase, showing the crucial role of the head-tail size ratio on lipid phases.
Minimal coarse-grained model, implicit solvent model, lipid phases, head-tail size ratio
## Introduction
Phase transitions in lipid assemblies have been at the centre of attention because of their association with biology and medicine [1-8]. Depending upon amphiphile concentration, different phases like micellar, hexagonal and lamellar can be achieved in these systems. Other than lipid concentration, parameters like temperature, pH, the type of amphiphile, water content and additives can change lipid phases.
From a molecular structural perspective, the lipid phase in a solvent depends on the ratio of the size of the head group to that of the aliphatic chain present in the amphiphilic lipid, because the phase structures are determined by the degree of curvature generated by the packing arrangement of the amphiphilic molecule. Low curvatures create lamellar phases, whereas larger increments in curvature result in the formation of micellar phases.
The implicit solvent model accounts for the aqueous solvent by including its effects on the model lipid molecules. Here we present a minimal implicit solvent coarse grained model of lipids which is able to produce different phases by changing head to tail size ratio of the molecules.
## Model
In this model each lipid molecule consists of three spherical atoms (Figure 1). The blue sphere represents the hydrophilic head group while the two red spheres represent the hydrophobic tail group. This model is a modified version of the model proposed by Cooke and Deserno [9]. By eliminating the finitely extensible nonlinear elastic bonds between the beads we have fixed the bond length, which makes the model simpler.
In this model, the effects of the solvent on each molecule are accounted for implicitly in the interactions of the molecules. To incorporate the hydrophilic and hydrophobic effects, each bead interacts with the others via the Weeks-Chandler-Andersen (WCA) repulsive potential
\[\phi\left(r;\sigma_{ij}\right)=\begin{cases}4\varepsilon\left(\left(\frac{\sigma_{ij}}{r}\right)^{12}-\left(\frac{\sigma_{ij}}{r}\right)^{6}\right)+\varepsilon,&r\leq r_{c}\\ 0,&r>r_{c}\end{cases}\] (a)
To stabilize the lipid structure, an extra attractive potential between all tail beads is considered.
Figure 1: The three-atom coarse-grained lipid model. Lipid 1 has a longer tail, whereas Lipid 2 has a smaller tail. Variation in the length of the tail chain is modeled by varying the diameter of the tail beads.
\[\nu_{tail,tail}(r)=\begin{cases}-\varepsilon,&r<r_{c}+w_{f}\\ 4\varepsilon\left[\left(\frac{\sigma_{ij}}{r-w_{f}}\right)^{12}-\left(\frac{\sigma_{ij}}{r-w_{f}}\right)^{6}\right],&r_{c}+w_{f}\leq r\leq w_{f}+w_{cut}\\ 0,&r>w_{f}+w_{cut}\end{cases}\] (b)
with \(r_{c}=2^{1/6}\sigma_{ij}\), where \(\sigma_{ij}\) is the effective diameter for each pair of beads. We have considered two lipids (Lipid-1 and Lipid-2). For Lipid-1, \(\sigma_{ij}\) is chosen as \(\sigma_{head,head}=0.95\sigma\), \(\sigma_{head,tail}=0.975\sigma\), \(\sigma_{tail,tail}=\sigma\), i.e. head and tails have the same size. For Lipid-2, \(\sigma_{head,head}=0.95\sigma\), \(\sigma_{head,tail}=0.875\sigma\), \(\sigma_{tail,tail}=0.80\sigma\), i.e. the tails are smaller than the head. Here \(\sigma\) is the unit of length and \(\varepsilon\) is the unit of energy. The cutoff is \(w_{cut}=2.5\sigma\) and the width parameter \(w_{f}\) is chosen as \(w_{f}=0.4\sigma\). It should be mentioned that Lipid-2 may represent a different conformation of Lipid-1 with bond disorder in the tails.
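For illustration, the two pair potentials above can be written down directly in code. The Python sketch below is an illustrative reconstruction of Eqs. (a) and (b) with the parameter values quoted in the text; the function and variable names are our own, and the usage loop assumes that the attractive term is simply added to the WCA repulsion for tail-tail pairs, as the text suggests.

```python
EPS = 1.0                 # energy unit epsilon
SIGMA = 1.0               # length unit sigma
W_F, W_CUT = 0.4 * SIGMA, 2.5 * SIGMA

# effective bead diameters sigma_ij for the two lipids (values from the text)
LIPID1 = {"hh": 0.95 * SIGMA, "ht": 0.975 * SIGMA, "tt": 1.00 * SIGMA}
LIPID2 = {"hh": 0.95 * SIGMA, "ht": 0.875 * SIGMA, "tt": 0.80 * SIGMA}

def wca(r, sigma_ij, eps=EPS):
    """Repulsive WCA potential, Eq. (a): LJ shifted up by eps and cut at r_c = 2^(1/6) sigma_ij."""
    r_c = 2.0 ** (1.0 / 6.0) * sigma_ij
    if r > r_c:
        return 0.0
    sr6 = (sigma_ij / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6) + eps

def tail_tail_attraction(r, sigma_ij, eps=EPS, w_f=W_F, w_cut=W_CUT):
    """Attractive tail-tail potential, Eq. (b): flat -eps well of width w_f, then a shifted LJ tail."""
    r_c = 2.0 ** (1.0 / 6.0) * sigma_ij
    if r < r_c + w_f:
        return -eps
    if r <= w_f + w_cut:
        sr6 = (sigma_ij / (r - w_f)) ** 6
        return 4.0 * eps * (sr6 ** 2 - sr6)
    return 0.0

# total tail-tail interaction for Lipid 1 at a few separations
for r in (0.9, 1.2, 1.6, 2.5):
    u = wca(r, LIPID1["tt"]) + tail_tail_attraction(r, LIPID1["tt"])
    print(f"r = {r:4.2f} sigma   u_tail-tail = {u:+.4f} eps")
```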
## Result
We have a system of 800 lipid molecules within a cubic box of side \(21.0\,\sigma\). We carried out Monte Carlo (MC) simulations on the system for the two different types of lipid, under constant NVT conditions. First we obtained the isotropic phase at a sufficiently high temperature for both lipids (Figure 2(a)). We then lowered the temperature until it reached the reduced temperature 1.0, corresponding to ambient temperature. We then kept the temperature fixed and simulated both systems further with a larger number of Monte Carlo steps under identical conditions, without changing any other parameter. A minimal sketch of a single trial move is shown below.
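The sketch below is purely illustrative Python showing one constant-NVT Metropolis trial move; the actual move set, energy routine and acceptance bookkeeping of our production runs are not specified here, so `total_energy`, `beta` and the displacement amplitude are placeholder assumptions.

```python
import math, random

def metropolis_step(positions, total_energy, beta, max_disp=0.1):
    """Attempt one single-bead displacement and accept it with the Metropolis rule."""
    i = random.randrange(len(positions))
    old = positions[i]
    trial = tuple(x + random.uniform(-max_disp, max_disp) for x in old)

    e_old = total_energy(positions)
    positions[i] = trial
    e_new = total_energy(positions)

    if e_new <= e_old or random.random() < math.exp(-beta * (e_new - e_old)):
        return True                      # accept the move
    positions[i] = old                   # reject: restore the old position
    return False
```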
The systems relaxed to the lamellar phase for Lipid-1 (Figure 2(b)) and the micellar phase for Lipid-2 (Figure 2(c)) after 8 lakh (800,000) MC steps. No further change was observed.
## Conclusion
Lipid mesophases depend crucially on the head-tail size ratio of the lipid molecule. Here, for Lipid-1 the effective sizes of the head and tail beads are equal, which creates zero mean curvature and gives rise to the lamellar phase. For Lipid-2 the head group is larger than the tail beads. This causes a large positive curvature, and a micellar phase is observed for Lipid-2. Our minimal coarse-grained implicit solvent model of lipid molecules shows spontaneous phase transitions based entirely on the head-tail size ratio, underscoring this dependence.
## Acknowledgement
B.B gratefully acknowledges the support of Council of Scientific & Industrial Research (CSIR), India, for providing Senior Research Fellowship. A.D. acknowledges the Department of Atomic Energy for a Raja Ramanna Fellowship. OVITO software has been used to visualize the system.
|
2310.12517 | Maximizing weighted sums of binomial coefficients using generalized
continued fractions | Let $m,r\in\mathbb{Z}$ and $\omega\in\mathbb{R}$ satisfy $0\leqslant
r\leqslant m$ and $\omega\geqslant1$. Our main result is a generalized
continued fraction for an expression involving the partial binomial sum $s_m(r)
= \sum_{i=0}^r\binom{m}{i}$. We apply this to create new upper and lower bounds
for $s_m(r)$ and thus for $g_{\omega,m}(r)=\omega^{-r}s_m(r)$. We also bound an
integer $r_0 \in \{0,1,\dots,m\}$ such that
$g_{\omega,m}(0)<\cdots<g_{\omega,m}(r_0-1)\leqslant g_{\omega,m}(r_0)$ and
$g_{\omega,m}(r_0)>\cdots>g_{\omega,m}(m)$. For real $\omega\geqslant\sqrt3$ we
prove that
$r_0\in\{\lfloor\frac{m+2}{\omega+1}\rfloor,\lfloor\frac{m+2}{\omega+1}\rfloor+1\}$,
and also $r_0 =\lfloor\frac{m+2}{\omega+1}\rfloor$ for $\omega\in\{3,4,\dots\}$
or $\omega=2$ and $3\nmid m$. | S. P. Glasby, G. R. Paseman | 2023-10-19T06:40:12Z | http://arxiv.org/abs/2310.12517v3 | # Maximizing weighted sums of binomial coefficients using generalized continued fractions
###### Abstract
Let \(m,r\in\mathbb{Z}\) and \(\omega\in\mathbb{R}\) satisfy \(0\leqslant r\leqslant m\) and \(\omega\geqslant 1\). Our main result is a generalized continued fraction for an expression involving the partial binomial sum \(s_{m}(r)=\sum_{i=0}^{r}\binom{m}{i}\). We apply this to create new upper and lower bounds for \(s_{m}(r)\) and thus for \(g_{\omega,m}(r)=\omega^{-r}s_{m}(r)\). We also bound an integer \(r_{0}\in\{0,1,\ldots,m\}\) such that \(g_{\omega,m}(0)<\cdots<g_{\omega,m}(r_{0}-1)\leqslant g_{\omega,m}(r_{0})\) and \(g_{\omega,m}(r_{0})>\cdots>g_{\omega,m}(m)\). For real \(\omega\geqslant\sqrt{3}\) we prove that \(r_{0}\in\{\lfloor\frac{m+2}{\omega+1}\rfloor,\lfloor\frac{m+2}{\omega+1} \rfloor+1\}\), and also \(r_{0}=\lfloor\frac{m+2}{\omega+1}\rfloor\) for \(\omega\in\{3,4,\ldots\}\) or \(\omega=2\) and \(3\nmid m\).
**Keywords:** partial sum, binomial coefficients, continued fraction, bounds
**2020 Mathematics Subject Classification:** 05A10, 11B65, 11Y65
## 1 Introduction
Given a real number \(\omega\geqslant 1\) and integers \(m,r\) satisfying \(0\leqslant r\leqslant m\), set
\[s_{m}(r):=\sum_{i=0}^{r}\binom{m}{i}\qquad\text{and}\qquad g(r)=g_{\omega,m}(r ):=\omega^{-r}s_{m}(r), \tag{1}\]
where the binomial coefficient \(\binom{m}{i}\) equals \(\prod_{k=1}^{i}\frac{m-k+1}{k}\) for \(i>0\) and \(\binom{m}{0}=1\). The weighted binomial sum \(g_{\omega,m}(r)\) and the partial binomial sum \(s_{m}(r)=g_{1,m}(r)\) appear in many formulas and inequalities, e.g. the cumulative distribution function \(2^{-m}s_{m}(r)\) of a binomial random variable with \(p=q=\frac{1}{2}\) as in Remark 5.3, and the Gilbert-Varshamov bound [6, Theorem 5.2.6] for a code \(C\subseteq\{0,1\}^{n}\). Partial sums of binomial coefficients are found in probability theory, coding theory, group theory, and elsewhere. As \(s_{m}(r)\) cannot be computed exactly for most values of \(r\), it is desirable for certain applications to find simple sharp upper and lower bounds for \(s_{m}(r)\). Our interest in bounding \(2^{-r}s_{m}(r)\) was piqued in [4] by an application to Reed-Muller codes \(\operatorname{RM}(m,r)\), which are linear codes of dimension \(s_{m}(r)\).
Our main result is a generalized continued fraction \(a_{0}+\mathcal{K}_{i=1}^{r}\frac{b_{i}}{a_{i}}\) (using Gauss' Kettenbruch notation) for \(Q:=\frac{(r+1)}{s_{m}(r)}\binom{m}{r+1}\). From this we derive useful approximations to \(Q,2+\frac{Q}{r+1}\), and \(s_{m}(r)\), and with these find a maximizing input \(r_{0}\) for \(g_{\omega,m}(r)\).
The \(j^{\text{th}}\)_tail_ of the generalized continued fraction \(\mathcal{K}_{i=1}^{r}\frac{b_{i}}{a_{i}}\) is denoted by \(\mathcal{T}_{j}\) where
\[\mathcal{T}_{j}:=\mathop{\mathcal{K}}_{i=j}^{r}\frac{b_{i}}{a_{i}}=\cfrac{b_{j}}{a_{j}+\cfrac{b_{j+1}}{a_{j+1}+\cfrac{b_{j+2}}{\ddots+\cfrac{b_{r}}{a_{r}}}}}=\frac{b_{j}}{a_{j}+\mathcal{T}_{j+1}}\qquad\qquad\text{and }1\leqslant j\leqslant r. \tag{2}\]
If \(\mathcal{T}_{j}=\frac{B_{j}}{A_{j}}\), then \(\mathcal{T}_{j}=\frac{b_{j}}{a_{j}+\mathcal{T}_{j+1}}\) shows \(b_{j}A_{j}-a_{j}B_{j}=\mathcal{T}_{j+1}B_{j}\). By convention \(\mathcal{T}_{r+1}=0\).
It follows from \(\binom{m}{r-i}=\binom{m}{r}\prod_{k=1}^{i}\frac{r-k+1}{m-r+k}\) that \(x^{i}\binom{m}{r}\leqslant\binom{m}{r-i}\leqslant y^{i}\binom{m}{r}\) for \(0\leqslant i\leqslant r\) where \(x:=\frac{1}{m}\) and \(y:=\frac{r}{m-r+1}\). Hence \(\frac{1-x^{r+1}}{1-x}\binom{m}{r}\leqslant s_{m}(r)\leqslant\frac{1-y^{r+1}}{1-y}\binom{m}{r}\). These bounds are close if \(\frac{r}{m}\) is near \(0\). If \(\frac{r}{m}\) is near \(\frac{1}{2}\) then better approximations involve the Berry-Esseen inequality [7] to estimate the binomial cumulative distribution function \(2^{-m}s_{m}(r)\). The cumulative distribution function \(\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt\) is used in Remark 5.3 to show that \(|2^{-m}s_{m}(r)-\Phi(\frac{2r-m}{\sqrt{m}})|\leqslant\frac{0.4215}{\sqrt{m}}\) for \(0\leqslant r\leqslant m\) and \(m\neq 0\). Each binomial \(\binom{m}{i}\) can be estimated using Stirling's approximation as in [10, p. 2]: \(\binom{m}{i}=\frac{C^{m}}{\sqrt{2\pi p(1-p)m}}\left(1+O(\frac{1}{m})\right)\) where \(C=C_{i}=\frac{1}{p^{p}(1-p)^{1-p}}\) and \(p=p_{i}=i/m\). However, the sum \(\sum_{i=0}^{r}\binom{m}{i}\) of binomials is harder to approximate. The preprint [11] discusses different approximations to \(s_{m}(r)\).
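As a quick illustration (not needed for the arguments below), the elementary geometric-series bounds above are easy to confirm numerically. The Python sketch below computes \(s_m(r)\) exactly and compares it with the two bounds for a few pairs \((m,r)\) with \(r<(m+1)/2\), so that \(y<1\); the chosen test pairs are arbitrary.

```python
from math import comb

def s(m, r):
    """Exact partial binomial sum s_m(r)."""
    return sum(comb(m, i) for i in range(r + 1))

def geometric_bounds(m, r):
    """Lower/upper bounds (1-x^(r+1))/(1-x)*C(m,r) and (1-y^(r+1))/(1-y)*C(m,r)."""
    x, y = 1.0 / m, r / (m - r + 1.0)
    lower = (1 - x ** (r + 1)) / (1 - x) * comb(m, r)
    upper = (1 - y ** (r + 1)) / (1 - y) * comb(m, r)
    return lower, upper

for m, r in [(20, 4), (50, 10), (100, 30)]:
    lo, hi = geometric_bounds(m, r)
    assert lo <= s(m, r) <= hi
    print(f"m={m}, r={r}: {lo:.3e} <= {s(m, r):.3e} <= {hi:.3e}")
```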
Sums of binomial coefficients modulo prime powers, where \(i\) lies in a congruence class, can be studied using number theory, see [5, p. 257]. Theorem 1.1 below shows how to find excellent rational approximations to \(s_{m}(r)\) via generalized continued fractions.
**Theorem 1.1**.: _Fix \(r,m\in\mathbb{Z}\) where \(0\leqslant r\leqslant m\) and recall that \(s_{m}(r)=\sum_{i=0}^{r}\binom{m}{i}\)._
1. _If_ \(b_{i}=2i(r+1-i)\)_,_ \(a_{i}=m-2r+3i\) _for_ \(0\leqslant i\leqslant r\)_, then_ \[Q:=\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}=a_{0}+\mathop{\mathcal{K}}_{i=1}^{r}\frac{b_{i}}{a_{i}}.\]
2. _If_ \(1\leqslant j\leqslant r\)_, then_ \(\mathcal{T}_{j}=R_{j}/R_{j-1}>0\) _where_ \(R_{j}:=2^{j}j!\sum_{k=0}^{r-j}\binom{r-k}{j}\binom{m}{k}\) _satisfies_ \(b_{j}R_{j-1}-a_{j}R_{j}=R_{j+1}\)_. Also,_ \((m-r)\binom{m}{r}-a_{0}R_{0}=R_{1}\)_._
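Theorem 1.1(1) is straightforward to confirm numerically with exact rational arithmetic; the Python sketch below (an illustration only, with our own function names) evaluates the generalized continued fraction from the bottom up and compares it with \(Q=(r+1)\binom{m}{r+1}/s_m(r)\) for all small cases.

```python
from fractions import Fraction
from math import comb

def Q_direct(m, r):
    s = sum(comb(m, i) for i in range(r + 1))
    return Fraction((r + 1) * comb(m, r + 1), s)

def Q_continued_fraction(m, r):
    # a_i = m - 2r + 3i, b_i = 2i(r + 1 - i); evaluate the tails T_r, ..., T_1 bottom up.
    tail = Fraction(0)
    for i in range(r, 0, -1):
        tail = Fraction(2 * i * (r + 1 - i)) / (m - 2 * r + 3 * i + tail)
    return (m - 2 * r) + tail

for m in range(0, 12):
    for r in range(0, m + 1):
        assert Q_direct(m, r) == Q_continued_fraction(m, r), (m, r)
print("Theorem 1.1(1) verified for all 0 <= r <= m <= 11")
```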
Since \(s_{m}(m)=2^{m}\), it follows that \(s_{m}(m-r)=2^{m}-s_{m}(r-1)\) so we focus on values of \(r\) satisfying \(0\leqslant r\leqslant\lfloor\frac{m}{2}\rfloor\). Theorem 1.1 allows us to find a sequence of successively sharper upper and lower bounds for \(Q\) (which can be made arbitrarily tight), the coarsest being \(m-2r\leqslant Q\leqslant m-2r+\frac{2r}{m-2r+3}\) for \(1\leqslant r<\frac{m+3}{2}\), see Proposition 2.3 and Corollary 2.4.
The fact that the tails \(\mathcal{T}_{1},\ldots,\mathcal{T}_{r}\) are all positive is unexpected as \(b_{i}/a_{i}\) is negative if \(\frac{m+3i}{2}<r\). This fact is crucial for approximating \(\mathcal{T}_{1}=\mathcal{K}_{i=1}^{r}\frac{b_{i}}{a_{i}}\), see Theorem 1.3. Theorem 1.1 implies that \(\mathcal{T}_{1}\mathcal{T}_{2}\cdots\mathcal{T}_{r}=R_{r}/R_{0}\). Since \(R_{0}=s_{m}(r)\), \(R_{r}=2^{r}r!\), \(\mathcal{T}_{j}=\frac{b_{j}}{a_{j}+\mathcal{T}_{j+1}}\) and \(\prod_{j=1}^{r}b_{j}=2^{r}(r!)^{2}\), the surprising factorizations below follow _c.f._ Remark 2.1.
**Corollary 1.2**.: _We have \(s_{m}(r)\prod_{j=1}^{r}\mathcal{T}_{j}=2^{r}r!\) and \(r!s_{m}(r)=\prod_{j=1}^{r}(a_{j}+\mathcal{T}_{j+1})\)._
Suppose that \(\omega>1\) and write \(g(r)=g_{\omega,m}(r)\). We extend the domain of \(g(r)\) by setting \(g(-1)=0\) and \(g(m+1)=\frac{g(m)}{\omega}\) in keeping with (1). As \(g(-1)<g(0)=1\) and \(g(m)>g(m+1)\), there exists some \(r_{0}\in\{0,1,\dots,m\}\) that satisfies
\[g_{\omega,m}(-1)<\dots<g_{\omega,m}(r_{0}-1)\leqslant g_{\omega,m}(r_{0}) \quad\text{and}\quad g_{\omega,m}(r_{0})>\dots>g_{\omega,m}(m+1) \tag{3}\]
and both chains of inequalities are non-empty. Equation (3) _defines \(r_{0}\)_, and says that \(g(r)\) is a _unimodal_ function _c.f._[2].
We use Theorem 1.1 to show that \(r_{0}\) is commonly close to \(r^{\prime}:=\lfloor\frac{m+2}{\omega+1}\rfloor\). We always have \(r^{\prime}\leqslant r_{0}\) (by Lemma 3.3) and though \(r_{0}-r^{\prime}\) approaches \(\frac{m}{2}\) as \(\omega\) approaches \(1\) (see Remark 4.4), if \(\omega\geqslant\sqrt{3}\) then \(0\leqslant r_{0}-r^{\prime}\leqslant 1\) by the next theorem.
**Theorem 1.3**.: _If \(\omega\geqslant\sqrt{3}\), \(m\in\{0,1,\dots\}\) and \(r^{\prime}:=\lfloor\frac{m+2}{\omega+1}\rfloor\), then \(r_{0}\in\{r^{\prime},r^{\prime}+1\}\), that is_
\[g(0)<\dots<g(r^{\prime}-1)\leqslant g(r^{\prime}),\quad\text{and}\quad g(r^{ \prime}+1)>g(r^{\prime}+2)>\dots>g(m).\]
Sharp bounds for \(Q\) seem powerful: they enable short and elementary proofs of results that previously required substantial effort. For example, our proof in [4, Theorem 1.1] for \(\omega=2\) of the formula \(r_{0}=\lfloor\frac{m}{3}\rfloor+1\) involved a lengthy argument, and our first proof of Theorem 1.4 below involved a delicate induction. By this theorem there is a unique maximum, namely \(r_{0}=r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor\) when \(\omega\in\{3,4,5,\dots\}\) and \(\omega\neq m+1\), _c.f._ Remark 3.4. In particular, strict inequality \(g_{\omega,m}(r^{\prime}-1)<g_{\omega,m}(r^{\prime})\) holds.
**Theorem 1.4**.: _Suppose that \(\omega\in\{3,4,5,\dots\}\) and \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor\). Then_
\[g_{\omega,m}(0)<\dots<g_{\omega,m}(r^{\prime}-1)\leqslant g_{\omega,m}(r^{ \prime})>g_{\omega,m}(r^{\prime}+1)>\dots>g_{\omega,m}(m),\]
_with equality if and only if \(\omega=m+1\)._
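As a quick sanity check of Theorems 1.3 and 1.4 (again purely illustrative and not part of the original text), one can brute-force the maximizing input \(r_{0}\) of \(g_{\omega,m}(r)=\omega^{-r}s_{m}(r)\) for rational \(\omega\) and compare it with \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor\):

```python
import math
from fractions import Fraction
from math import comb

def r0(omega, m):
    # largest maximizing input of g(r) = s_m(r) / omega**r on {0,...,m};
    # omega is a Fraction, so all comparisons are exact
    g = [Fraction(sum(comb(m, i) for i in range(r + 1))) / omega ** r for r in range(m + 1)]
    best = max(g)
    return max(r for r in range(m + 1) if g[r] == best)

for omega in (Fraction(7, 4), Fraction(2), Fraction(3), Fraction(4), Fraction(5)):
    for m in range(120):
        r_prime = math.floor(Fraction(m + 2) / (omega + 1))
        gap = r0(omega, m) - r_prime
        assert gap in (0, 1)        # consistent with Theorem 1.3 (omega >= sqrt(3))
        if omega >= 3:
            assert gap == 0         # consistent with Theorem 1.4 (integer omega >= 3)
```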
Our motivation was to analyze \(g_{\omega,m}(r)\) by using estimates for \(Q\) given by the generalized continued fraction in Theorem 1.1. This gives tighter estimates than the method involving partial sums used in [4]. The plots of \(y=g_{\omega,m}(r)\) for \(0\leqslant r\leqslant m\) are highly asymmetrical if \(\omega-1\) and \(m\) are small. However, if \(m\) is large the plots exhibit an 'approximate symmetry' about the vertical line \(r=r_{0}\) (see Figure 1). Our observation that \(r_{0}\) is close to \(r^{\prime}\) for many choices of \(\omega\) was the starting point of our research.
Byun and Poznanovic [2, Theorem 1.1] compute the maximizing input, call it \(r^{*}\), for the function \(f_{m,a}(r):=(1+a)^{-r}\sum_{i=0}^{r}\binom{m}{i}a^{i}\) where \(a\in\{1,2,\dots\}\). Their function equals \(g_{\omega,m}(r)\) only when \(\omega=1+a=2\). Some of their results and methods are similar to those in [4] which studied the case \(\omega=2\). They prove that \(r^{*}=\lfloor\frac{a(m+1)+2}{2a+1}\rfloor\), unless \(m\in\{3,2a+4,4a+5\}\) or \((a,m)=(1,12)\), in which case \(r^{*}=\lfloor\frac{a(m+1)+2}{2a+1}\rfloor-1\).
In Section 2 we prove Theorem 1.1 and record approximations to our generalized continued fraction expansion. When \(m\) is large, the plots of \(y=g_{\omega,m}(r)\) are reminiscent of a normal distribution with mean \(\mu\approx\frac{m}{\omega+1}\). Section 3 proves key lemmas for estimating \(r_{0}\)
and applies Theorem 1.1 to prove Theorem 1.4. Non-integral values of \(\omega\) are considered in Section 4 where Theorem 1.3 is proved. In Section 5 we estimate the maximum height \(g(r_{0})\) using elementary methods and estimations, see Lemma 5.1. A 'statistical' approximation to \(s_{m}(r)\) is given in Remark 5.3, and it is compared in Remark 5.4 to the 'generalized continued fraction approximations' of \(s_{m}(r)\) in Proposition 2.3.
## 2 Generalized continued fraction formulas
In this section we prove Theorem 1.1, namely that \(Q:=\frac{r+1}{s_{m}(r)}\binom{m}{r+1}=a_{0}+\mathcal{T}_{1}\) where \(\mathcal{T}_{1}=\mathcal{K}_{i=1}^{r}\,\frac{b_{i}}{a_{i}}\). The equality \(s_{m}(r)=\frac{r+1}{a_{0}+\mathcal{T}_{1}}\binom{m}{r+1}\) is noted in Corollary 2.2.
A version of Theorem 1.1(a) was announced in the SCS2022 Poster room, created to run concurrently with vICM 2022, see [9].
Proof of Theorem 1.1.: Set \(R_{-1}=Q\,s_{m}(r)=(r+1)\binom{m}{r+1}=(m-r)\binom{m}{r}\) and
\[R_{j}=2^{j}j!\sum_{k=0}^{r-j}\binom{r-k}{j}\binom{m}{k}\qquad\text{for $0\leqslant j \leqslant r+1$.}\]
Clearly \(R_{0}=s_{m}(r)\), \(R_{r+1}=0\) and \(R_{j}>0\) for \(0\leqslant j\leqslant r\). We will prove in the following paragraph that the quantities \(R_{j}\), \(a_{j}=m-2r+3j\), and \(b_{j}=2j(r+1-j)\) satisfy the following \(r+1\) equations, where the first equation (4) is atypical:
\[R_{-1}-a_{0}R_{0}=R_{1}, \tag{4}\] \[b_{j}R_{j-1}-a_{j}R_{j}=R_{j+1}\quad\text{where }1\leqslant j\leqslant r. \tag{5}\]
Assuming (5) is true, we prove by induction that \(\mathcal{T}_{j}=R_{j}/R_{j-1}\) holds for \(r+1\geqslant j\geqslant 1\). This is clear for \(j=r+1\) since \(\mathcal{T}_{r+1}=R_{r+1}=0\). Suppose that \(1\leqslant j\leqslant r\) and \(\mathcal{T}_{j+1}=R_{j+1}/R_{j}\) holds. We show that \(\mathcal{T}_{j}=R_{j}/R_{j-1}\) holds. Using (5) and \(R_{j}>0\) we have \(b_{j}R_{j-1}/R_{j}-a_{j}=R_{j+1}/R_{j}=\mathcal{T}_{j+1}\). Hence \(R_{j}/R_{j-1}=b_{j}/(a_{j}+\mathcal{T}_{j+1})=\mathcal{T}_{j}\), completing the induction. Equation (4) gives \(Q=R_{-1}/R_{0}=a_{0}+R_{1}/R_{0}=a_{0}+\mathcal{T}_{1}\) as claimed. Since \(R_{j}>0\) for \(0\leqslant j\leqslant r\), we have \(\mathcal{T}_{j}=R_{j}/R_{j-1}>0\) for \(1\leqslant j\leqslant r\). This proves the first half of Theorem 1.1(b), and the recurrence \(\mathcal{T}_{j}=b_{j}/(a_{j}+\mathcal{T}_{j+1})\) for \(1\leqslant j\leqslant r\), proves part (a).
We now show that (4) holds. The identity \(R_{0}=2^{0}0!\sum_{k=0}^{r}\binom{m}{k}=s_{m}(r)\) gives
\[R_{-1}-a_{0}R_{0} =(r+1)\binom{m}{r+1}-(m-2r)\sum_{i=0}^{r}\binom{m}{i}\] \[=(r+1)\binom{m}{r+1}-\sum_{i=0}^{r}(-i+m-i-2r+2i)\binom{m}{i}\] \[=\sum_{i=0}^{r}\left[(i+1)\binom{m}{i+1}-(m-i)\binom{m}{i}\right] +2\sum_{i=0}^{r-1}(r-i)\binom{m}{i}.\]
As \((i+1)\binom{m}{i+1}=(m-i)\binom{m}{i}\), we get \(R_{-1}-a_{0}R_{0}=2\sum_{k=0}^{r-1}\binom{r-k}{1}\binom{m}{k}=R_{1}\).
We next show that (5) holds. To simplify our calculations, we divide by \(C_{j}:=2^{j}j!\).
Using \((j+1)\binom{r-k}{j+1}=(r-k-j)\binom{r-k}{j}\) gives
\[\frac{R_{j+1}}{C_{j}} =\sum_{k=0}^{r-j-1}2(j+1)\binom{r-k}{j+1}\binom{m}{k}\] \[=\sum_{k=0}^{r-j}2(r-k-j)\binom{r-k}{j}\binom{m}{k}\] \[=\sum_{k=0}^{r-j+1}(j-k)\binom{r-k}{j}\binom{m}{k}-\sum_{k=0}^{r-j }(k-2r+3j)\binom{r-k}{j}\binom{m}{k}\]
noting that the term with \(k=r-j+1\) in the first sum is zero as \(\binom{j-1}{j}=0\). Writing \(L=\sum_{k=0}^{r-j}(k-2r+3j)\binom{r-k}{j}\binom{m}{k}\) and using the identity \(j\binom{r-k}{j}=(r+1-j-k)\binom{r-k}{j-1}\) gives
\[\frac{R_{j+1}}{C_{j}} =\sum_{k=0}^{r-j+1}\left[(r+1-j-k)\binom{r-k}{j-1}-k\binom{r-k}{j} \right]\binom{m}{k}\ -L\] \[=\sum_{k=0}^{r-j+1}\left[(r+1-j)\binom{r-k}{j-1}-k\binom{r-k}{j-1 }-k\binom{r-k}{j}\right]\binom{m}{k}\ -L\] \[=\sum_{k=0}^{r-j+1}\left[(r+1-j)\binom{r-k}{j-1}-k\binom{r-k+1}{j} \right]\binom{m}{k}\ -L.\]
However, \(k\binom{m}{k}=(m-k+1)\binom{m}{k-1}\), and therefore,
\[\sum_{k=0}^{r-j+1}k\binom{r-k+1}{j}\binom{m}{k} =\sum_{k=1}^{r-j+1}(m-k+1)\binom{r-k+1}{j}\binom{m}{k-1}\] \[=\sum_{\ell=0}^{r-j}(m-\ell)\binom{r-\ell}{j}\binom{m}{\ell}.\]
Thus
\[\frac{R_{j+1}}{C_{j}} =\sum_{k=0}^{r-j+1}(r-j+1)\binom{r-k}{j-1}\binom{m}{k}-\sum_{k=0}^{r-j}(m-k)\binom{r-k}{j}\binom{m}{k}\ -L\] \[=\sum_{k=0}^{r-j+1}(r-j+1)\binom{r-k}{j-1}\binom{m}{k}-\sum_{k=0}^{r-j}(m-k+k-2r+3j)\binom{r-k}{j}\binom{m}{k}\] \[=\sum_{k=0}^{r-j+1}(r-j+1)\binom{r-k}{j-1}\binom{m}{k}-\sum_{k=0}^{r-j}\overbrace{(m-2r+3j)}^{a_{j}}\binom{r-k}{j}\binom{m}{k}\] \[=\overbrace{2j(r-j+1)}^{b_{j}}\,\frac{2^{j-1}(j-1)!}{2^{j}j!}\sum_{k=0}^{r-j+1}\binom{r-k}{j-1}\binom{m}{k}-\sum_{k=0}^{r-j}a_{j}\binom{r-k}{j}\binom{m}{k}.\]
Hence \(\frac{R_{j+1}}{C_{j}}=\frac{b_{j}R_{j-1}}{C_{j}}-\frac{a_{j}R_{j}}{C_{j}}\) for \(1\leqslant j\leqslant r\). When \(j=r\), our convention gives \(R_{r+1}=0\). This proves part (b) and completes the proof of part (a).
**Remark 2.1**.: View \(m\) as an indeterminate, so that \(r!s_{m}(r)\) is a polynomial in \(m\) over \(\mathbb{Z}\) of degree \(r\). The factorization \(r!s_{m}(r)=\prod_{j=1}^{r}(a_{j}+\mathcal{T}_{j+1})\) in Corollary 1.2 involves the rational functions \(a_{j}+\mathcal{T}_{j+1}\). However, Theorem 1.1(b) gives \(\mathcal{T}_{j+1}=\frac{R_{j+1}}{R_{j}}\), so that \(a_{j}+\mathcal{T}_{j+1}=\frac{a_{j}R_{j}+R_{j+1}}{R_{j}}=\frac{b_{j}R_{j-1}}{R_{j}}\). This determines the numerator and denominator of the rational function \(a_{j}+\mathcal{T}_{j+1}\), and explains why \(\prod_{j=1}^{r}(a_{j}+\mathcal{T}_{j+1})=\frac{R_{0}}{R_{r}}\prod_{j=1}^{r}b_{j}=r!s_{m}(r)\). This is different from, but reminiscent of, the ratio \(p_{j+1}/p_{j}\) described on p. 26 of [8]. \(\diamond\)
**Corollary 2.2**.: _If \(r,m\in\mathbb{Z}\) and \(1\leqslant r\leqslant m\), then_
\[s_{m}(r):=\sum_{i=0}^{r}\binom{m}{i}=\frac{(r+1)\binom{m}{r+1}}{m-2r+\mathcal{T}_{1}}\qquad\text{where }\mathcal{T}_{1}=\mathcal{K}_{i=1}^{r}\,\frac{2i(r+1-i)}{m-2r+3i}.\]
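For concreteness, taking \(m=5\) and \(r=2\) gives \(s_{5}(2)=1+5+10=16\), \(a_{0}=1\), \(a_{1}=4\), \(a_{2}=7\) and \(b_{1}=b_{2}=4\), so that

\[a_{0}+\mathcal{K}_{i=1}^{2}\,\frac{b_{i}}{a_{i}}=1+\cfrac{4}{4+\cfrac{4}{7}}=1+\frac{7}{8}=\frac{15}{8}=\frac{3\binom{5}{3}}{16},\]

and Corollary 2.2 recovers \(s_{5}(2)=\frac{3\binom{5}{3}}{1+\frac{7}{8}}=16\), as expected.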
**Corollary 2.4**.: _We have \(m-2r\leq\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\) for \(r\geqslant 0\), and \(\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\leqslant m-2r+\frac{2r}{m-2r+3}\) for \(0\leqslant r<\frac{m+3}{2}\). Hence \(\frac{m+2}{r+1}\leqslant t_{m}(r)+1\) for \(r\geqslant 0\), and_
\[\frac{m+2}{r+1}\leqslant t_{m}(r)+1\leqslant\frac{m+2}{r+1}+\frac{2r}{(r+1)(m -2r+3)}\quad\text{for }0\leqslant r<\frac{m+3}{2}.\]
_Also \(\frac{m+2}{r+1}<t_{m}(r)+1\) for \(r>0\), and the above upper bound is strict for \(1<r<\frac{m+3}{2}\)._
Proof.: We proved \(Q=\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}=(m-2r)+\mathcal{K}_{i=1}^{r}\frac{2i(r+1-i)}{m-2r+3i}\) in Theorem 1.1. Hence \(m-2r=\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\) if \(r=0\) and \(m-2r<\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\) if \(1\leqslant r<\frac{m+3}{2}\) by Proposition 2.3. Clearly \(m-2r<0\leqslant\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\) if \(\frac{m+3}{2}\leqslant r\leqslant m\). Similarly \(\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}=m-2r+\frac{2r}{m-2r+3}\) if \(r=0,1\), and again Proposition 2.3 shows that \(\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}<m-2r+\frac{2r}{m-2r+3}\) if \(1<r<\frac{m+3}{2}\).
The remaining inequalities (and equalities) follow similarly since \(t_{m}(r)+1=2+\frac{\binom{m}{r+1}}{s_{m}(r)}\) and \(2+\frac{m-2r}{r+1}=\frac{m+2}{r+1}\).
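The coarse bounds of Corollary 2.4 can also be tested directly (an illustrative script using exact rational arithmetic; it is not part of the original text):

```python
from fractions import Fraction
from math import comb

for m in range(1, 60):
    for r in range(m + 1):
        s = sum(comb(m, i) for i in range(r + 1))
        Q = Fraction((r + 1) * comb(m, r + 1), s)
        assert m - 2 * r <= Q                                  # lower bound, all r >= 0
        if 2 * r < m + 3:                                      # i.e. r < (m+3)/2
            assert Q <= m - 2 * r + Fraction(2 * r, m - 2 * r + 3)
```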
## 3 Estimating the maximizing input \(r_{0}\)
Fix \(\omega>1\). In this section we consider the function \(g(r)=g_{\omega,m}(r)\) given by (1). As seen in Table 1, it is easy to compute \(g(r)\) if \(r\) is near \(0\) or \(m\). For \(m\) large and \(r\) near \(0\), we have 'sub-exponential' growth \(g(r)\approx\frac{m^{r}}{r!\omega^{r}}\). Similarly for \(r\) near \(m\), we have exponential decay \(g(r)\approx\frac{2^{m}}{\omega^{r}}\). The middle values require more thought.
On the other hand, the plots \(y=g(r)\), \(0\leqslant r\leqslant m\), exhibit a remarkable visual symmetry when \(m\) is large. The relation \(s_{m}(m-r)=2^{m}-s_{m}(r-1)\) and the distorting scale factor of \(\omega^{-r}\) shape the plots. The examples in Figure 1 show an approximate left-right symmetry about a maximizing input \(r\approx\frac{m}{\omega+1}\). It surprised the authors that in many cases there exists a simple exact formula for the maximizing input (it is usually unique as Corollary 3.2 suggests). In Figure 1 we have used different scale factors for the \(y\)-axes. The maximum value of \(g_{\omega,m}(r)\) varies considerably as \(\omega\) varies (_c.f._ Lemma 5.1), so we scaled the maxima (rounded to the nearest integer) to the same height.
**Lemma 3.1**.: _Recall that \(g(r)=\omega^{-r}s_{m}(r)\) by (1) and \(t(r)=\frac{s_{m}(r+1)}{s_{m}(r)}\) by (6)._
1. \(t(r-1)>t(r)>\frac{m-r}{r+1}\) _for_ \(0\leqslant r\leqslant m\) _where_ \(t(-1):=\infty\)_;_
2. \(g(r)<g(r+1)\) _if and only if_ \(t(r)>\omega\)_;_
3. \(g(r)\leqslant g(r+1)\) _if and only if_ \(t(r)\geqslant\omega\)_;_
4. \(g(r)>g(r+1)\) _if and only if_ \(\omega>t(r)\)_;_
5. \(g(r)\geqslant g(r+1)\) _if and only if_ \(\omega\geqslant t(r)\)_;_
Proof.: The result is clear when \(r^{\prime}=0\). If \(r^{\prime}=1\), then \(r^{\prime}\leqslant\frac{m+2}{\omega+1}\) gives \(\omega\leqslant m+1\) or \(g(0)\leqslant g(1)\). Hence \(g(0)<g(1)\) if \(\omega\neq m+1\). Suppose that \(r^{\prime}>1\). By Lemma 3.1(c,f) the chain \(g(0)<\cdots<g(r^{\prime})\) is equivalent to \(g(r^{\prime}-1)<g(r^{\prime})\), that is \(t(r^{\prime}-1)>\omega\). However, \(t(r^{\prime}-1)+1>\frac{m+2}{r^{\prime}}\) by Corollary 2.4 and \(r^{\prime}\leqslant\frac{m+2}{\omega+1}\) implies \(\frac{m+2}{r^{\prime}}\geqslant\omega+1\). Hence \(t(r^{\prime}-1)+1>\omega+1\), so that \(t(r^{\prime}-1)>\omega\) as desired.
Proof of Theorem 1.4.: Suppose that \(\omega\in\{3,4,\dots\}\). Then \(g(0)<\cdots<g(r^{\prime}-1)\leqslant g(r^{\prime})\) by Lemma 3.3 with strictness when \(\omega\neq m+1\). If \(\omega=m+1\), then \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor=1\) and \(g(0)=g(1)\) as claimed. It remains to show that \(g(r^{\prime})>g(r^{\prime}+1)>\cdots>g(m)\). However, we need only prove that \(g(r^{\prime})>g(r^{\prime}+1)\) by Lemma 3.1(f), or equivalently \(\omega>t(r^{\prime})\) by Lemma 3.1(d).
Clearly \(\omega\geqslant 3\) implies \(r^{\prime}\leqslant\frac{m+2}{\omega+1}\leqslant\frac{m+2}{4}\). As \(0\leqslant r^{\prime}<\frac{m+3}{2}\), Corollary 2.4 gives
\[\frac{m+2}{r^{\prime}+1}+\frac{2r^{\prime}}{(r^{\prime}+1)(m-2r^{\prime}+3)} \geqslant t(r^{\prime})+1.\]
Hence \(\omega+1>t(r^{\prime})+1\) holds if \(\omega+1>\frac{m+2}{r^{\prime}+1}+\frac{2r^{\prime}}{(r^{\prime}+1)(m-2r^{\prime }+3)}\). Since \(\omega+1\) is an integer, we have \(m+2=r^{\prime}(\omega+1)+c\) where \(0\leqslant c\leqslant\omega\). It follows from \(0\leqslant r^{\prime}\leqslant\frac{m+2}{4}\) that \(\frac{2r^{\prime}}{m-2r^{\prime}+3}<1\). This inequality and \(m+2\leqslant r^{\prime}(\omega+1)+\omega\) gives
\[m+2+\frac{2r^{\prime}}{m-2r^{\prime}+3}<r^{\prime}(\omega+1)+\omega+1=(r^{ \prime}+1)(\omega+1).\]
Thus \(\omega+1>\frac{m+2}{r^{\prime}+1}+\frac{2r^{\prime}}{(r^{\prime}+1)(m-2r^{ \prime}+3)}\geqslant t(r^{\prime})+1\), so \(\omega>t(r^{\prime})\) as required.
**Remark 3.4**.: The proof of Theorem 1.4 can be adapted to the case \(\omega=2\). If \(m+2=3r^{\prime}+c\) where \(c\leqslant\omega-1=1\), then \(\frac{2r^{\prime}}{m-2r^{\prime}+3}=\frac{2r^{\prime}}{r^{\prime}+c+1}<2\), and if \(c=\omega=2\), then a sharper \(\mathcal{H}_{2}\)-bound must be used. This leads to a much shorter proof than [4, Theorem 1.1]. \(\diamond\)
## 4 Non-integral values of \(\omega\)
In this section, we prove that the maximum value of \(g(r)\) is \(g(r^{\prime})\) or \(g(r^{\prime}+1)\) if \(\omega\geqslant\sqrt{3}\). Before proving this result (Theorem 1.3), we shall prove two preliminary lemmas.
**Lemma 4.1**.: _Suppose that \(\omega>1\) and \(r^{\prime}:=\lfloor\frac{m+2}{\omega+1}\rfloor\). If \(\frac{m+2}{r^{\prime}+1}\geqslant\sqrt{3}+1\), then_
\[g(-1)<g(0)<\cdots<g(r^{\prime}-1)\leqslant g(r^{\prime}),\quad\text{and}\quad g (r^{\prime}+1)>g(r^{\prime}+2)>\cdots>g(m).\]
Proof.: It suffices, by Lemma 3.1(f) and Lemma 3.3 to prove that \(g(r^{\prime}+1)>g(r^{\prime}+2)\). The strategy is to show \(\omega>t(r^{\prime}+1)\), that is \(\omega+1>t(r^{\prime}+1)+1\). However, \(\omega+1>\frac{m+2}{r^{\prime}+1}\), so it suffices to prove that \(\frac{m+2}{r^{\prime}+1}\geqslant t(r^{\prime}+1)+1\). Since \(r^{\prime}+1\leqslant\frac{m+2}{\sqrt{3}+1}<\frac{m+2}{2}\), we can use Corollary 2.4 and just prove that \(\frac{m+2}{r^{\prime}+1}\geqslant\frac{m+2}{r^{\prime}+2}+\frac{2r^{\prime}+2} {(r^{\prime}+2)(m-2r^{\prime}+1)}\). This inequality is equivalent to \(\frac{m+2}{r^{\prime}+1}\geqslant\frac{2r^{\prime}+2}{m-2r^{\prime}+1}\). However, \(\frac{m+2}{r^{\prime}+1}\geqslant\sqrt{3}+1\), so we need only show that \(\sqrt{3}+1\geqslant\frac{2(r^{\prime}+1)}{m-2r^{\prime}+1}\), or equivalently \(m-2r^{\prime}+1\geqslant(\sqrt{3}-1)(r^{\prime}+1)\). This is true since \(\frac{m+2}{r^{\prime}+1}\geqslant\sqrt{3}+1\) implies \(m-2r^{\prime}+1\geqslant(\sqrt{3}-1)r^{\prime}+\sqrt{3}>(\sqrt{3}-1)(r^{ \prime}+1)\).
**Remark 4.2**.: The strict inequality \(g(r^{\prime}-1)<g(r^{\prime})\) holds by Lemma 3.3 if \(r^{\prime}>1\) or \(\omega\neq m+1\). It holds vacuously for \(r^{\prime}=0\). Hence adding the additional hypothesis that \(\omega\neq m+1\) if \(r^{\prime}=1\) to Lemma 4.1 (and Theorem 1.3), we may conclude that the inequality \(g(r^{\prime}-1)\leqslant g(r^{\prime})\) is strict. \(\diamond\)
**Remark 4.3**.: In Lemma 4.1, the maximum can occur at \(r^{\prime}+1\). If \(\omega=2.5\) and \(m=8\), then \(r^{\prime}=\lfloor\frac{10}{3.5}\rfloor=2\) and \(\frac{m+2}{r^{\prime}+1}=\frac{10}{3}\geqslant\sqrt{3}+1\); however, \(g_{2.5,8}(2)=\frac{740}{125}<\frac{744}{125}=g_{2.5,8}(3)\). \(\diamond\)
**Remark 4.4**.: The gap between \(r^{\prime}\) and the largest maximizing input \(r_{0}\) can be arbitrarily large if \(\omega\) is close to \(1\). For \(\omega>1\), we have \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor<\frac{m+2}{2}\). If \(1<\omega\leqslant\frac{1}{1-2^{-m}}\), then \(g(m-1)\leqslant g(m)\), so \(r_{0}=m\). Hence \(r_{0}-r^{\prime}>\frac{m-2}{2}\). \(\diamond\)
**Remark 4.5**.: Since \(r^{\prime}\leqslant\frac{m+2}{\omega+1}<r^{\prime}+1\), we see that \(r^{\prime}+1\approx\frac{m+2}{\omega+1}\), so that \(\frac{m+2}{r^{\prime}+1}\approx\omega+1\). Thus Lemma 4.1 suggests that if \(\omega\gtrsim\sqrt{3}\), then \(g_{\omega,m}(r)\) may have a maximum at \(r^{\prime}\) or \(r^{\prime}+1\). This heuristic reasoning is made rigorous in Theorem 1.3.
**Remark 4.6**.: Theorem 1.1 can be rephrased as \(t_{m}(r)=\frac{s_{m}(r+1)}{s_{m}(r)}=\frac{m-r+1}{r+1}+\frac{K_{m}(r)}{r+1}\) where
\[\mathcal{K}_{m}(r):=\mathcal{K}_{i=1}^{r}\,\frac{2i(r+1-i)}{m-2r+3i}=\cfrac{2r}{m-2r+3+\cfrac{4r-4}{m-2r+6+\cfrac{6r-12}{\ddots}}}. \tag{7}\]
The following lemma repeatedly uses the expression \(\omega>t_{m}(r+1)\). This is equivalent to \(\omega>\frac{m-r}{r+2}+\frac{\mathcal{K}_{m}(r+1)}{r+2}\), that is \((\omega+1)(r+2)>m+2+\mathcal{K}_{m}(r+1)\). \(\diamond\)
**Lemma 4.7**.: _Let \(m\in\{0,1,\dots\}\) and \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor\). If any of the following three conditions are met, then \(g_{\omega,m}(r^{\prime}+1)>\dots>g_{\omega,m}(m)\) holds:_
_(a) \(\omega\geqslant 2\), or (b) \(\omega\geqslant\frac{1+\sqrt{97}}{6}\) and \(r^{\prime}\neq 2\), or (c) \(\omega\geqslant\sqrt{3}\) and \(r^{\prime}\not\in\{2,3\}\)._
Proof.: The conclusion \(g_{\omega,m}(r^{\prime}+1)>\dots>g_{\omega,m}(m)\) holds trivially if \(r^{\prime}+1\geqslant m\). Suppose henceforth that \(r^{\prime}+1<m\). Except for the excluded values of \(r^{\prime},\omega\), we will prove that \(g_{\omega,m}(r^{\prime}+1)>g_{\omega,m}(r^{\prime}+2)\) holds, as this implies \(g_{\omega,m}(r^{\prime}+1)>\dots>g_{\omega,m}(m)\) by Lemma 3.1(f). Hence we must prove that \(\omega>t_{m}(r^{\prime}+1)\) by Lemma 3.1(d).
Recall that \(r^{\prime}\leqslant\frac{m+2}{\omega+1}<r^{\prime}+1\). If \(r^{\prime}=0\), then \(m+2<\omega+1\), that is \(\omega>m+1>t(1)\) as desired. Suppose now that \(r^{\prime}=1\). There is nothing to prove if \(m=r^{\prime}+1=2\). Assume that \(m>2\). Since \(m+2<2(\omega+1)\), we have \(2<m<2\omega\). The last line of Remark 4.6 and (7) give the desired inequality:
\[\omega>\frac{m}{2}\geqslant\frac{m-1}{3}+\frac{4}{3\left(m-1+\frac{4}{m+2} \right)}=t_{m}(2).\]
In summary, \(g_{\omega,m}(r^{\prime}+1)>\dots>g_{\omega,m}(m)\) holds for all \(\omega>1\) if \(r^{\prime}\in\{0,1\}\).
We next prove \(g_{\omega,m}(r^{\prime}+1)>g_{\omega,m}(r^{\prime}+2)\), or equivalently \(\omega>t_{m}(r^{\prime}+1)\) for \(r^{\prime}\) large enough, depending on \(\omega\). We must prove that \((\omega+1)(r^{\prime}+2)>m+2+\mathcal{K}_{m}(r^{\prime}+1)\) by Remark 4.6. Writing \(m+2=(\omega+1)(r^{\prime}+\varepsilon)\) where \(0\leqslant\varepsilon<1\), our goal, therefore, is to show \((\omega+1)(2-\varepsilon)>\mathcal{K}_{m}(r^{\prime}+1)\). Using (7) gives
\[\mathcal{K}_{m}(r^{\prime}+1)=\frac{2(r^{\prime}+1)}{m-2(r^{\prime}+1)+3+ \mathcal{T}}=\frac{2(r^{\prime}+1)}{(\omega+1)(r^{\prime}+\varepsilon)-2(r^{ \prime}+1)+1+\mathcal{T}}\]
where \(\mathcal{T}>0\) by Theorem 1.1 as \(r^{\prime}>0\). Rewriting the denominator using
\[(\omega+1)(r^{\prime}+\varepsilon)-2(r^{\prime}+1)=(\omega-1)(r^{\prime}+1)-( \omega+1)(1-\varepsilon),\]
our goal \((\omega+1)(2-\varepsilon)>\mathcal{K}_{m}(r^{\prime}+1)\) becomes
\[(\omega+1)(2-\varepsilon)\left[(\omega-1)(r^{\prime}+1)-(\omega+1)(1- \varepsilon)+1+\mathcal{T}\right]>2(r^{\prime}+1).\]
Dividing by \((2-\varepsilon)(r^{\prime}+1)\) and rearranging gives
\[(\omega^{2}-1)+\frac{(\omega+1)(1+\mathcal{T})}{r^{\prime}+1}>\frac{2}{2- \varepsilon}+\frac{(\omega+1)^{2}(1-\varepsilon)}{r^{\prime}+1}.\]
This inequality may be written \((\omega^{2}-1)+\lambda>\frac{2}{2-\varepsilon}+\mu(1-\varepsilon)\) where \(\lambda=\frac{(\omega+1)(1+\mathcal{T})}{r^{\prime}+1}>0\) and \(\mu=\frac{(\omega+1)^{2}}{r^{\prime}+1}>0\). We view \(f(\varepsilon):=\frac{2}{2-\varepsilon}+\mu(1-\varepsilon)\) as a function of a real variable \(\varepsilon\) where \(0\leqslant\varepsilon<1\). However, \(f(\varepsilon)\) is convex as the second derivative \(f^{\prime\prime}(\varepsilon)=\frac{4}{(2-\varepsilon)^{3}}\) is positive for \(0\leqslant\varepsilon<1\). Hence the maximum value occurs at an end point: either \(f(0)=1+\mu\) or \(f(1)=2\). Therefore, it suffices to prove that \((\omega^{2}-1)+\lambda>\max\{2,1+\mu\}\).
If \(2\geqslant 1+\mu\), then the desired bound \((\omega^{2}-3)+\lambda>0\) holds as \(\omega\geqslant\sqrt{3}\). Suppose now that \(2<1+\mu\). We must show \((\omega^{2}-1)+\lambda>1+\mu\), that is \(\omega^{2}-2>\mu-\lambda=\frac{(\omega+1)(\omega-\mathcal{T})}{r^{\prime}+1}\). Since \(\mathcal{T}>0\), a stronger inequality (that implies this) is \(\omega^{2}-2\geqslant\frac{(\omega+1)\omega}{r^{\prime}+1}\). The (equivalent) quadratic inequality \(r^{\prime}\omega^{2}-\omega-2(r^{\prime}+1)\geqslant 0\) in \(\omega\) is true provided \(\omega\geqslant\frac{1+\sqrt{1+8r^{\prime}(r^{\prime}+1)}}{2r^{\prime}}\). This says \(\omega\geqslant 2\) if \(r^{\prime}=2\), and \(\omega\geqslant\frac{1+\sqrt{97}}{6}\) if \(r^{\prime}=3\). If \(r^{\prime}\geqslant 4\), we have
\[\frac{1+\sqrt{1+8r^{\prime}(r^{\prime}+1)}}{2r^{\prime}}=\frac{1}{2r^{\prime}} +\sqrt{\frac{1}{4(r^{\prime})^{2}}+2\left(1+\frac{1}{r^{\prime}}\right)} \leqslant\frac{1}{8}+\sqrt{\frac{1}{64}+\frac{5}{2}}<\sqrt{3}.\]
The conclusion now follows from the fact that \(2>\frac{1+\sqrt{97}}{6}>\sqrt{3}\).
Proof of Theorem 1.3.: By Lemma 4.1 it suffices to show that \(g(r^{\prime}+1)>g(r^{\prime}+2)\) holds when \(r^{\prime}+1<m\) and \(\omega\geqslant\sqrt{3}\). By Lemma 4.7(a), we can assume that \(\sqrt{3}\leqslant\omega<2\) and \(r^{\prime}\in\{2,3\}\). For these choices of \(\omega\) and \(r^{\prime}\), we must show that \(\omega>t_{m}(r^{\prime}+1)\) by Lemma 3.1 for all permissible choices of \(m\). Since \((\omega+1)r^{\prime}\leqslant m+2<(\omega+1)(r^{\prime}+1)\), when \(r^{\prime}=2\) we have \(5<2(\sqrt{3}+1)\leqslant m+2<9\) so that \(4\leqslant m\leqslant 6\). However, \(t_{m}(3)\) equals \(\frac{16}{15},\frac{31}{26},\frac{19}{14}\) for these values of \(m\). Thus \(\sqrt{3}>t_{m}(3)\) holds as desired. Similarly, if \(r^{\prime}=3\), then \(8<3(\sqrt{3}+1)\leqslant m+2<12\) so that \(7\leqslant m\leqslant 9\). In this case \(t_{m}(4)\) equals \(\frac{40}{33},\frac{219}{163},\frac{191}{128}\) for these values of \(m\). In each case \(\sqrt{3}>t_{m}(4)\), so the proof is complete.
**Remark 4.8**.: We place Remark 4.4 in context. The conclusion of Theorem 1.3 remains true for values of \(\omega\) smaller than \(\sqrt{3}\) that are not 'too close to 1', provided \(m\) is 'sufficiently large'. Indeed, by adapting the proof of Lemma 4.7 we can show there exists a sufficiently large integer \(d\) such that \(m>d^{4}\) and \(\omega>1+\frac{1}{d}\) implies \(g(r^{\prime}+d)>g(r^{\prime}+d+1)\). This shows that \(r^{\prime}\leqslant r_{0}\leqslant r^{\prime}+d\), so \(r_{0}-r^{\prime}\leqslant d\). We omit the technical proof of this fact. \(\diamond\)
**Remark 4.9**.: The sequence \(a_{0}+\mathcal{H}_{1},\ldots,a_{0}+\mathcal{H}_{r}\) terminates at \(\frac{r+1}{s_{m}(r)}\binom{m}{r+1}\) by Theorem 1.1. We will not comment here on _how quickly_ the alternating sequence in Proposition 2.3 converges when \(r<\frac{m+3}{2}\). If \(r=m\), then \(a_{0}=-m\) and \(\frac{m+1}{s_{m}(m)}\binom{m}{m+1}=0\), so Theorem 1.1 gives the curious identity \(\mathcal{H}_{m}=\mathcal{K}_{i=1}^{m}\frac{2i(m+1-i)}{3i-m}=m\). If \(\omega\) is less than \(\sqrt{3}\) and 'not too close to 1', then we believe that \(r_{0}\) is approximately \(\lfloor\frac{m+2}{\omega+1}+\frac{2}{\omega^{2}-1}\rfloor\), _c.f._ Remark 4.8. \(\diamond\)
## 5 Estimating the maximum value of \(g_{\omega,m}(r)\)
In this section we relate the size of the maximum value \(g_{\omega,m}(r_{0})\) to the size of the binomial coefficient \(\binom{m}{r_{0}}\). In the case that we know a formula for a maximizing input \(r_{0}\), we can readily estimate \(g_{\omega,m}(r_{0})\) using approximations, such as [10], for binomial coefficients.
**Lemma 5.1**.: _The maximum value \(g_{\omega,m}(r_{0})\) of \(g_{\omega,m}(r)\), \(0\leqslant r\leqslant m\), satisfies_
\[\frac{1}{(\omega-1)\omega^{r_{0}}}\binom{m}{r_{0}+1}<g_{\omega,m}(r_{0}) \leqslant\frac{1}{(\omega-1)\omega^{r_{0}-1}}\binom{m}{r_{0}}.\]
Proof.: Since \(g(r_{0})\) is a maximum value, we have \(g(r_{0}-1)\leqslant g(r_{0})\). This is equivalent to \((\omega-1)s_{m}(r_{0}-1)\leqslant\binom{m}{r_{0}}\) as \(s_{m}(r_{0})=s_{m}(r_{0}-1)+\binom{m}{r_{0}}\). Adding \((\omega-1)\binom{m}{r_{0}}\) to both sides gives the equivalent inequality \((\omega-1)s_{m}(r_{0})\leqslant\omega\binom{m}{r_{0}}\). This proves the upper bound.
Similar reasoning shows that the following are equivalent: (a) \(g(r_{0})>g(r_{0}+1)\); (b) \((\omega-1)s_{m}(r_{0})>\binom{m}{r_{0}+1}\); and (c) \(g_{\omega,m}(r_{0})>\frac{1}{(\omega-1)\omega^{r_{0}}}\binom{m}{r_{0}+1}\).
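Lemma 5.1 can likewise be checked by brute force for small parameters (illustrative only; \(r_{0}\) is taken as the largest maximizing input, in line with (3)):

```python
from fractions import Fraction
from math import comb

def check(omega, m):
    g = [Fraction(sum(comb(m, i) for i in range(r + 1))) / omega ** r for r in range(m + 1)]
    best = max(g)
    r0 = max(r for r in range(m + 1) if g[r] == best)
    lower = Fraction(comb(m, r0 + 1)) / ((omega - 1) * omega ** r0)
    upper = Fraction(comb(m, r0)) / ((omega - 1) * omega ** (r0 - 1))
    assert lower < g[r0] <= upper          # the two bounds of Lemma 5.1

for omega in (Fraction(3, 2), Fraction(2), Fraction(3), Fraction(7, 2)):
    for m in range(1, 80):
        check(omega, m)
```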
In Theorem 1.3 the maximizing input \(r_{0}\) satisfies \(r_{0}=r^{\prime}+d\) where \(d\in\{0,1\}\). In such cases when \(r_{0}\) and \(d\) are known, we can bound the maximum \(g_{\omega,m}(r_{0})\) as follows.
**Corollary 5.2**.: _Set \(r^{\prime}:=\lfloor\frac{m+2}{\omega+1}\rfloor\) and \(k:=m+2-(\omega+1)r^{\prime}\). Suppose that \(r_{0}=r^{\prime}+d\) and \(G=\frac{1}{(\omega-1)\omega^{r_{0}-1}}\binom{m}{r_{0}}\). Then \(0\leqslant k<\omega+1\), \(d\geqslant 0\) and \(1-\frac{1+d+\frac{d+2-k}{\omega}}{r_{0}+1}<\frac{g_{\omega,m}(r_{0})}{G}\leqslant 1\)._
Proof.: By Lemma 3.3, \(r_{0}=r^{\prime}+d\) where \(d\in\{0,1,\dots\}\). Since \(r^{\prime}=\lfloor\frac{m+2}{\omega+1}\rfloor\), we have \(m+2=(\omega+1)r^{\prime}+k\) where \(0\leqslant k<\omega+1\). The result follows from Lemma 5.1 and \(m=(\omega+1)(r_{0}-d)+k-2\) as \(\binom{m}{r_{0}+1}=\frac{m-r_{0}}{r_{0}+1}\binom{m}{r_{0}}\) and \(\frac{m-r_{0}}{r_{0}+1}\) equals
\[\frac{\omega(r_{0}-d)-d+k-2}{r_{0}+1}=\omega-\frac{\omega+\omega d+d+2-k}{r_{ 0}+1}=\omega\left(1-\frac{1+d+\frac{d+2-k}{\omega}}{r_{0}+1}\right).\qed\]
The following remark is an application of the Chernoff bound, _c.f._[11, Section 4]. Unlike Theorem 1.1, it requires the cumulative distribution function \(\Phi(x)\), which is a non-elementary integral, to approximate \(s_{m}(r)\). It seems to give better approximations only for values of \(r\) near \(\frac{m}{2}\), see Remark 5.4.
**Remark 5.3**.: We show how the Berry-Esseen inequality for a sum of binomial random variables can be used to approximate \(s_{m}(r)\). Let \(B_{1},\dots,B_{m}\) be independent identically distributed binomial variables with a parameter \(p\) where \(0<p<1\), so that \(P(B_{i}=1)=p\) and \(P(B_{i}=0)=q:=1-p\). Let \(X_{i}:=B_{i}-p\) and \(X:=\frac{1}{\sqrt{mpq}}(\sum_{i=1}^{m}X_{i})\). Then
\[E(X_{i})=E(B_{i})-p=0,\quad E(X_{i}^{2})=E(B_{i}^{2})=pq,\quad\text{and}\quad E (|X_{i}|^{3})=pq(p^{2}+q^{2}).\]
Hence \(E(X)=\frac{1}{\sqrt{mpq}}(\sum_{i=1}^{m}E(X_{i}))=0\) and \(E(X^{2})=\frac{1}{mpq}(\sum_{i=1}^{m}E(X_{i}^{2}))=1\). By [7, Theorem 2] the Berry-Esseen inequality applied to \(X\) states that
\[|P(X\leqslant x)-\Phi(x)|\leqslant\frac{Cpq(p^{2}+q^{2})}{(pq)^{3/2}\sqrt{m}} =\frac{C(p^{2}+q^{2})}{\sqrt{mpq}}\qquad\text{for all $m\in\{1,2,\dots\}$ and $x\in\mathbb{R}$},\]
where the constant \(C:=0.4215\) is close to the lower bound \(C_{0}=\frac{\sqrt{10}+3}{6\sqrt{2\pi}}=0.4097\cdots\) and \(\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt=\frac{1}{2}(1+\operatorname{erf}(\frac{x}{\sqrt{2}}))\) is the cumulative distribution function of the standard normal distribution.
Writing \(B=\sum_{i=1}^{m}B_{i}\) we have \(P(B\leqslant b)=\sum_{i=0}^{\lfloor b\rfloor}\binom{m}{i}p^{i}q^{m-i}\) for \(b\in\mathbb{R}\). Thus \(X=\frac{B-mp}{\sqrt{mpq}}\) and \(x=\frac{b-mp}{\sqrt{mpq}}\) satisfy
\[\left|P(B\leqslant b)-\Phi\left(\frac{b-mp}{\sqrt{mpq}}\right)\right|\leqslant \frac{C(p^{2}+q^{2})}{\sqrt{mpq}}\qquad\text{for all $m\in\{1,2,\dots\}$ and $b\in\mathbb{R}$}\]
Setting \(p=q=\frac{1}{2}\), and taking \(b=r\in\{0,1,\dots,m\}\) shows
\[\left|2^{-m}s_{m}(r)-\Phi\left(\frac{2r-m}{\sqrt{m}}\right)\right|\leqslant \frac{0.4215}{\sqrt{m}}\qquad\text{for $m\in\{1,2,\dots\}$}.\]
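The final inequality is easy to explore numerically. The snippet below (illustrative only; it uses `math.erf` to evaluate \(\Phi\)) confirms the bound for small \(m\) and reports the largest observed value of \(\sqrt{m}\,|2^{-m}s_{m}(r)-\Phi(\frac{2r-m}{\sqrt{m}})|\):

```python
from math import comb, erf, sqrt

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

worst = 0.0
for m in range(1, 200):
    for r in range(m + 1):
        cdf = sum(comb(m, i) for i in range(r + 1)) / 2.0 ** m
        err = abs(cdf - Phi((2 * r - m) / sqrt(m)))
        assert err <= 0.4215 / sqrt(m)
        worst = max(worst, err * sqrt(m))
print(f"largest observed sqrt(m) * error: {worst:.4f} (bound 0.4215)")
```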
**Remark 5.4**.: Let \(a_{0}+\mathcal{H}_{k}\) be the generalized continued fraction approximation to \(\frac{(r+1)\binom{m}{r+1}}{s_{m}(r)}\) suggested by Theorem 1.1, where \(\mathcal{H}_{k}:=\mathcal{K}_{i=1}^{k}\,\frac{b_{i}}{a_{i}}\), and \(k\) is the depth of the generalized continued fraction. We compare the following two quantities:
\[e_{m,r,k}:=1-\frac{(r+1)\binom{m}{r+1}}{(a_{0}+\mathcal{H}_{k})s_{m}(r)}\qquad \text{and}\qquad E_{m,r}:=\left|1-\frac{2^{m}\Phi(\frac{2r-m}{\sqrt{m}})}{s_{m} (r)}\right|\leqslant\frac{0.4215\cdot 2^{m}}{\sqrt{m}\,s_{m}(r)}.\]
The sign of \(e_{m,r,k}\) is governed by the parity of \(k\) by Proposition 2.3. We shall assume that \(r\leqslant\frac{m}{2}\). As \(\frac{s_{m}(r)}{2^{m}}\) is close to (or exactly) \(\frac{1}{2}\) when \(r=\lfloor\frac{m}{2}\rfloor\), it is clear that the upper bound for \(E_{m,r}\) will be huge unless \(r\) is close to \(\frac{m}{2}\). The computer code [3] verifies that the same is true for \(E_{m,r}\), and shows that \(|e_{m,r,k}|\) is small, even when \(k\) is tiny, except when \(r\) is close to \(\frac{m}{2}\), see Table 2. Hence the 'generalized continued fraction' approximation to \(s_{m}(r)\) is complementary to the 'statistical' approximation, as shown in Table 2. The reader can extend Table 2 by running the code [3] written in the Magma [1] language, using the online calculator [http://magma.maths.usyd.edu.au/calc/](http://magma.maths.usyd.edu.au/calc/), for example.
**Acknowledgment.** SPG gratefully acknowledges support from the ARC Research Council Discovery Project Grant DP190100450. GRP thanks his family.
|
2306.11920 | NILUT: Conditional Neural Implicit 3D Lookup Tables for Image
Enhancement | 3D lookup tables (3D LUTs) are a key component for image enhancement. Modern
image signal processors (ISPs) have dedicated support for these as part of the
camera rendering pipeline. Cameras typically provide multiple options for
picture styles, where each style is usually obtained by applying a unique
handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are
notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is
required. For this reason and other implementation limitations, their use on
mobile devices is less popular. In this work, we propose a Neural Implicit LUT
(NILUT), an implicitly defined continuous 3D color transformation parameterized
by a neural network. We show that NILUTs are capable of accurately emulating
real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles
into a single network with the ability to blend styles implicitly. Our novel
approach is memory-efficient, controllable and can complement previous methods,
including learned ISPs. Code, models and dataset available at:
https://github.com/mv-lab/nilut | Marcos V. Conde, Javier Vazquez-Corral, Michael S. Brown, Radu Timofte | 2023-06-20T22:06:39Z | http://arxiv.org/abs/2306.11920v3 | # NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement
###### Abstract
3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for these as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is required. For this reason and other implementation limitations, their use on mobile devices is less popular.
In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network with the ability to blend styles implicitly. Our novel approach is memory-efficient, controllable and can complement previous methods, including learned ISPs. Code, models and dataset available at: [https://github.com/mv-lab/nilut](https://github.com/mv-lab/nilut).
## 1 Introduction
Image signal processors (ISP) are hardware units used in cameras to process the RAW sensor images to the final output image [18, 24, 39]. The ISP hardware applies a series of processing steps to render the RAW image to its final photo-finished output. 3D lookup tables (**3D LUTs**) are one of the core components used in conventional ISPs. Specifically, a 3D LUT is a global color operator that maps an RGB color to a new RGB color. 3D LUTs are commonly used to model a desired stylistic look as shown in Fig. 1. Most cameras can render the same image with several different picture styles, where each picture style has its own associated 3D LUT to enhance the color and tone [13, 25, 50, 53].
Modern cameras use conventional ISPs [11, 13, 24] and apply further photo-finishing deep-learning models [23, 44]. These usually run on dedicated processors _e.g_. neural processing units (NPUs), with important limitations such as memory allocation and allowed operations [21]. In addition, there is already an active trend to replace many conventional ISP components with deep learning-based algorithms, _e.g_., image denoising [8], color constancy [19], super-resolution [9, 22], and even the entire ISP [28, 31, 32].
In particular, methods for image enhancement via color and tone manipulation are often based on 3D LUTs [48, 50] due to their runtime efficiency. However, many of these methods are not suitable for mobile devices due to their memory limitations (_i.e_. storing multiple 3D LUTs would be too memory exhaustive) and required operations.
**Contribution.** We propose **NILUT**, a novel application of implicit neural representations (**INRs**) [41] for color manipulation. Our NILUT is an implicitly defined, continuous 3D color transformation parameterized by a neural network. NILUTs can accurately mimic existing professional 3D LUTs, as shown in Fig. 2. Moreover, NILUTs can be extended to encode multiple styles into a single network. During inference, the NILUT can be conditioned on a particular picture style, and even blend between multiple styles implicitly. This novel multi-style formulation allows controllable image enhancement and customization. We believe NILUTs can complement previous methods for image enhancement [48, 50, 53, 46], including learned ISPs [20, 23] designed for smartphones. As part of this effort, we provide a dataset of curated 3D LUTs and images for evaluation.
## 2 Related Work
### 3D LUTs for Color Manipulation
3D LUTs are a mechanism to approximate a nonlinear 3D color transform by sparsely sampling the transformation by a discrete 3D lattice [24, 29, 50]. The model is defined as a mapping \(\phi\), usually in the RGB color space, where an input color \(\mathbf{I}=[r,g,b]\) is mapped into \(\mathbf{I}^{\prime}=[r^{\prime},g^{\prime},b^{\prime}]\):
\[\phi:\mathbb{R}^{3}\mapsto\mathbb{R}^{3}\quad\phi(\mathbf{I})=\mathbf{I}^{ \prime}. \tag{1}\]
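To make the lookup operation concrete, the following sketch (our own illustration, not code from any of the cited works; the \(33^{3}\) lattice size and the random test image are placeholder assumptions) applies a discrete 3D LUT to an RGB image using standard trilinear interpolation between the lattice points:

```python
import numpy as np

def apply_3dlut(img, lut):
    """img: (H, W, 3) floats in [0, 1]; lut: (D, D, D, 3) lattice of output colors."""
    d = lut.shape[0]
    x = img * (d - 1)                         # continuous lattice coordinates
    i0 = np.clip(np.floor(x).astype(int), 0, d - 2)   # keep the upper neighbour in range
    f = x - i0                                # fractional offsets, shape (H, W, 3)
    r0, g0, b0 = i0[..., 0], i0[..., 1], i0[..., 2]
    fr, fg, fb = f[..., 0:1], f[..., 1:2], f[..., 2:3]
    out = np.zeros_like(img)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((fr if dr else 1 - fr) *
                     (fg if dg else 1 - fg) *
                     (fb if db else 1 - fb))          # trilinear weights, sum to 1
                out += w * lut[r0 + dr, g0 + dg, b0 + db]
    return out

# toy usage: the identity lattice leaves the image unchanged
d = 33
axis = np.linspace(0.0, 1.0, d)
identity_lut = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
img = np.random.rand(64, 64, 3)
assert np.allclose(apply_3dlut(img, identity_lut), img, atol=1e-6)
```

A stylized LUT simply stores shifted colors at the same lattice positions, so applying it follows the identical interpolation procedure.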
3D LUTs appear at different stages of the image signal processor pipeline, as detailed in [24, 25]. 3D LUTs are often manually created by camera engineers and professional photographers. Methods such as lattice regression [14, 29] parameterize the 3D LUT's lattice from the sparse samples using various regularizations.
More recently, deep learning techniques have been employed for estimating 3D LUTs and learning interpolation and sampling strategies within the LUT with the goal of color image enhancement [30, 46, 49]. For example, Zeng _et al_. [50] proposed a method that learned the parameters of three 3D LUTs and an additional per-image adaption to blend between the LUTs. Similarly, Wang _et al_. [46] proposed a similar idea to learn the parameters of 3D LUTs, but included a spatial blending over the image, effectively implementing a spatially varying 3D LUT. Yang _et al_. AdaInt [48] focused on improving the sampling within the 3D LUTs (based on [50]). They proposed a learned method to improve the classical trilinear interpolation between uniform sampling points in the LUT [29]. Also, Yang _et al_. [49] proposed to learn separated component-correlated sub-transforms as 1D and 3D LUTs.
The aforementioned methods focus on conventional 3D LUTs and their interpolation mechanisms. In contrast, our NILUT is interested in providing the functionality of the 3D LUTs, but as a neural network to be more compatible with NPU-based hardware. In addition, we are interested in encoding multiple picture styles within the same network.
### ISPs and Image Enhancement
In recent years, most low-level computer vision tasks have witnessed promising results from deep learning methods. For example, significant advances in image denoising [8, 51], image deblurring [38], image super-resolution [9, 27], and image enhancement [12, 16, 33, 36, 45, 52, 33] amongst many other tasks. Most of these tasks are crucial stages in modern smartphone ISPs.
Moreover, recent end-to-end learned ISPs [31, 32, 23, 28] have also obtained promising results. Due to this current deep learning trend, smartphone manufacturers are incorporating special neural processing units (NPUs) [2, 3, 4, 5, 6, 7, 21].
This said, the previously introduced methods are designed to run on consumer GPUs (_e.g_. NVIDIA V100), and therefore integrating such powerful tools into a smartphone is extremely challenging, and sometimes impossible due to the memory and computational limitations [21].
Our NILUTs represent a new plug-and-play module for modern deep learning-based ISPs and image enhancement pipelines such as Ignatov _et al_. real-time image super-resolution [22] and end-to-end learned ISPs [20] tested in real commercial smartphones NPUs.
### Implicit Neural Representations
In recent years, implicit neural representations (INRs) [15, 37, 41] have become increasingly popular in image processing as a novel way to parameterize an image. Also known as coordinate-based networks, these approaches use multilayer perceptrons (MLPs) to overfit to the image. Multiple works have demonstrated the potential of MLPs as continuous, memory-efficient implicit representations for images [41, 42]; we find especially inspiring SIREN [41] and Fourier Feature Networks [43]. This technique was also successfully applied to model shapes [15, 34] and 3D scenes [35, 37].
Conventional signal representations are usually discrete _e.g_., an image is a discrete grid of pixels (subspace of \(\mathbb{R}^{2}\)) with output values bounded in an \(\mathbb{R}^{3}\) RGB space. In contrast, INRs parameterize a signal as a continuous function that maps the source domain \(\mathcal{S}\) of the signal (_i.e_., a coordinate) to its corresponding value in the target \(\mathcal{T}\) (_i.e_., the corresponding RGB intensity value). This function is approximated using a neural network, and therefore it is not analytically tractable. This can be formulated as:
\[\Phi:\mathbb{R}^{2}\mapsto\mathbb{R}^{3}\quad\mathbf{x}\rightarrow\Phi( \mathbf{x})=[r,g,b], \tag{2}\]
where \(\Phi\) is the learned INR function, the domains \(\mathcal{S}\in\mathbb{R}^{2}\) and \(\mathcal{T}\in\mathbb{R}^{3}\), the input coordinates \(\mathbf{x}\), and the output RGB value \([r,g,b]\). Note that the behavior of this function is similar to a lookup table--see (1)-- yet being continuous, differentiable, and learnable. Our new application of INRs consists of learning a mapping between two color representations.
Figure 2: **Top:** shows a conventional 3D lookup table able to enhance the color and tone of the image. **Bottom:** shows the same functionality based on the proposed NILUT.
## 3 Neural Implicit LUT
As discussed in the prior section, the most common INRs in the literature [41, 43] are coordinate-based representations of image signals, implemented using MLPs. In this work, our goal is to learn a 3D transformation in the RGB color space, therefore we map \(\mathbb{R}^{3}\) coordinates (_i.e_. color values \(\mathbf{I}\)) from a source \(\mathcal{S}\) to a target \(\mathcal{T}\), being both 3D domains.
Specifically, we want a continuous function \(\Phi\) with the following properties:
\[\Phi:\mathbb{R}^{3}\mapsto\mathbb{R}^{3},\quad\Phi(\mathbf{I})\in[0,1]^{3}, \tag{3}\] \[\Phi(\mathbf{I})\approx\phi(\mathbf{I}), \tag{4}\] \[\nabla\Phi\approx\nabla\phi. \tag{5}\]
We represent the RGB space as a set \(\mathcal{X}=\{x_{i}\}\) of color pixels \(x_{i}=(r_{i},g_{i},b_{i})\). This set contains \(\approx 16\) million elements if we consider the complete RGB space (_i.e_. \(256^{3}\)). To learn our continual representations \(\Phi\) we minimize:
\[\mathcal{L}=\sum_{i}\|\Phi(x_{i})-\phi(x_{i})\|_{1}, \tag{6}\]
where \(\phi\) is the real 3D LUT -see Eq. 1-.
This function \(\Phi\) is an implicit neural representation of a 3D lookup table \(\phi\) and can be formulated as:
\[\Phi(x)=\mathbf{W}_{n}(\varsigma_{n-1}\circ\varsigma_{n-2}\circ \ldots\circ\varsigma_{0})(x)+\mathbf{b}_{n} \tag{7}\] \[\varsigma_{i}(x_{i})=\alpha\left(\mathbf{W}_{i}\mathbf{x}_{i}+ \mathbf{b}_{i}\right),\]
where \(\varsigma_{i}\) are the layers of the network (considering their corresponding weight matrix \(\mathbf{W}\) and bias \(\mathbf{b}\)), and \(\alpha\) is a nonlinear activation. We study three different networks: (i) simple ReLU-MLPs or Tanh-MLPs [41], (ii) SIREN [41], and (iii) Residual MLPs (referred to as MLP-Res).
We note here that SIREN [41] has some drawbacks in terms of its implementation. First, it can not use INRs speed-up techniques, such as the one in [37], and second, their custom activations are still not supported for mobile devices accelerator hardware.
Another alternative to SIREN would be a residual-based MLP (MLP-Res). This approach assumes the 3D LUT does not make drastic color changes (e.g., red becoming blue), but instead performs color manipulation via reasonable displacements (residual) between input and output RGB values. Our visualizations of 3D LUTs in Fig. 3 help to reveal that this is often the case. This approach also serves to help regularize the MLP in regions with no changes (i.e., the residual is 0 over the three channels). Our ablations in Section 4 also reveal this is a good strategy.
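A minimal PyTorch sketch of such a residual network is shown below. This is our own illustrative reconstruction rather than the released implementation: the ReLU activation, layer sizes, and the toy target used in the fitting loop are assumptions.

```python
import torch
import torch.nn as nn

class NILUTRes(nn.Module):
    """Residual MLP (MLP-Res): maps an RGB coordinate to an enhanced RGB value."""
    def __init__(self, hidden=128, n_layers=2):
        super().__init__()
        layers, d = [], 3
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers += [nn.Linear(d, 3)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, rgb):                  # rgb: (N, 3), values in [0, 1]
        return rgb + self.mlp(rgb)           # predict a residual displacement

# fitting sketch for Eq. (6): overfit the mapping on sampled RGB coordinates
model = NILUTRes()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(4096, 3)                      # sampled input colors
y = x ** 0.8                                 # stand-in for phi(x); use real LUT outputs in practice
for step in range(200):
    optimizer.zero_grad()
    loss = (model(x) - y).abs().mean()       # L1 objective as in Eq. (6)
    loss.backward()
    optimizer.step()
```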
**Conditional Neural LUT.** We can further improve the proposed representation to model more complex relationships in the image color domain:
\[\Psi:\mathbb{R}^{3+m}\mapsto\mathbb{R}^{3}\quad\Psi(\mathbf{z})=\mathbf{I}^{ \prime}, \tag{8}\]
where \(\mathbf{z}=[\mathbf{I},\mathbf{c}]\) is the concatenation of the input RGB intensity \(\mathbf{I}\in\mathbb{R}^{3}\), and a condition vector \(\mathbf{c}\in\mathbb{R}^{m}\) with \(m\) possible styles or LUTs using one-hot class encoding. Therefore this continual function \(\Psi\) maps an input intensity \(\mathbf{I}\) into \(\mathbf{I}^{\prime}\) under the condition \(\mathbf{c}\), representing a conditional neural implicit LUT, a more general and powerful representation than the previously introduced NILUT (\(\Phi\)). We illustrate this in Fig. 3, where we show the codification of the style as a condition vector using one-hot encoding. The derivation of our CNILUT also allows us to blend among different styles by just modifying the values of the condition vector. Details of this are provided in Section 4.1.
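A sketch of this conditional variant of Eq. (8) is given below (again our own illustration; fusing the style code by simple concatenation at the input, and the residual output, are assumptions about the design). Selecting a style amounts to passing a one-hot vector \(\mathbf{c}\), while blending amounts to passing soft weights instead:

```python
import torch
import torch.nn as nn

class CNILUT(nn.Module):
    """Conditional NILUT: maps (RGB, style code) -> RGB, cf. Eq. (8)."""
    def __init__(self, n_styles=3, hidden=256, n_layers=2):
        super().__init__()
        layers, d = [], 3 + n_styles
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers += [nn.Linear(d, 3)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, rgb, c):
        # rgb: (N, 3); c: (N, n_styles) one-hot for a single style,
        # or non-negative blending weights summing to one
        z = torch.cat([rgb, c], dim=-1)
        return rgb + self.mlp(z)

net = CNILUT(n_styles=3)
rgb = torch.rand(8, 3)
style_0 = torch.tensor([1.0, 0.0, 0.0]).expand(8, 3)   # select the first learned style
blend = torch.full((8, 3), 1.0 / 3.0)                   # implicit blending of all three styles
out_style, out_blend = net(rgb, style_0), net(rgb, blend)
```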
### Discussion
We end this section by discussing some benefits and differences between NILUTs over conventional 3D LUTs.
Firstly, previous approaches for learning 3D LUTs [48, 50] are notably fast and faithful, allowing real-time processing using regular GPUs (_e.g_. 24GB memory). We understand that a neural network (_e.g_. NILUT) is limited and cannot surpass the efficiency of a lookup operation. However, in the mobile-device scenario, 3D LUTs have two important limitations: **(i)** allocating in memory multiple 3D LUTs (usually 33-dim, thus \(107K\) FP parameters [50]) is not possible due to the hard memory limitations found in mobile device chips _e.g_. NPUs [21]. Note that dedicated processors such as the ISP usually have their own memory and typically only compact models can run on these [20, 22]. **(ii)** many operations, including Zeng _et al_.'s trilinear interpolation on CUDA [50], are not supported by PyTorch Mobile or TFLite, the most common frameworks for developing efficient mobile models. We believe these are the reasons why there are very few works about image enhancement on mobile devices using 3D LUTs. Thus, we aim to complement previous approaches and offer a memory-efficient alternative for mobile devices.
Secondly, NILUTs offer other benefits such as being naturally differentiable, allowing end-to-end learning, and are mobile-ready allowing to complement deep learning-based image processing pipelines as a plug-and-play module, in mobile devices. Finally, NILUTs, as a novel representation, are conditional, allowing a single compact neural network to deal with multiple styles or LUTs that are presented in the imaging process. This clearly contrasts with current LUTs, in which a different process should be run for each specific one. This is key to being memory-efficient as one NILUT can mimic the transformation of five real 3D LUTs. Also, this property allows us to blend among the different styles at inference time in contrast with current LUTs where each blending is individually computed either at the 3D LUT or at the image level (_e.g_. as in [50]).
## 4 Experiments
Learning a complete 3D LUT transformation of the 8-bit RGB space requires \(256^{3}=16.78\) million input (and output) colors. According to Eq. (6), we represent these \(16M\) colors of the RGB space as a set \(\mathcal{X}=\{x_{i}\}\) of size \(16M\times 3\). This set is equivalent to an image of dimension \(4096\times 4096\times 3\) that we call \(\mathcal{M}\). We will denote this image as the RGB map, illustrated in Fig. 4. We then process image \(\mathcal{M}\) using professional image editing software (Adobe Photoshop) and real 3D LUTs designed by photographers. These processed images are reshaped back to dimension \(16M\times 3\), and correspond to \(\phi(x_{i})\) on Eq. (6), _i.e_. our ground-truth for the minimization.
The coordinate-based MLP \(\Phi\), as is standard practice with INRs [41], is trained to "overfit" the mapping between its output for the input colors (\(\Phi(x_{i})\)) and the output of the real LUT (\(\phi(x_{i})\)), as previously introduced in Section 2.3. We provide visualizations of this training for a subset of the colors in \(\mathcal{M}\) in the supplementary material.
Note that using this setup we do not require natural images to learn real 3D LUTs, just the corresponding RGB maps (Halds). This is indeed the way professional photographers create 3D LUTs [1].
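One possible way to prepare these training pairs is sketched below (illustrative only; the file name of the processed map is hypothetical, and in practice the identity Hald is exported, processed with the editing software and the target 3D LUT, and loaded back as the regression target):

```python
import numpy as np

def identity_hald(bit_depth=8):
    """All 2**bit_depth levels per channel arranged as a square 'Hald' image."""
    n = 2 ** bit_depth                               # 256 levels -> 16.78M colors
    side = int(round((n ** 3) ** 0.5))               # 4096 for 8-bit input
    axis = np.arange(n, dtype=np.float32) / (n - 1)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1).reshape(side, side, 3)   # ~200 MB in float32

hald_in = identity_hald()                            # pass this image through the target 3D LUT
# hald_out = imread("hald_style.png") / 255.0        # hypothetical processed map
# x, y = hald_in.reshape(-1, 3), hald_out.reshape(-1, 3)   # coordinate/target pairs for Eq. (6)
```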
**Evaluation.** We consider two different metrics for our work. We report PSNR and CIELAB \(\Delta\)E error. We choose these two measures because i) PSNR is a standard fidelity metric in the literature and ii) \(\Delta\)E is a perceptual color difference metric that measures differences between two colors [40], and is therefore well suited for our problem. Considering that we fit our NILUTs using RGB maps as mentioned before, we evaluate the quality of our NILUT representation in two different ways: (i) RGB mapping quality (Fig. 4), and (ii) using natural unseen images from MIT5K [10].
Figure 4: From left to right, original RGB map, output of 3D LUT βCyberpunkβ, output of 3D LUT style βNightcolorsβ. We can appreciate the color transformation clearly. In graphics, this is referred to as a **Hald** image, a graphical representation of 3D LUT in the form of a color table that contains all of the color gradations of 3D LUT [1].
Figure 3: **Top:** A conventional 3D LUT framework. Each 3D LUT is stored and processed individually. **Bottom:** The new framework introduced by CNILUTs. We use as input both the image and a condition vector (one-hot encoding of the style), allowing for **i)** selection of multiple styles with a single network, and **ii)** blending different 3D LUTs styles by modifying the input condition vector. This happens implicitly without additional computational cost.
Our dataset consists of a set of 5 professional 3D LUTs. We report the average metrics over the five.
**I)** We compare the fidelity between the NILUT and real 3D LUT RGB maps. In more detail, we evaluate the differences between the NILUT-generated map \(\Phi(\mathcal{M})\) and the real 3D LUT output map \(\phi(\mathcal{M})\) -see Eq. (6)-. These results are referred to as PSNR\({}_{\textbf{rgb}}\) and \(\Delta\)E\({}_{\textbf{rgb}}\) in Table 1.
**II)** We randomly selected 100 RAW images from the Adobe MIT5K dataset [10], captured using diverse DSLR cameras. We then processed the images using Adobe Photoshop and the same set of 3D LUTs discussed before. Once a NILUT is fitted using the corresponding RGB map, we apply it to this set of images and measure the fidelity between the real 3D LUT and NILUT processed images. These results are referred to as PSNR\({}_{\textbf{5k}}\) and \(\Delta\)E\({}_{\textbf{5k}}\) in Table 1.
Table 1 presents our results for different configurations of MLPs -a basic MLP, SIREN, and a Residual MLP- under different numbers of neurons (N) and layers (L). Note that differences in \(\Delta\)E smaller than 2 are indistinguishable by human observers [40]. Also, results with more than 40 dB PSNR are considered of high quality. Based on the results in Table 1, we can affirm that NILUTs can mimic almost perfectly the RGB transformation of real 3D LUTs on natural images from photographers [10].
We also present in Fig. 5 results for the case of MLP-Res \(N=128\) and \(L=2\). We show from left to right, the input image, the ground-truth image processed by real a 3D LUT, and our result. We also display a miniature error map for each of our results. The error map is scaled between 0 and 5 \(\Delta\)E Units. We provide more results in the supplementary material.
### Conditional NILUT
As we introduced in Section 3, our NILUT can be conditioned to different styles (CNILUT), and by doing this, we can learn implicitly multiple 3D LUT transformations using a single NILUT. This novel feature would allow reducing the memory requirements of storing and calling multiple 3D LUTs in a camera pipeline [24]. In our experiments, we set to three and five the number of 3D LUTs to learn using this approach. This however denotes a clear _trade-off_ since, in exchange of this ability, CNILUTs require longer and more complex training, and there is a slight performance degradation in comparison to learning three/five separated -and specialized- NILUTs. This said we are able to obtain consistent values larger than \(42\) dBs in PSNR and errors smaller than \(1.5\)\(\Delta\)E in the RGB mapping, therefore being
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
Method & N & L & \# Par. (K) & PSNR \(\uparrow\) & CIELAB \(\Delta\)E \(\downarrow\) \\ \hline
SIREN [41] & 256 & 2 & 133.3 & 41.95 & 1.45 \\
SIREN [41] & 256 & 3 & 199.1 & 43.00 & 1.33 \\
SIREN [41] & 128 & 2 & 33.90 & 44.43 & 1.04 \\
SIREN [41] & 128 & 3 & 50.40 & 42.74 & 1.43 \\
SIREN [41] & 64 & 2 & 8.700 & 45.37 & 0.96 \\
SIREN [41] & 64 & 3 & 12.90 & 43.03 & 1.35 \\
MLP-Res & 256 & 2 & 133.3 & 45.84 & 1.09 \\
MLP-Res & 256 & 3 & 199.1 & **46.19** & **0.91** \\
MLP-Res & 128 & 2 & 33.90 & 45.34 & 0.97 \\
MLP-Res & 128 & 3 & 50.40 & 45.47 & 0.96 \\
MLP-Res & 64 & 2 & 8.700 & 43.84 & 1.11 \\
MLP-Res & 64 & 3 & 12.90 & 43.55 & 1.14 \\
MLP & 256 & 2 & 133.3 & 44.34 & 1.00 \\
MLP & 256 & 3 & 199.1 & 44.49 & 1.03 \\
MLP & 128 & 2 & 33.90 & 43.18 & 1.19 \\
MLP & 128 & 3 & 50.40 & 42.26 & 1.34 \\
MLP & 64 & 2 & 8.700 & 41.41 & 1.41 \\
MLP & 64 & 3 & 12.90 & 40.86 & 1.49 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Ablation study of different MLPs. Results are computed as the differences between the NILUT generated map \(\Phi(\mathcal{M})\) and the real 3D LUT map \(\phi(\mathcal{M})\). Metrics are the average over the five 3D LUTs in our dataset.
Figure 8: Performance evolution for the CNILUT fitting. Our single CNILUT can accurately learn five different styles from five real 3D LUTs. The reported PSNR is calculated over the five respective reference RGB maps \((\mathcal{M})\). The final average PSNR over the five learned LUTs is 45.34.
Figure 7: Results for our conditional NILUTs (CNILUTs). A **single CNILUT** is able to represent three different styles in a single model. All the images in this figure have less than 3 \(\Delta\)E error with respect to the real LUT. Best viewed in color.
almost unnoticeable for human observers. Fig. 8 shows the evolution of the training of the CNILUT using an MLP-Res 256x2. Note that, as before, we train using five RGB maps, and use as the main metric -in this case- the average PSNR over the five representations. As we show, some LUTs are easier to learn than others, yet we can learn five LUTs with an average RGB mapping PSNR of 45.34 dB, and \(\Delta\)E 0.85. In Table 3, we provide an ablation study where we show the performance of different MLP-Res architectures when learning 3 and 5 different 3D LUTs. The error is computed as the average for the different 3D LUTs. Given the large PSNR and errors smaller than \(1.2\)\(\Delta\)E in the RGB mapping, we can confirm that the difference between the CNILUT mapping and the corresponding three/five 3D LUTs is almost unnoticeable for human observers [40]. The main difference between CNILUT and NILUT is the training: CNILUT requires longer training to obtain good performance. We provide this study in the supplementary material.
**Multi-Styles Qualitative Results.** In Fig. 7 we present some qualitative results of our CNILUT. A single CNILUT can successfully emulate the behavior of three different 3D LUTs by imposing the corresponding condition vectors.
**Memory Efficiency.** From previous experiments, we can conclude that a CNILUT can compress the representation of five 3D LUTs without additional computational cost, _i.e._ the number of operations due to the integration of the condition vector does not affect the runtime. Considering a standard 33-dimensional 3D LUT [48, 50] with \(\approx 107K\) parameters, storing it as FP32 would require \(\approx 0.43\)MB. Our NILUT has 8.7K parameters (\(\approx 0.032\)MB) and can accurately reproduce the behavior and properties of such complex 3D LUTs, which implies a compression of \(13\times\). This facilitates its storage in mobile devices with limited on-chip memory.
**Blending Styles.** The derivation of our CNILUT also allows us to blend among different styles by simply modifying the values of the condition vector. When this vector is a one-hot encoding we obtain one of the basis implicit 3D LUTs, but we can generalize this vector as a set of blending weights. The multi-style blending occurs as an implicit interpolation within the neural network itself. This is the same principle as in SIREN [41] (interpolation of pixels for 2D images) and NeRF [35]. We analyze the output of blending the test images with weights \([0.33,0.33,0.33]\), and compare it with the linear interpolation of the real processed 2D RGB images; the PSNR is 40.05dB, indicating that the implicit CNILUT blending is equivalent to explicit image interpolation.
We provide examples of our blending capabilities in Fig. 9. We believe the blending property itself represents an interesting future work. We provide more details in the supplementary material.
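A short sketch of how such a blended style can be applied, assuming `cnilut` is a trained instance of the hypothetical `CNILUT` module sketched earlier (three styles); the blending weights simply replace the one-hot condition vector.

```python
import torch

# `cnilut` is assumed to be a trained 3-style model (see the earlier sketch).
weights = torch.tensor([0.33, 0.33, 0.34])       # convex combination of the three basis styles
image = torch.rand(1080, 1920, 3)                # any RGB image with values in [0, 1]
flat = image.reshape(-1, 3)
cond = weights.expand(flat.shape[0], -1)
with torch.no_grad():
    blended = cnilut(flat, cond).reshape(image.shape)
```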
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Learn **x3**} & \multicolumn{2}{c}{Learn **x5**} \\ & PSNR \(\uparrow\) & \(\Delta\)E \(\downarrow\) & PSNR \(\uparrow\) & \(\Delta\)E \(\downarrow\) \\ \hline
128x2 & 43.72 & 1.07 & 42.67 & 1.19 \\
128x3 & 44.03 & 0.99 & 43.71 & 1.01 \\
256x2 & 45.37 & 0.90 & 45.34 & 0.85 \\
256x3 & **47.26** & **0.74** & **46.05** & **0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Conditional NILUT ablation study. All the reported architectures are MLP-Res. We report results when learning 3 and 5 different 3D LUTs styles in a single CNILUT. We train our CNILUT for 10000 steps.
Figure 9: Example of the blending capabilities of the proposed CNILUT. In particular, the proposed MLP-Res 128x2 is trained on a set of three 3D LUTs, which achieves an average RGB mapping quality of 43.72dB. We show the three learned styles (one-hot encoded condition vectors), and two random "blendings" given the specified condition vectors.
### Plug-in deep learning based ISPs
We explore the integration of NILUTs into modern learned ISPs [11, 44]. Our NILUT is a possible plug-and-play module to further enhance the colors and apply different styles. Moreover, it is differentiable, which facilitates end-to-end ISP optimization. In Fig. 10, we show how to complement a learned ISP [22], designed to run -and tested- on smartphones, using our NILUTs to further enhance the image and manipulate colors.
This is a core step in any traditional ISP [24]. Further work on training or enhancing ISPs is out of the scope of this work, as it is a research topic by itself [23, 28, 32, 44].
### Applications
We provide a demo application in Fig. 11. Following Section 4.2, we deploy a small CNILUT (32x2) on mobile devices using _AI Benchmark_[21] and provide the results in Tab. 4. The model can process 4K images on smartphones without memory problems at \(\approx 10\) FPS on regular GPUs (2020). The CNILUT (3 styles) is just 4KB in comparison to the 1.2 MB (\(3\times 0.4\)MB) of the three complete 3D LUTs.
This technique allows controllable color manipulation by just changing the input condition vectors. By definition, CNILUT is a pixel-wise transformation, therefore the structure of the image is perfectly preserved.
### Limitations and Other Methods
Despite the promising results of learning 3D LUTs as INRs, there are some limitations that we must consider.
Firstly, as we previously discussed in Section 3.1, we understand that a neural network (_e.g_. NILUT) is limited and cannot surpass the efficiency of a lookup operation.
Secondly, by design and mathematical convenience, other methods for learning 3D LUTs such as Zeng _et al_. [50] also achieve "perfect" mapping (_i.e_. \(>50\)dB). However, despite the great fitting, the application of classical 3D LUTs on mobile devices is not trivial as discussed in Sec. 3.1.
Thirdly, many previous approaches [48, 49, 50] did not focus on learning explicit 3D LUTs, but instead focused on constructing a mapping between RGB and enhanced RGB (_e.g_. "Expert C" in MIT5K [10]), which was not necessarily limited to 3D LUT operations. We aim to complement these approaches [46, 48, 49, 50, 53] with the proposed NILUT that offers multi-style functionality and is suitable for modern mobile devices.
## 5 Concluding remarks
We have introduced NILUTs, a new approach for modeling 3D LUTs as INRs. Our NILUTs are implicitly defined, continuous 3D color transformations parameterized by a neural network. They present several advantages: i) they are memory-efficient and can run on mobile devices, and are therefore easy to integrate into modern deep learning ISPs; ii) they can perform multiple style modifications with a single architecture; and iii) they can blend between different styles. The novel multi-style blending formulation allows controllable image enhancement and customization. Quantitative and qualitative results demonstrate the effectiveness of our newly defined NILUTs for image enhancement. We have also curated a dataset of 3D LUTs and images for the evaluation of color manipulation methods.
\begin{table}
\begin{tabular}{c c c} \hline \hline Input resolution & Mali-G77 MC9 (ms) \(\downarrow\) & Adreno 620 (ms) \(\downarrow\) \\ \hline \(1920\times 1080\) & \(47.9\pm 1.62\) & \(75.0\pm 3.28\) \\ \(2778\times 1284\) & \(73.9\pm 1.62\) & \(135.0\pm 3.63\) \\ \(3840\times 2160\) & \(121.0\pm 1.41\) & \(286.0\pm 2.92\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: CNILUT deployment on two mid-level smartphone GPUs. We report the average image processing runtime over 10 runs \(\pm\) the std. deviation (see Figure 11).
Figure 11: (L) Sample image enhancement application on mobile devices based on the proposed CNILUT. (R) Deployment using [21]. See also Section 4.1 and Figure 9.
Figure 10: NILUT as plug-in module to further enhance learned ISPs. (Left) **RAW** image after demosaicing. (Middle) **RGB** image produced using a learned ISP designed for NPU [21, 22]. (Right) Enhanced image using our **NILUT**.
## Appendix A Implementation Details
We develop the models using the PyTorch framework and two NVIDIA RTX 3090 GPUs. The MLP networks are designed based on SIREN (and variants) [41]. The models are trained using a fixed learning rate of \(1e^{-3}\) and the Adam optimizer [26] until convergence (5000 steps, \(\sim 4\) minutes). Note that, as we show in Fig. 6, convergence depends on the architecture. For the experiments in Tables 1, 2 and 3 we do not use RGB maps of dimension \(4096\times 4096\times 3\) (the complete 16M values); instead, we use a reduced map of dimension \(2048\times 1024\times 3\) which contains one of every two possible values in the R, G, and B channels (\(128^{3}\) values), which requires less memory and allows faster experimentation.
For training the conditional NILUT we use three/five different 3D LUTs; at each step we feed the three/five condition vectors and the RGB map into the network and accumulate the three/five different loss terms (one for each learned LUT). We concatenate the map and condition vector to obtain an input of dimension \([h\times w,6]/[h\times w,8]\). Since we learn three/five different 3D LUTs using a single CNILUT, we need to train for at least 4000 steps to obtain reasonable results. Once the CNILUT is trained, we can further fine-tune it to perform blending by sampling random condition vector weights (softmax weights) and the corresponding blended outputs; these represent plausible convex combinations of the three basis 3D LUTs.
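For illustration, the training procedure described above can be sketched as follows; `CNILUT` refers to the hypothetical module sketched earlier, and `target_maps` (the outputs of the real 3D LUTs on the sampled RGB map) as well as the choice of an L1 fidelity loss are assumptions made only for this sketch.

```python
import torch
import torch.nn.functional as F

# Reduced RGB map: one of every two values per channel, i.e. 128^3 = 2048*1024 entries.
levels = torch.arange(0, 256, 2, dtype=torch.float32) / 255.0
rgb_map = torch.cartesian_prod(levels, levels, levels)          # shape (128^3, 3)

n_styles = 3                                                    # or 5
cnilut = CNILUT(n=256, layers=2, n_styles=n_styles)             # hypothetical module from the earlier sketch
optimizer = torch.optim.Adam(cnilut.parameters(), lr=1e-3)

# `target_maps[s]` is assumed to hold the output of the s-th real 3D LUT on `rgb_map`.
for step in range(5000):
    loss = 0.0
    for s in range(n_styles):
        cond = F.one_hot(torch.tensor(s), n_styles).float().expand(rgb_map.shape[0], -1)
        pred = cnilut(rgb_map, cond)                            # concatenation yields a [h*w, 3 + n_styles] input
        loss = loss + F.l1_loss(pred, target_maps[s])           # the L1 fidelity term is an assumption here
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```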
|
2302.12706 | Time dependent non-Abelian waves and their stochastic regimes for gauge
fields coupled to external sources | In this paper we explore explicit exact solutions of the $SU(2)$ Yang-Mills
(YM) and Yang-Mills-Higgs (YMH) equations with homogeneous and inhomogeneous
external sources. Whereas in the case of YM we have confirmed our analytical
findings with the numerical simulations, the numerical corroborations in the
YMH case yielded the stochastic character of motion for the ensuing fields. | T. Shreecharan, Thokala Soloman Raju | 2023-02-24T16:07:18Z | http://arxiv.org/abs/2302.12706v1 | Time dependent non-Abelian waves and their stochastic regimes for gauge fields coupled to external sources
###### Abstract
In this paper we explore explicit exact solutions of the \(SU(2)\) Yang-Mills (YM) and Yang-Mills-Higgs (YMH) equations with homogeneous and inhomogeneous external sources. Whereas in the case of YM we have confirmed our analytical findings with numerical simulations, the numerical corroborations in the YMH case revealed the stochastic character of motion for the ensuing fields.
## I Introduction
It is well-known that the non-Abelian YM gauge theory is essentially nonlinear in nature. More specifically, the infrared phenomena of QCD, such as confinement and chiral symmetry breaking, can be explained using non-perturbative methods, owing to the nonlinear nature of gluon interactions. The nonperturbative methods that have been developed over the years have greatly benefited from the study of exact solutions of the classical YM equations [1]. Of the several exact solutions that have been found for these dynamical equations of YM and YMH, the most prominent ones are instantons [2; 3; 4], merons [5; 6], vortices [7], monopoles [8; 9; 10], and non-Abelian plane waves [11; 12; 13; 14; 15; 16; 17].
The pursuit of finding exact solutions for these dynamical equations becomes much more arduous in the presence of external sources. In the literature, only limited work has been devoted to this task, despite the fact that the addition of external sources enriches the dynamics of the ensuing fields. However, even with the relative dearth of exact solutions of YM with external sources, some interesting results have been reported [18; 19; 20]. Furthermore, the investigation of general properties of classical solutions of gauge field equations is important and may provide new insight into the vacuum structure of a given theory. Among these properties, the study of the irregular, stochastic character of the gauge field dynamics is of great interest. In their seminal papers [14; 15; 16; 17], the authors studied the stochastic behavior of the YM theory. Also, the chaotic behavior of YM mechanics with 3 degrees of freedom was demonstrated by a Painlevé test [21] and by studying Lyapunov exponents [22]. Additionally, it is very interesting to consider extra possible interactions with a Higgs field. In Refs. [15; 23; 24; 25], investigations of the stochastic behavior of Abelian and non-Abelian classical field systems with Higgs fields but without a source term were made.
Motivated by the above works, we have discovered exact solutions for the YM and YMH dynamical equations in the presence of external sources. Two special cases have been considered. In the first case, we have explored exact solutions of the YM field equations when the external sources are homogeneous. The exact solutions are explicated in detail for different parameter values. In the case of the YM equations of motion, the numerical simulations conducted attest to our analytical results. We have also found exact soliton solutions when the source is inhomogeneous. The phase space portraits have been obtained when the source is an oscillatory one. In the second case, we have obtained nondegenerate soliton solutions of the YMH dynamical equations when the source is homogeneous. Parameter domains are delineated in which the solitons exist. Also, we have obtained phase space portraits when the sources are nondegenerately inhomogeneous, indicative of stochastic motion.
## II The Yang-Mills equations of motion
Here we consider the 4-D \(SU(2)\) YM Lagrangian density coupled to an external source, in Minkowski space with the metric \(g_{\mu\nu}=\mathrm{diag}(+,-,-,-)\). The Lagrangian in this case is given by
\[\mathcal{L}=-\frac{1}{4}F^{a}_{\mu\nu}F^{\mu\nu a}+A^{a}_{\mu}J^{\mu a} \tag{1}\]
where
\[F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+g\epsilon^ {abc}A^{b}_{\mu}A^{c}_{\nu} \tag{2}\]
with \(\epsilon^{abc}\) being the \(SU(2)\) structure constants. The equations of motion are
\[\partial_{\mu}F^{\mu\nu a}+g\epsilon^{abc}A^{b}_{\mu}F^{\mu\nu c}=J^{\nu a} \tag{3}\]
As discussed in Ref. [14], it is convenient to solve these equations (3) in the gauge \(A^{a}_{0}=0\), \(\partial^{i}A^{a}_{i}=0\). In the next section we explicate the procedure used to obtain time-dependent solutions.
We seek solutions that are purely time dependent. To achieve this, the ansatz [26] is chosen to be:
\[A^{a}_{\mu}=\delta^{a}_{\mu}f_{a}(t),\quad A^{\mu a}=g^{\mu a}f_{a}(t),\quad J ^{\mu a}=\delta^{\mu a}j_{a}(t), \tag{4}\]
where \(a=1,2,3,\,\mu=1,2,3,4\) (\(x_{1}=t,x_{2}=x,x_{3}=y,x_{4}=z\)). The equations of motion (3) can then be
written in explicit component form as
\[\begin{split} g^{2}f_{1}(f_{2}^{2}+f_{3}^{2})=j_{1},\\ -\ddot{f_{2}}+g^{2}f_{2}(f_{1}^{2}-f_{3}^{2})=j_{2},\\ -\ddot{f_{3}}+g^{2}f_{3}(f_{1}^{2}-f_{2}^{2})=j_{3},\\ g(\dot{f_{1}}f_{3}+2f_{1}\dot{f_{3}})=0,\\ g(\dot{f_{1}}f_{2}+2f_{1}\dot{f_{2}})=0,\end{split} \tag{5}\]
where \(\dot{f_{a}}\) denotes the derivative with respect to time. One can notice that this system has only a finite number of degrees of freedom, and hence a finite-dimensional phase space. Choosing \(f_{1}=0\) implies \(j_{1}=0\), and further setting \(f_{2}=f_{3}=f(t)\), \(j_{2}=j_{3}=j(t)\) in the above set of equations, we obtain
\[f^{\prime\prime}+g^{2}f^{3}+j(t)=0. \tag{6}\]
It is interesting to note that Eq. (6) without source was derived in Ref. [27]. In this case the energy integral of motion can be written as
\[f^{\prime}\ {}^{2}+\frac{g^{2}}{2}f^{4}+2jf=\mathcal{E}. \tag{7}\]
The above equation describes an anharmonic oscillator with potential \(U=\frac{g^{2}}{2}f^{4}+2jf\). Although Eq. (7) can be integrated to yield elliptic functions as solutions in the absence of the source, we have previously shown that there exist Möbius transform solutions to Eq. (6) when the source is homogeneous in nature, i.e., \(j(t)=j\) [28].
Here we numerically analyze the behavior of the effective mechanical system with the potential \(U(f,j)\) leading to Eq. (6). We have solved Eq. (6) numerically using the RK-4 technique for two different sets of initial conditions. From Figs. (1) and (2), one can see that there exist periodic solutions and soliton solutions for the different initial conditions specified in the figure captions. In Fig. (1) we depict the numerically obtained periodic solution when the strength of the source is \(j=0.5\), and in Fig. (2) we depict the soliton solution when the strength of the source is \(j=1.5\).
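For reproducibility, the RK-4 integration of Eq. (6) can be sketched in a few lines of Python; the step size and integration window below are our own choices, while the parameter values follow the figure captions.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta for y' = f(t, y) on the time grid t."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

g, j = 1.0, 0.5                                    # coupling and homogeneous source strength (Fig. 1)

def ym_rhs(t, state):
    f, fdot = state
    return np.array([fdot, -g**2 * f**3 - j])      # Eq. (6): f'' + g^2 f^3 + j = 0

t = np.linspace(0.0, 50.0, 5001)
sol = rk4(ym_rhs, np.array([0.0, 0.5]), t)         # initial conditions f0 = 0, f0' = 0.5
```

Replacing the constant source with, e.g., the Jacobian elliptic cosine `scipy.special.ellipj(0.5 * t, 0.5)[1]` gives the cn-driven source used for the phase portrait in Fig. 3.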
In addition to the above solutions we have also found time dependent non-Abelian waves in terms of Jacobian elliptic functions when the external source is inhomogeneous. These are nonperturbative exact solutions.
**Case I:** When the source is \(j(t)=B\) cn(\(\alpha t,k\)) we find that the solution to Eq. (6) takes the form \(f=A\) cn(\(\alpha t,k\)), where the amplitude parameters are \(A=\sqrt{2\alpha^{2}k/g^{2}}\), \(B=\alpha^{2}(1-2k)\sqrt{2\alpha^{2}k/g^{2}}\).
Apart from the above analytical insights, we have also solved Eq. (6) numerically when the source depends explicitly on time. Specifically, we have chosen \(j(t)=\) cn(\(0.5t,0.5\)). In Fig. (3) we depict the time evolution of the limit cycle for the initial condition specified in the figure caption.
**Case II:** Here we find a localized explicit solution to Eq. (6) when the source is a bell-type one: \(j(t)=B\) sech(\(t\)). We find that the solution is \(f=A\) sech(\(t\)), where the amplitude parameters are \(A=\sqrt{2}/g\) and \(B=-\sqrt{2}/g\). We observe that this solution is a limiting case of Case I in the limit \(k\to 1\), which corresponds to a solution with infinite period of the (\(cn\)) Jacobian elliptic function. Additionally, we have also found a dark soliton as an exact solution to Eq. (6) when the source is \(j(t)=B\) tanh(\(t\)).
Figure 3: Phase portrait for the YM field with an external source \(j(t)=\) cn(\(0.5t,0.5\)). Initial conditions are \(f_{0}=0.5\)\(\dot{f_{0}}=0.5\)
Figure 1: Numerical solution for initial conditions \(f_{0}=0\), \(\dot{f_{0}}=0.5\), \(g=1\), and \(j=0.5\).
## III (3+1)d Yang-Mills-Higgs action
In the previous section we have looked at the time dependent solutions of the pure YM theory. In this section we generalize the same procedure to the YM field coupled to Higgs field with an external source, specifically we consider the Georgi-Glashow model with an external source.
The Yang-Mills-Higgs (YMH) Lagrangian density coupled to an external source is given by
\[{\cal L}=-\frac{1}{4}F^{a}_{\mu\nu}F^{\mu\nu a}+A^{a}_{\mu}J^{\mu a}+\frac{1}{2}(D_{\mu}\Phi^{a})(D^{\mu}\Phi^{a})+\frac{m^{2}}{2}\Phi^{a}\Phi^{a}-\frac{\lambda}{4}(\Phi^{a}\Phi^{a})^{2}+J^{a}\Phi^{a} \tag{8}\]
where the Higgs field is taken to be in the adjoint representation, so that \(D_{\mu}\Phi^{a}=\partial_{\mu}\Phi^{a}+g\epsilon^{abc}A^{b}_{\mu}\Phi^{c}\). The equations of motion of the above YMH action are
\[D_{\nu}F^{\nu\mu a}+g\epsilon^{bac}\Phi^{c}D^{\mu}\Phi^{b}=J^{ \mu a}\, \tag{9}\] \[D_{\mu}D^{\mu}\Phi^{a}-m^{2}\Phi^{a}+\lambda(\Phi^{b}\Phi^{b}) \Phi^{a}=J^{a}. \tag{10}\]
It is worth mentioning that some algebraic solutions of YMH have been obtained in Ref. [29]. Here, we follow the procedure enunciated by Ebert _et al._ in Ref. [26], where the following ansatz has been profitably utilized:
\[A^{a}_{\mu}=\delta^{a}_{\mu}f_{a}(t),\quad A^{\mu a}=g^{\mu a}f_{ a}(t),\quad J^{\mu a}=\delta^{\mu a}j_{a}(t),\] \[\Phi^{a}(t)=(\Phi_{1}(t),\Phi_{2}(t),\Phi_{3}(t)), \tag{11}\]
where \(a=1,2,3,\,\mu=1,2,3,4\) (\(x_{1}=t,x_{2}=x,x_{3}=y,x_{4}=z\)). In order to seek time dependent solutions, we choose \(f_{1}=0\), \(\Phi_{2}=\Phi_{3}=0\) implying \(J^{11}=0=J^{23}=J^{32}\). Further setting \(f_{2}=f_{3}=f(t)\), \(\Phi_{1}=\Phi\) we obtain the following coupled equations in gauge and Higgs field:
\[\ddot{f}+g^{2}\Phi^{2}f+g^{2}f^{3}=j_{1}(t)\, \tag{12}\] \[\ddot{\Phi}+(2g^{2}f^{2}-m^{2})\ \Phi+\lambda\Phi^{3}=j_{2}(t)\, \tag{13}\]
where \(J^{22}(t)=J^{33}(t)=j_{1}(t)\), \(J^{1}=0\), and \(J^{2}(t)=J^{3}(t)=j_{2}(t)\). The potential energy of this system is given by
\[V(f,\Phi)=\frac{g^{2}}{2}f^{4}+\frac{\lambda}{4}\Phi^{4}-\frac{m^{2}}{2}\Phi^{ 2}+g^{2}f^{2}\Phi^{2}-j_{1}f-j_{2}\Phi. \tag{14}\]
In Fig. (4) we depict this potential energy for the different parameters specified in the figure caption. From this figure we observe that there are three critical points, given by \((0,\pm\sqrt{\frac{m+\lambda}{\lambda}})\) and \((0,0)\). It is worth mentioning that there is a saddle point with coordinates \((0,0)\).
For the nonlinear coupled equations (12) and (13), we would like to find nondegenerate non-Abelian waves in terms of Jacobian elliptic functions. We would like to emphasize that these solutions are possible only when the source terms are also Jacobian elliptic functions explicitly depending on time. More pertinently the solutions are:
\[f=A\ {\rm cn}(\alpha t,k)\,\qquad j_{1}=B\ {\rm cn}(\alpha t,k)\, \tag{15}\] \[\Phi=C\ {\rm sn}(\alpha t,k)\,\qquad j_{2}=D\ {\rm sn}(\alpha t,k)\, \tag{16}\]
where the amplitude parameters are given by
\[A = \sqrt{C^{2}+\frac{2k\alpha^{2}}{g^{2}}}\, \tag{17}\]
\[B = \Big{[}g^{2}C^{2}-\alpha^{2}(1-2k)\Big{]}\sqrt{C^{2}+\frac{2k\alpha^ {2}}{g^{2}}}\, \tag{18}\] \[C = \sqrt{\frac{2g^{2}A^{2}-2k\alpha^{2}}{\lambda}}\,\] (19) \[D = \Big{[}2g^{2}A^{2}-\alpha^{2}(1+k)-m^{2}\Big{]}\sqrt{\frac{2g^{2} A^{2}-2k\alpha^{2}}{\lambda}}. \tag{20}\]
We now numerically analyze the behavior of the gauge field and the Higgs field corresponding to the nonlinear coupled equations (12) and (13). In Figs. (5) and (6) we depict the numerical solutions for \(f\) and \(\Phi\), respectively, for one set of initial conditions mentioned in the figure captions. From the figures it is clear that the dynamics of \(f\) and \(\Phi\) in the presence of a source is much richer compared to the dynamics without sources, Ref. [26]. As the strength of the source is increased, the motion undergoes a transition from periodic to quasi-periodic and then to stochastic. The dynamics for a different set of initial conditions is revealed in Figs. (7) and (8). Once again the motion undergoes the transition from periodic to quasi-periodic and then to stochastic. These system behaviors are illustrated in Figs. (9) and (10) by increasing the strength of the external sources to the various values mentioned in the figure captions.
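The coupled system (12)-(13) can be integrated with the same `rk4` helper sketched above for the pure YM case; the parameter values and initial conditions below follow the figure captions, while the integration window is our own choice.

```python
import numpy as np

g, m, lam = 1.0, 1.0, 2.0
j1 = j2 = 0.5                                              # external source strengths (Figs. 13-14)

def ymh_rhs(t, state):
    f, fdot, phi, phidot = state
    fddot = j1 - g**2 * phi**2 * f - g**2 * f**3                       # Eq. (12)
    phiddot = j2 - (2 * g**2 * f**2 - m**2) * phi - lam * phi**3       # Eq. (13)
    return np.array([fdot, fddot, phidot, phiddot])

t = np.linspace(0.0, 200.0, 20001)
traj = rk4(ymh_rhs, np.array([0.5, 0.0, 0.5, 0.0]), t)     # (f0, f0', Phi0, Phi0')
# Phase portraits: plot traj[:, 0] vs traj[:, 1] for the gauge field,
# and traj[:, 2] vs traj[:, 3] for the Higgs field.
```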
## IV Conclusion
In conclusion, we have reported exact solutions for the YM and YMH dynamical equations in the presence of external sources. Two special cases have been considered. In the first case, we have explored exact solutions of the YM field equations when the external sources are homogeneous. The exact solutions are explicated in detail for different parameter values. In the case of the YM equations of motion, numerical simulations corroborated our analytical results. We have also found exact soliton solutions when the source is inhomogeneous. The phase space portraits have been obtained when the source is a time-dependent Jacobian elliptic cosine. In the second case, we
Figure 11: Phase space plot for initial conditions \(f_{0}=0.5\), \(\dot{f}_{0}=0\), \(j=0.2\) and the set of parameter values \(g=1\), \(m=1\) and \(\lambda=2\).
Figure 14: Phase space plot for initial conditions \(\Phi_{0}=0.5\), \(\dot{\Phi}_{0}=0\), \(j=0.5\) and the set of parameter values \(g=1\), \(m=1\) and \(\lambda=2\).
Figure 12: Phase space plot for initial conditions \(\Phi_{0}=0.5\), \(\dot{\Phi}_{0}=0\), \(j=0.2\) and the set of parameter values \(g=1\), \(m=1\) and \(\lambda=2\).
Figure 13: Phase space plot for initial conditions \(f_{0}=0.5\), \(\dot{f}_{0}=0\), \(j=0.5\) and the set of parameter values \(g=1\), \(m=1\) and \(\lambda=2\).
have obtained exact nondegenerate soliton solutions of the YMH dynamical equations when the source is inhomogeneous. We have delineated the parameter domains in which these solitons exist. Also, we have obtained phase space portraits. In this case we have solved the coupled differential equations using the RK-4 technique for different initial conditions. The extensive numerical simulations conducted revealed the stochastic motion of the ensuing fields. Despite the fact that the study of the Yang-Mills equations at the classical level has already led to numerous remarkable results, we have still not understood in adequate detail the numerous manifestations of unusual properties of the Yang-Mills fields associated with their nonlinear interactions with external sources. Nonetheless, we envision that the presented time-dependent non-Abelian waves will shed new light on our understanding of gluodynamics, involving nonlinear interactions with coupling constant \(g\) in the presence of external sources. The external sources used here play a pivotal role in exhibiting the rich dynamical features of both the YM and YMH fields. Thus it is hoped that the current work will shed some light in that direction.
|
2310.06562 | Compositional Representation Learning for Brain Tumour Segmentation | For brain tumour segmentation, deep learning models can achieve human
expert-level performance given a large amount of data and pixel-level
annotations. However, the expensive exercise of obtaining pixel-level
annotations for large amounts of data is not always feasible, and performance
is often heavily reduced in a low-annotated data regime. To tackle this
challenge, we adapt a mixed supervision framework, vMFNet, to learn robust
compositional representations using unsupervised learning and weak supervision
alongside non-exhaustive pixel-level pathology labels. In particular, we use
the BraTS dataset to simulate a collection of 2-point expert pathology
annotations indicating the top and bottom slice of the tumour (or tumour
sub-regions: peritumoural edema, GD-enhancing tumour, and the necrotic /
non-enhancing tumour) in each MRI volume, from which weak image-level labels
that indicate the presence or absence of the tumour (or the tumour sub-regions)
in the image are constructed. Then, vMFNet models the encoded image features
with von-Mises-Fisher (vMF) distributions, via learnable and compositional vMF
kernels which capture information about structures in the images. We show that
good tumour segmentation performance can be achieved with a large amount of
weakly labelled data but only a small amount of fully-annotated data.
Interestingly, emergent learning of anatomical structures occurs in the
compositional representation even given only supervision relating to pathology
(tumour). | Xiao Liu, Antanas Kascenas, Hannah Watson, Sotirios A. Tsaftaris, Alison Q. O'Neil | 2023-10-10T12:19:39Z | http://arxiv.org/abs/2310.06562v1 | # Compositional Representation Learning for Brain Tumour Segmentation
###### Abstract
For brain tumour segmentation, deep learning models can achieve human expert-level performance given a large amount of data and pixel-level annotations. However, the expensive exercise of obtaining pixel-level annotations for large amounts of data is not always feasible, and performance is often heavily reduced in a low-annotated data regime. To tackle this challenge, we adapt a mixed supervision framework, vMFNet, to learn robust compositional representations using unsupervised learning and weak supervision alongside non-exhaustive pixel-level pathology labels. In particular, we use the BraTS dataset to simulate a collection of 2-point expert pathology annotations indicating the top and bottom slice of the tumour (or tumour sub-regions: peritumoural edema, GD-enhancing tumour, and the necrotic / non-enhancing tumour) in each MRI volume, from which weak image-level labels that indicate the presence or absence of the tumour (or the tumour sub-regions) in the image are constructed. Then, vMFNet models the encoded image features with von-Mises-Fisher (vMF) distributions, via learnable and compositional vMF kernels which capture information about structures in the images. We show that good tumour segmentation performance can be achieved with a large amount of weakly labelled data but only a small amount of fully-annotated data. Interestingly, emergent learning of anatomical structures occurs in the compositional representation even given only supervision relating to pathology (tumour).
Keywords: Compositionality · Representation learning · Semi-supervised · Weakly-supervised · Brain tumour segmentation.
## 1 Introduction
When a large amount of labelled training data is available, deep learning techniques have demonstrated remarkable accuracy in medical image segmentation [2]. However, performance drops significantly when insufficient pixel-level annotations are available [16, 17, 24]. By contrast, radiologists learn clinically relevant visual features from "weak" image-level supervision of seeing many medical scans [1]. When searching for anatomy or lesions of interest in new images, they
look for characteristic configurations of these clinically relevant features (or components). A similar compositional learning process has been shown to improve deep learning model performance in many computer vision tasks [9, 11, 25] but has received limited attention in medical applications.
In this paper, we consider a limited annotation data regime where few pixel-level annotations are available for the task of brain tumour segmentation in brain MRI scans. Alongside this, we construct slice-level labels for each MRI volume indicating the presence or absence of the tumour. These labels can be constructed from 2-point expert pathology annotations indicating the top and bottom slices of the tumour, which are fast to collect. We consider that pathology annotations are not only better suited to the task (tumour segmentation) but also to the domain (brain MRI) than the originally proposed weak supervision with anatomy annotations [15]; annotating the top and bottom slices for anatomical brain structures such as white matter, grey matter and cerebrospinal fluid (CSF) would be relatively uninformative about the configurations of structures within the image due to their whole brain distributions.
For the learning paradigm, we investigate the utility of learning compositional representations in increasing the annotation efficiency of segmentation model training. Compositional frameworks encourage identification of the visible semantic components (e.g. anatomical structures) in an image, requiring less explicit supervision (labels). We follow [11, 15, 18] in modelling compositional representations of medical imaging structures with learnable von-Mises-Fisher (vMF) kernels. The vMF kernels are learned as the cluster centres of the feature vectors of the training images, and the vMF activations determine which kernel is activated at each position. On visualising kernel activations, it can be seen that they approximately correspond to human-recognisable structures in the image, lending interpretability to the model predictions. Our contributions are summarised as:
* We refine an existing mixed supervision compositional representation learning framework, vMFNet, for the task of brain tumour segmentation, changing the weak supervision task from anatomy presence/absence to more domain-suited pathology presence/absence and simplifying the architecture and training parameters according to the principle of parsimony (in particular reducing the number of compositional vMF kernels and removing an original training subtask of image reconstruction).
* We perform extensive experiments on the BraTS 2021 challenge dataset [4, 5, 19] with different percentages of labelled data, showing superior performance of the proposed method compared to several strong baselines, both quantitatively (better segmentation performance) and qualitatively (better compositional representations).
* We compare weak pathology supervision with _tumour_ labels to richer tumour _sub-region_ labels, showing that the latter increases model accuracy for the task of tumour sub-region segmentation but also reduces the generality of the compositional representation, which loses anatomical detail and increases in pathology detail, becoming more focused on the supervision task.
## 2 Related work
Compositionality is a fundamental concept in computer vision, where it refers to the ability to recognise complex objects or scenes by detecting and combining simpler components or features [13]. Leveraging this idea, compositional representation learning is an area of active research in computer vision [27]. Early approaches to compositional representation learning in computer vision include the bag-of-visual-words model [12] and part-based models [11]. Compositional representation learning has been applied to fine-grained recognition tasks in computer vision, such as recognising bird species [9, 23]. In addition, compositionality has been incorporated for robust image classification [11, 25] and recently for compositional image synthesis [3, 14]. Among these works, Compositional Networks [11], originally designed for robust classification under object occlusion, are easier to extend to pixel-level tasks as they estimate spatial and interpretable vMF likelihoods. Previous work integrates vMF kernels [11] for object localisation [26] and recently for nuclei segmentation (with the bounding box as supervision) in a weakly supervised manner [28]. More recently, vMFNet [18] applies vMF kernels for cardiac image segmentation in the domain generalisation setting. Additionally, vMFNet integrated weak labels indicating the presence or absence of cardiac structures and this gave improved performance [15]. We use similar types of weak image-level annotations but apply the vMF kernels to pathology segmentation and supervise with weak labels indicating the presence or absence of pathological structures.
## 3 Method
We apply vMFNet [15, 18], as shown in Fig. 1, a model consisting of three modules: the feature extractor \(\mathbf{F}_{\psi}\), the task network \(\mathbf{T}_{\theta}\) (for brain tumour segmentation in our case), and the weak supervision network \(\mathbf{W}_{\omega}\), where \(\psi\), \(\theta\) and \(\omega\)
Figure 1: Illustration of the brain tumour segmentation task using vMFBrain for compositional representation learning. We extract the weak supervision pathology labels (_presence or absence of tumour_) from 2-point brain tumour annotations; interestingly, learning of anatomical structures somewhat emerges even without supervision. Notation is specified in Section 3.
denote the network parameters. Compositional components are learned as vMF kernels by decomposing the features extracted by \(\mathbf{F}_{\psi}\). Then, the vMF likelihoods that contain spatial information are used to predict the tumour segmentation mask with \(\mathbf{T}_{\theta}\). The voxel-wise output of \(\mathbf{T}_{\theta}\) is also input to the weak supervision network \(\mathbf{W}_{\omega}\) to predict the presence or absence of the tumour. This framework is detailed below. We term our implementation _vMFBrain_.
### Background: learning compositional components
To learn compositional components, the image features \(\mathbf{Z}\in\mathbb{R}^{H\times W\times D}\) are first extracted by \(\mathbf{F}_{\psi}\). \(H\) and \(W\) are the spatial dimensions and \(D\) is the number of channels. The feature vector \(\mathbf{z}_{i}\in\mathbb{R}^{D}\) is defined as the normalised vector (i.e. \(||\mathbf{z}_{i}||=1\)) across channels at position \(i\) on the 2D lattice of the feature map. Then, the image features are modelled with \(J\) vMF distributions. Each distribution has a learnable mean that is defined as vMF kernel \(\mathbf{\mu}_{j}\in\mathbb{R}^{D}\). To ensure computational tractability, a fixed variance \(\sigma\) is set for all distributions. The vMF likelihood for the \(j^{th}\) distribution at each position \(i\) is calculated as:
\[p(\mathbf{z}_{i}|\mathbf{\mu}_{j})=\frac{e^{\sigma_{j}\mathbf{\mu}_{j}^{T}\mathbf{z}_{ i}}}{C},\text{ s.t. }||\mathbf{\mu}_{j}||=1, \tag{1}\]
where \(C\) is a constant. This gives the vMF likelihood vector \(\mathbf{z}_{i,vMF}\in\mathbb{R}^{J}\), a component of \(\mathbf{Z}_{vMF}\in\mathbb{R}^{H\times W\times J}\), which determines which kernel is activated at each position. To update the kernels during training, the clustering loss \(\mathcal{L}_{clu}\) is defined in [11] as:
\[\mathcal{L}_{clu}(\mathbf{\mu},\mathbf{Z})=-(HW)^{-1}\sum_{i}\max_{j}\mathbf{\mu}_{j} ^{T}\mathbf{z}_{i}, \tag{2}\]
where the kernel \(\mathbf{\mu}_{j}\) which is maximally activated for each feature vector \(\mathbf{z}_{i}\) is found, and the distance between the feature vectors and their corresponding kernels is minimised by updating the kernels. Overall, feature vectors in different images corresponding to the same anatomical or pathological structure will be clustered and activate the same kernels. Hence, the vMF likelihoods \(\mathbf{Z}_{vMF}\) for the same anatomical or pathological features in different images will be aligned to follow the same distributions (with the same means).
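A small PyTorch sketch of Eqs. (1)-(2) is given below; the normalisation constant \(C\) is omitted (it does not change which kernel is maximally activated), and the feature dimension as well as the handling of \(\mathbf{\mu}\) as plain tensors (rather than `nn.Parameter`s kept unit-norm during training) are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

J, D, sigma = 8, 64, 30.0                       # kernels, feature channels (D is our choice), fixed concentration
mu = F.normalize(torch.randn(J, D), dim=1)      # kernel means with ||mu_j|| = 1

def vmf_likelihoods(Z, mu, sigma=30.0):
    """Z: (B, D, H, W) feature maps from F_psi. Returns (B, J, H, W) vMF activations, Eq. (1) up to C."""
    z = F.normalize(Z, dim=1)                               # unit-norm feature vectors z_i
    return torch.exp(sigma * torch.einsum("bdhw,jd->bjhw", z, mu))

def clustering_loss(Z, mu):
    """Eq. (2): pull each feature vector towards the kernel it maximally activates."""
    z = F.normalize(Z, dim=1)
    sim = torch.einsum("bdhw,jd->bjhw", z, mu)              # mu_j^T z_i at every spatial position
    return -sim.max(dim=1).values.mean()

Z = torch.randn(2, D, 32, 32)                   # e.g. UNet feature maps
Z_vmf = vmf_likelihoods(Z, mu, sigma)
loss_clu = clustering_loss(Z, mu)
```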
### vMFBrain for brain tumour segmentation
Taking the vMF likelihoods as input, a follow-on segmentation task module \(\mathbf{T}_{\theta}\) is trained to predict the tumour segmentation mask, i.e. \(\mathbf{\hat{Y}}=\mathbf{T}_{\theta}(\mathbf{Z}_{vMF})\). Firstly, we use direct strong supervision from the available pixel-level annotations \(\mathbf{Y}\). Secondly, we define the weak supervision label \(c\) as a scalar (or a vector \(\mathbf{c}\)) which indicates the presence or absence of the tumour (or the presence or absence of the tumour sub-regions) in the 2D image slice. We use the output of the segmentation module as the input for a weak supervision classifier \(\mathbf{W}_{\omega}\), i.e. \(\hat{c}=\mathbf{W}_{\omega}(\mathbf{\hat{Y}})\). We train the classifier using the \(L1\) distance, i.e. \(\mathcal{L}_{weak}(\hat{c},c)=|\hat{c}-c|_{1}\).
Overall, the model contains trainable parameters \(\psi\), \(\theta\), \(\omega\) and the vMF kernel means \(\mathbf{\mu}\). The model (including all the modules) is trained **end-to-end** with the following objective:
\[\operatorname*{argmin}_{\psi,\theta,\omega,\mathbf{\mu}}\ \mathcal{L}_{clu}(\mathbf{\mu},\mathbf{Z})+\lambda_{Dice}\mathcal{L}_{Dice}(\mathbf{Y},\hat{\mathbf{Y}})+\lambda_{weak}\mathcal{L}_{weak}(\hat{c},c), \tag{3}\]
where \(\mathcal{L}_{Dice}\) is Dice loss [7, 20]. We set \(\lambda_{Dice}=1\) when the ground-truth mask \(\mathbf{Y}\) is available, otherwise \(\lambda_{Dice}=0\). We set \(\lambda_{weak}\) as 0.5 for the whole tumour segmentation task and \(\lambda_{weak}\) as 0.1 for the tumour sub-region segmentation task (values determined empirically).
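Putting these pieces together, the objective in Eq. (3) amounts to the following per-batch computation; this is a sketch that reuses `vmf_likelihoods` and `clustering_loss` from the previous snippet, with `dice_loss` and the three module callables standing in for the networks in Fig. 1.

```python
def vmfbrain_loss(x, y_mask, c_weak, has_pixel_labels,
                  F_psi, T_theta, W_omega, mu, lambda_weak=0.5):
    Z = F_psi(x)                                   # feature extractor (2D UNet backbone)
    Z_vmf = vmf_likelihoods(Z, mu)                 # Eq. (1)
    y_hat = T_theta(Z_vmf)                         # predicted tumour segmentation mask
    c_hat = W_omega(y_hat)                         # predicted presence/absence of the tumour

    loss = clustering_loss(Z, mu) + lambda_weak * (c_hat - c_weak).abs().mean()
    if has_pixel_labels:                           # lambda_Dice = 1 only when masks are available
        loss = loss + dice_loss(y_hat, y_mask)     # dice_loss is a placeholder for the Dice term
    return loss
```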
## 4 Experiments
### Dataset
We evaluate on the task of brain tumour segmentation using data from the BraTS 2021 challenge [4, 5, 19]. This data comprises native (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (FLAIR) modality volumes for 1,251 patients from a variety of institutions and scanners. We split the data into train, validation and test sets containing 938, 62 and 251 subjects. The data has already been co-registered, skull-stripped and interpolated to the same resolution, each volume having 155 2D slices. Labels are provided for tumour sub-regions: the peritumoural edema (ED), the GD-enhancing tumour (ET), and the necrotic and non-enhancing tumour (NE). We additionally downscale all images to \(128\times 128\).
### Baselines
We compare to the baselines **UNet**[22], **SDNet**[6] and **vMFNet**[18]. **SDNet [6]** is a semi-supervised disentanglement model with anatomy and modality encoders to separately encode the anatomical structure information and the imaging characteristics. The anatomical features are used as the input to the segmentor for the task of segmentation; the model is also trained with unlabelled data on the task of reconstructing the image by recombining the anatomy and modality factors. We compare to **vMFNet** with the architecture and training loss as described in [18]; this setup does not use weak supervision and has an additional image reconstruction module which we found empirically not to help performance (which thus we omit from vMFBrain).
### Implementation
**Imaging backbone:**\(\mathbf{F}_{\psi}\) is a 2D UNet [22] (without the output classification layer) to extract features \(\mathbf{Z}\). The four modalities are concatenated as the input (with 4 channels) to \(\mathbf{F}_{\psi}\). For a fair comparison, we use this same UNet implementation as the backbone for all models.
**vMFNet and vMFBrain\({}^{4}\):** \(\mathbf{T_{\theta}}\) is a shallow convolutional network. \(\mathbf{W_{\omega}}\) is a classifier model. Following [11], we set the variance of the vMF distributions to 30. The number of kernels is set to 8, as this number performed best empirically in our early experiments. For vMF kernel initialisation, we pre-train a 2D UNet for 10 epochs to reconstruct the input image using all the training data. After training, we extract the corresponding feature vectors and perform k-means clustering, then use the discovered cluster centres to initialise the vMF kernels.
Footnote 4: The code for vMFNet is available at [https://github.com/vios-s/vMFNet](https://github.com/vios-s/vMFNet).
**Training:** All models are implemented in PyTorch [21] and are trained using an NVIDIA 3090 GPU. Models are trained using the Adam optimiser [10] with a learning rate of \(1\times 10^{-4}\) and a batch size of 32. In semi-supervised and weakly supervised settings, we consider the use of different percentages of fully labelled data to train the models. For this purpose, we randomly sample 2D image slices and the corresponding pixel-level labels from the whole training dataset.
### Results
We compare model performance quantitatively using volume-wise Dice (%) and Hausdorff Distance (95%) (HD) [8] as the evaluation metrics, and qualitatively using the interpretability and compositionality of representations. In Table 1 and Table 2, for semi-supervised and weakly supervised approaches, the training data contains all unlabelled or weakly labelled data alongside different percentages of fully labelled data. UNet is trained with different percentages of labelled data only. Bold numbers indicate the best performance. Arrows \((\uparrow,\downarrow)\) indicate the direction of metric improvement.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Metrics & \multicolumn{4}{c|}{Dice (\(\uparrow\))} & \multicolumn{4}{c|}{HD (\(\downarrow\))} \\ \hline Pixel labels & 0.1\% & 0.5\% & 1\% & 100\% & 0.1\% & 0.5\% & 1\% & 100\% \\ \hline UNet & 80.66\({}_{10}\) & 86.39\({}_{7.7}\) & 87.34\({}_{7.0}\) & 90.84\({}_{5.6}\) & 9.18\({}_{10}\) & 6.60\({}_{8.1}\) & 7.37\({}_{10}\) & **4.49\({}_{\mathbf{7.2}}\)** \\ \hline SDNet & 79.20\({}_{11}\) & 86.38\({}_{7.6}\) & 87.96\({}_{6.6}\) & **90.96\({}_{\mathbf{5.3}}\)** & 11.85\({}_{13}\) & 7.24\({}_{9.3}\) & 6.11\({}_{8.4}\) & 4.87\({}_{8.3}\) \\ \hline vMFNet & 81.30\({}_{6.8}\) & 86.14\({}_{7.8}\) & 87.98\({}_{6.6}\) & 90.62\({}_{5.8}\) & 11.62\({}_{13}\) & 9.12\({}_{12}\) & 7.15\({}_{9.6}\) & 5.20\({}_{8.2}\) \\ \hline vMFBrain w/o weak & 79.70\({}_{10}\) & 84.92\({}_{8.1}\) & 87.26\({}_{7.9}\) & 90.67\({}_{5.8}\) & 13.89\({}_{14}\) & 9.80\({}_{13}\) & 7.18\({}_{9.4}\) & 4.93\({}_{7.3}\) \\ \hline vMFBrain & **85.64\({}_{\mathbf{7.8}}\)** & **88.64\({}_{\mathbf{6.8}}\)** & **89.04\({}_{\mathbf{6.7}}\)** & 90.58\({}_{5.6}\) & **7.75\({}_{\mathbf{7.8}}\)** & **6.18\({}_{\mathbf{8.6}}\)** & **6.14\({}_{\mathbf{8.4}}\)** & 4.60\({}_{6.5}\) \\ \hline \end{tabular}
\end{table}
Table 1: Dice (%) and Hausdorff Distance (HD) results for the task of **whole tumour segmentation**. We report the mean and standard deviation across volumes.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{0.1\% pixel labelled data} & \multicolumn{2}{c|}{ED} & \multicolumn{2}{c|}{ET} & \multicolumn{2}{c|}{NE} \\ \cline{2-7} & Dice (\(\uparrow\)) & HD (\(\downarrow\)) & Dice (\(\uparrow\)) & HD (\(\downarrow\)) & Dice (\(\uparrow\)) & HD (\(\downarrow\)) \\ \hline UNet & 71.47\({}_{12}\) & 9.60\({}_{11}\) & 83.74\({}_{8.4}\) & **5.19\({}_{\mathbf{5.7}}\)** & 79.42\({}_{10}\) & 10.24\({}_{7.9}\) \\ \hline SDNet & 75.87\({}_{11}\) & 10.17\({}_{11}\) & 82.45\({}_{8.8}\) & 7.74\({}_{42}\) & 80.70\({}_{9.7}\) & 9.89\({}_{11}\) \\ \hline vMFNet & 71.11\({}_{12}\) & 10.06\({}_{9.8}\) & 80.92\({}_{6.7}\) & 7.99\({}_{11}\) & 78.37\({}_{11}\) & 12.97\({}_{11}\) \\ \hline vMFBrain w/o weak & 70.65\({}_{13}\) & 15.65\({}_{15}\) & 79.36\({}_{12}\) & 13.13\({}_{17}\) & 79.39\({}_{9.8}\) & 9.50\({}_{9.3}\) \\ \hline vMFBrain w/ whole tumour weak & 75.02\({}_{11}\) & 11.56\({}_{12}\) & 84.59\({}_{8.5}\) & 8.17\({}_{12}\) & 79.48\({}_{9.6}\) & 10.02\({}_{9.1}\) \\ \hline vMFBrain w/ tumour sub-region weak & **78.43\({}_{\mathbf{9.8}}\)** & **9.14\({}_{\mathbf{8.}}\)** & **85.77\({}_{\mathbf{8.2}}\)** & 5.90\({}_{7.7}\) & **81.31\({}_{\mathbf{9.0}}\)** & **8.08\({}_{\mathbf{7.0}}\)** \\ \hline \end{tabular}
\end{table}
Table 2: Dice (%) and Hausdorff Distance (HD) results for the task of **tumour sub-region segmentation**. We report the mean and standard deviation across volumes.
**Brain tumour segmentation with weak labels:** Overall, as reported in Table 1, the proposed vMFBrain model achieves best performance for most of the cases, particularly when very few annotations are available, i.e. the 0.1% case. When dropping the weak supervision (vMFBrain w/o weak), we observe reduced performance, which confirms the effectiveness of weak supervision. We also observe that the reconstruction of the original image (in vMFNet) does not help. It is possible that reconstruction of the tumour does not help here because the tumour has inconsistent appearance and location between different scans. With more annotated data, all models gradually achieve better performance, as expected. Notably, with only 1% labelled data vMFBrain achieves comparable performance (89.04 on Dice and 6.14 on HD) to the fully supervised UNet trained with all labelled data (90.84 on Dice and 4.49 on HD).
Figure 2: Visualisation of vMF compositional representations (whole tumour supervision). We show the 4 input image modalities, the ground truth tumour segmentation mask, and all 8 vMF channels for the vMFBrain and baseline models trained with different percentages of labelled data. In the red boxes, the other interpretable vMF activations (excluding the tumour kernels) are highlighted. The vMF channels are ordered manually. For the vMFBrain channels, we label with a clinicianβs visual interpretation of which image features activated each kernel. N/I denotes non-interpretable.
**Tumour sub-region segmentation:** We also report the results of tumour sub-region segmentation task in Table 2. For this task, we perform experiments using different weak labels: a) the weak label indicating the presence of the whole tumour i.e. vMFBrain w/ whole weak and b) the weak label indicating the presence of the tumour sub-regions i.e. vMFBrain w/ sub weak. It can be seen that our proposed vMFBrain performed best with both types of weak labels. Predictably, the best performance occurs when more task-specific weak labels (i.e. weak supervision on the tumour sub-regions) are provided.
**Interpretability of compositional representations:** We are particularly interested in the compositionality of the representations when pixel labels are not sufficient. In Fig. 2, we show the kernel activations. Note that the channels are ordered manually. For different runs, the learning is emergent such that kernels randomly learn to represent different components. Clearly, one of the kernels corresponds to the tumour in all cases. Using this kernel, we can detect and locate the tumours. For vMFBrain, training with more labelled data improves the compositionality of the kernels and the activations i.e. different kernels correspond to different anatomical or pathological structures, which are labelled by a clinician performing visual inspection of which image features activated each
Figure 3: Visualisation of vMF compositional representations (tumour sub-region supervision). We show the 4 input image modalities, the corresponding ground truth tumour sub-region segmentation mask and all 8 channels of the representations for the models trained with 1% of labelled data. The channels are ordered manually. For the vMFBrain channels, we label with a clinicianβs visual interpretation of which image features activated each kernel. N/I denotes non-interpretable.
channel. The most interpretable and compositional representation is vMFBrain trained with 1% labelled data. As highlighted in the red boxes, the kernels relate to CSF, brain matter, and the border of the brain even without any information about these structures given during training. Qualitatively, vMFBrain decomposes this information better into each kernel i.e. learns better compositional representations compared to other baseline models. Notably, weak supervision improves compositionality. We also show in Fig. 3 the representations for sub-region segmentation. Overall, we observe that with the more task-specific weak labels, the kernels learn to be more aligned with the sub-region segmentation task, where less information on other clinically relevant features is learnt.
## 5 Conclusion
In this paper, we have presented vMFBrain, a compositional representation learning framework. In particular, we constructed weak labels indicating the presence or absence of the brain tumour and tumour sub-regions in the image. Training with weak labels, better compositional representations can be learnt that produce better brain tumour segmentation performance when the availability of pixel-level annotations is limited. Additionally, our experiments show the interpretability of the compositional representations, where each kernel corresponds to specific anatomical or pathological structures. Importantly, according to our experiments and the results reported in previous studies [15, 18], the vMF-based compositional representation learning framework is robust and applicable to different medical datasets and tasks. In future work, we might consider transferring vMFBrain to 3D in order to process wider spatial context for each structure.
**Acknowledgements** S.A. Tsaftaris acknowledges the support of Canon Medical and the Royal Academy of Engineering and the Research Chairs and Senior Research Fellowships scheme (grant RCSRF1819\(\backslash\)8\(\backslash\)25). Many thanks to Patrick Schrempf and Joseph Boyle for their helpful review comments.
|
2306.14680 | A Conditional Flow Variational Autoencoder for Controllable Synthesis of
Virtual Populations of Anatomy | The generation of virtual populations (VPs) of anatomy is essential for
conducting in silico trials of medical devices. Typically, the generated VP
should capture sufficient variability while remaining plausible and should
reflect the specific characteristics and demographics of the patients observed
in real populations. In several applications, it is desirable to synthesise
virtual populations in a \textit{controlled} manner, where relevant covariates
are used to conditionally synthesise virtual populations that fit a specific
target population/characteristics. We propose to equip a conditional
variational autoencoder (cVAE) with normalising flows to boost the flexibility
and complexity of the approximate posterior learnt, leading to enhanced
flexibility for controllable synthesis of VPs of anatomical structures. We
demonstrate the performance of our conditional flow VAE using a data set of
cardiac left ventricles acquired from 2360 patients, with associated
demographic information and clinical measurements (used as
covariates/conditional information). The results obtained indicate the
superiority of the proposed method for conditional synthesis of virtual
populations of cardiac left ventricles relative to a cVAE. Conditional
synthesis performance was evaluated in terms of generalisation and specificity
errors and in terms of the ability to preserve clinically relevant biomarkers
in synthesised VPs, that is, the left ventricular blood pool and myocardial
volume, relative to the real observed population. | Haoran Dou, Nishant Ravikumar, Alejandro F. Frangi | 2023-06-26T13:23:52Z | http://arxiv.org/abs/2306.14680v2 | A Conditional Flow Variational Autoencoder for Controllable Synthesis of Virtual Populations of Anatomy
###### Abstract
The generation of virtual populations (VPs) of anatomy is essential for conducting in silico trials of medical devices. Typically, the generated VP should capture sufficient variability while remaining plausible and should reflect the specific characteristics and demographics of the patients observed in real populations. In several applications, it is desirable to synthesise virtual populations in a _controlled_ manner, where relevant covariates are used to conditionally synthesise virtual populations that fit a specific target population/characteristics. We propose to equip a conditional variational autoencoder (cVAE) with normalising flows to boost the flexibility and complexity of the approximate posterior learnt, leading to enhanced flexibility for controllable synthesis of VPs of anatomical structures. We demonstrate the performance of our conditional flow VAE using a data set of cardiac left ventricles acquired from 2360 patients, with associated demographic information and clinical measurements (used as covariates/conditional information). The results obtained indicate the superiority of the proposed method for conditional synthesis of virtual populations of cardiac left ventricles relative to a cVAE. Conditional synthesis performance was evaluated in terms of generalisation and specificity errors and in terms of the ability to preserve clinically relevant biomarkers in synthesised VPs, that is, the left ventricular blood pool and myocardial volume, relative to the real observed population.
Keywords: Virtual Population · Generative Model · Normalizing Flow.
## 1 Introduction
_In-silico_ trials (ISTs) use computational modelling and simulation techniques with virtual twin or patient models of anatomy and physiology to evaluate the
safety and efficacy of medical devices virtually [22]. Virtual patient populations (VPs), distinct from virtual twin populations, comprise plausible instances of anatomy and physiology that do not represent any specific real patient's data (as in the case of the latter, viz. virtual twins). In other words, VPs comprise synthetic data that help expand/enrich the diversity of anatomical and physiological characteristics that can be investigated within an IST for a given medical device. A key aspect of patient recruitment in real clinical trials used to assess device performance and generate regulatory evidence for device approval is the clear definition of inclusion and exclusion criteria for the trial. These criteria define the target patient population considered appropriate/safe to assess the performance of the device of interest. Consequently, it is desirable to enable the _controlled_ synthesis of VPs that may be used for device ISTs, in a manner that emulates the imposition of trial inclusion and exclusion criteria.
Virtual populations can be considered to be parametric representations of the anatomy sampled from a generative model. Traditional statistical shape models (SSMs), based on methods such as principal component analysis (PCA), have been widely explored in the past decade [8; 9; 15]. Recent studies focus on deep learning-based generative models due to their automatic and powerful hierarchical feature extraction [3; 7]. For instance, Bonazzola _et al._[3] used a graph convolutional variational auto-encoder (gcVAE) to learn latent representations of 3D left ventricular meshes and used the learnt representations as surrogates for cardiac phenotypes in genome-wide association studies. Dou _et al._[7] proposed learning the shape representations of multiple cardiovascular anatomies using gcVAE independently and then assembling them into complete whole-heart anatomies termed virtual heart chimaeras. Other studies have investigated conditional-generative models for synthesis of VPs of anatomies. For example, Beetz _et al._[1] employed a conditional VAE (cVAE), conditioned on gender and cardiac phase, to allow the synthesis of VPs from biventricular anatomies. In subsequent work [2; 12], they extended their method to a multidomain VAE to model biventricular anatomies at multiple times (across the cardiac cycle), using patient-specific electrocardiogram (ECG) signals as additional conditioning information (in addition to patient demographic data and standard clinical measurements) to guide the synthesis. All aforementioned methods model the latent space in the VAEs/cVAEs as a multivariate Gaussian distribution with a diagonal covariance matrix. This limits the flexibility afforded to the cVAE, as the Gaussian distribution, being unimodal, is a poor approximation to multimodal latent posterior distributions. This in turn limits the overall variability in anatomical shape that can be captured by standard VAEs and cVAEs.
In this study, we address the limitations of the state-of-the-art conditional generative models used to synthesise VPs of anatomical structures. In particular, we propose a method to relax the constraint on modelling the latent distribution as a unimodal multivariate Gaussian, to boost the flexibility of the generative model, and to enable conditional synthesis of diverse and plausible VPs. Recent advances in normalising flows [14; 16; 21] introduce a new solution for this limitation by leveraging a series of invertible parameterized functions
to transform the unimodal distribution to a multimodal one. Motivated by this technique, we propose the first conditional flow VAE (parameterised as a graph-convolutional network) for the task of _controllable_ synthesis of VPs of anatomy. The contributions are as follows: (i) we introduce normalising flows to learn a multimodal latent posterior distribution by transforming the latent variables from a simple unimodal distribution. This helps the generative model capture greater anatomical variability from the observed real population, leading to the synthesis of more diverse VPs; (ii) we condition the flow-based VAE on patient demographic data and clinical measurements. This enables conditional synthesis of plausible VPs (given relevant covariates/conditioning information as inputs), which reflect the observed correlations between nonimaging patient information and anatomical characteristics in the real population.
## 2 Methodology
In this study, we propose a cVAE model equipped with normalising flows for controllable synthesis of VPs of cardiovascular anatomy. A schematic of the proposed conditional flow VAE network architecture is shown in Figure 1. We employ normalising flows in the latent space of the cVAE to transform the initial Gaussian posterior to a complex multimodal distribution.
**Conditional Variational Autoencoder:** A VAE is a probabilistic generative model/network [11] that comprises an encoder and a decoder network branch. The encoder learns a mapping from the input data to a low-dimensional latent space that abstracts the semantic representations from the observations, and the decoder reconstructs the original data from the low-dimensional latent representation. The latent space from which the observed data is generated is given by approximating the posterior distribution of the latent variables using
Figure 1: Schematic illustration of our proposed conditional flow VAE
variational inference. The VAE network is trained by maximising the evidence lower bound (ELBO), which is the expected log-likelihood of the data minus the Kullback-Leibler divergence between the approximate posterior and an assumed prior distribution over the latent variables (typically a multivariate Gaussian distribution). Despite their effectiveness in capturing some of the observed variability in the training population (e.g. of anatomical shapes or images), VAEs do not provide any control over the generation process and hence cannot guarantee that the generated anatomical shapes are representative of target patient populations with specific inclusion/exclusion criteria. Controllable synthesis of anatomical VPs is essential for constructing meaningful cohorts for use in ISTs. The conditional VAE [18] is a VAE variant that uses covariates/conditioning information in addition to the input data (e.g. anatomical shapes) to learn a conditional latent posterior distribution (conditioned on the covariates), enabling controllable synthesis of VPs during inference (given relevant covariates/conditioning information as input).
Our conditional flow VAE (cVAE-NF) is a graph-convolutional network which takes as input a triangular surface mesh representation of an anatomical structure of interest, i.e., the Left Ventricle (LV) in this study, and its associated covariates/conditioning variables, i.e., the patient demographic data and clinical measurements, such as gender, age, weight, blood cholesterol, etc., and outputs the reconstructed surface mesh. Each mesh is represented by a list of 3D spatial coordinates of its vertices and an adjacency matrix defining vertex connectivity (i.e. edges of mesh triangles). The encoder and decoder each contain five residual graph-convolutional blocks. Each block comprises two Chebyshev graph convolutions, each of which is followed by batch normalisation and ELU activation. A residual connection is added between the input and the output of each graph-convolutional block. Hierarchical mesh down/up-sampling operations proposed in CoMA [13] are adopted after each block to capture the global and local shape context. The VAE model is conditioned on the covariates in two ways: by scaling the hidden representations in the encoder, similar to adaptive instance normalization [10], with the covariates used as input to generate the scaling factors, and by concatenating the covariates with the latent variables before decoding.
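The encoder-side scaling and decoder-side concatenation described above could be realised, for instance, along the following lines. This is a minimal PyTorch sketch; the module name `CovariateScaling`, the helper `condition_latent` and all tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CovariateScaling(nn.Module):
    """Scale hidden mesh features by factors predicted from the covariates,
    in the spirit of adaptive instance normalisation (illustrative sketch)."""

    def __init__(self, num_covariates: int, num_features: int):
        super().__init__()
        self.to_scale = nn.Linear(num_covariates, num_features)

    def forward(self, h: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_vertices, num_features) hidden representations
        # c: (batch, num_covariates) demographic data / clinical measurements
        scale = self.to_scale(c).unsqueeze(1)  # (batch, 1, num_features)
        return h * scale                       # broadcast the scaling over all vertices


def condition_latent(z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Decoder-side conditioning: concatenate covariates with the latent code."""
    return torch.cat([z, c], dim=-1)
```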
**Flexible Posterior using Normalizing Flow:** Vanilla cVAEs model the approximate posterior distribution using Gaussian distributions with a diagonal covariance matrix. However, such a unimodal distribution is a poor approximation of the complex true latent posterior distribution in most real-world applications (e.g. for shapes of the LV observed across a population), limiting the anatomical variability captured by the model. In this study, we introduce normalising flows to construct a flexible multi-modal latent posterior distribution by applying a series of differentiable, invertible/diffeomorphic transformations iteratively to the initial simple unimodal latent distribution. As shown in Fig. 2, a two-dimensional Gaussian distribution can be transformed into a multi-modal distribution by applying several normalising flow steps to the former.
Consider an invertible and smooth mapping function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) with inverse \(f^{-1}=g\), and a random variable \(\mathbf{z}\) with distribution \(q(\mathbf{z})\). The transformed
variable \(\mathbf{z}^{\prime}=f(\mathbf{z})\) follows a distribution given by:
\[q(\mathbf{z}^{\prime})=q(\mathbf{z})\left|\text{det}\frac{\partial f}{\partial \mathbf{z}}\right|^{-1} \tag{1}\]
where \(\text{det}\frac{\partial f}{\partial\mathbf{z}}\) is the Jacobian determinant of \(f\). Therefore, we can obtain a complex multi-modal density by composing multiple invertible mappings to transform the initial, simple and tractable density sequentially, as follows,
\[\mathbf{z}_{i}=f_{i}\circ\ldots\circ f_{2}\circ f_{1}(\mathbf{z}_{0}) \tag{2}\]
\[\ln q_{i}(\mathbf{z}_{i})=\ln q_{0}(\mathbf{z}_{0})-\sum_{k=1}^{i}\ln\left|\text{det }\frac{\partial f_{k}}{\partial\mathbf{z}_{k-1}}\right| \tag{3}\]
The specific mathematical formulation of the normalising flow function is important and must be chosen with care to allow for efficient gradient computation during training, scalable inference, and efficiency in computing the determinant of the Jacobian. In this study, we leverage the planar flow in [16] as a basic unit of our latent normalising flow net. Specifically, each transformation unit is given by,
\[f(\mathbf{z})=\mathbf{z}+\mathbf{u}h(\mathbf{w}^{\top}\mathbf{z}+b) \tag{4}\]
where \(\mathbf{w}\in\mathbb{R}^{d}\), \(\mathbf{u}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) are learnable parameters; \(h(\cdot)\) is a smooth element-wise non-linear function with derivative \(h^{\prime}(\cdot)\) (we use \(\tanh\) in our study) and \(\mathbf{z}\) denotes the latent variables sampled from the posterior distribution. Therefore, we can compute the log-determinant of the Jacobian term in \(O(d)\) time as follows:
\[\phi(\mathbf{z})=h^{\prime}(\mathbf{w}^{\top}\mathbf{z}+b)\mathbf{w} \tag{5}\]
\[\left|\text{det}\frac{\partial f_{i}}{\partial\mathbf{z}_{i-1}}\right|=\left| \text{det}(\mathbf{I}+\mathbf{u}\phi(\mathbf{z})^{\top})\right|=\left|1+ \mathbf{u}^{\top}\phi(\mathbf{z})\right| \tag{6}\]
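For concreteness, a single planar-flow step and the accumulation of log-determinants over several steps might be implemented roughly as follows. This is a hedged PyTorch sketch; the class and function names are assumptions (not the paper's code), and the constraint that keeps the planar map invertible (\(\mathbf{w}^{\top}\mathbf{u}\geq-1\)) is omitted for brevity.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar-flow step f(z) = z + u * tanh(w^T z + b) with its
    log|det df/dz| = log|1 + u^T phi(z)| (Eqs. 4-6); illustrative sketch."""

    def __init__(self, dim: int):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) samples from the current latent distribution
        lin = z @ self.w + self.b                                   # w^T z + b, shape (batch,)
        f_z = z + self.u * torch.tanh(lin).unsqueeze(-1)            # Eq. (4)
        phi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w   # h'(w^T z + b) w, Eq. (5)
        log_det = torch.log(torch.abs(1.0 + phi @ self.u) + 1e-8)   # Eq. (6), shape (batch,)
        return f_z, log_det


def apply_flows(z0: torch.Tensor, flows):
    """Compose several flow steps and accumulate the log-determinants (Eqs. 2-3)."""
    z, total_log_det = z0, torch.zeros(z0.shape[0], device=z0.device)
    for flow in flows:
        z, log_det = flow(z)
        total_log_det = total_log_det + log_det
    return z, total_log_det
```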
Finally, the network is trained by optimizing the modified ELBO based on equation 3:
\[\ln p(\mathbf{x}|\mathbf{c})\geq\mathbb{E}_{q(\mathbf{z}_{0}|\mathbf{x}, \mathbf{c})}\left[\ln p(\mathbf{x}|\mathbf{z}_{i},\mathbf{c})+\sum_{k=1}^{i}\ln \left|\text{det}\frac{\partial f_{k}}{\partial\mathbf{z}_{k-1}}\right|\right] -\text{KL}(q(\mathbf{z}_{0}|\mathbf{x},\mathbf{c})\|p(\mathbf{z}_{i})) \tag{7}\]
Figure 2: Effect of normalising flow on a Gaussian distribution. Step 0 is the initial two-dimensional Gaussian distribution, and steps 1-5 show the distribution of the latent variables transformed by the normalising flow layers (i.e., planar flows).
where \(\ln p(\mathbf{x}|\mathbf{c})\) is the marginal log-likelihood of the observed data \(\mathbf{x}\) (i.e. here \(\mathbf{x}\) represents an LV graph/mesh), conditioned on the covariates of interest (i.e. patient demographics and clinical measurements) \(\mathbf{c}\); \(i\) is the number of normalising flow steps. \(p(\mathbf{x}|\mathbf{z}_{i},\mathbf{c})\) is the likelihood of the data parameterised by the decoder network, which reconstructs/predicts \(\mathbf{x}\) given the latent variables \(\mathbf{z}_{i}\), transformed by the latent (planar) normalising flows, and the conditioning variables \(\mathbf{c}\); \(\text{KL}(q(\mathbf{z}_{0}|\mathbf{x},\mathbf{c})\|p(\mathbf{z}_{i}))\) is the Kullback-Leibler divergence of the initial approximate posterior \(q(\mathbf{z}_{0}|\mathbf{x},\mathbf{c})\) from the prior \(p(\mathbf{z}_{i})=\mathcal{N}(\mathbf{z}_{i}\mid 0,I)\).
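A training objective based on Eq. (7) could be assembled roughly as in the sketch below (negative ELBO to be minimised). This assumes a standard-normal prior, a squared-error reconstruction term as a stand-in for \(-\ln p(\mathbf{x}|\mathbf{z}_{i},\mathbf{c})\), and meshes of shape (batch, vertices, 3); argument names are illustrative and not the authors' exact loss.

```python
import torch

def flow_vae_loss(x_recon, x, mu, log_var, sum_log_det):
    """Negative modified ELBO of Eq. (7) under the stated assumptions.
    x_recon, x: (batch, num_vertices, 3) reconstructed / input meshes
    mu, log_var: (batch, latent_dim) parameters of q(z0 | x, c)
    sum_log_det: (batch,) accumulated log|det| terms from the flow steps"""
    recon = torch.sum((x_recon - x) ** 2, dim=(1, 2)).mean()                       # reconstruction term
    kl = (-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim=1)).mean()   # KL(q(z0|x,c) || N(0, I))
    flow = sum_log_det.mean()                                                      # E[sum_k log|det df_k/dz_{k-1}|]
    return recon + kl - flow                                                       # minimising this maximises the ELBO
```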
## 3 Experimental setup and Results
**Data:** In this study, we created a cohort of 2360 triangular meshes of the left ventricle (LV) based on a subset of cardiac cine-MR imaging data available from the UK Biobank (UKBB) by registering a cardiac LV atlas mesh [17] to manual contours (as described in [23]). We randomly split the data set into 422/59/1879 for training, validation, and testing, respectively. All meshes have the same fixed graph topology, sharing the same edges and faces but differing in the position of vertices; i.e. there is pointwise correspondence across all shapes. We used 14 covariates available for the same subjects in UKBB as conditioning variables for our model, including gender, age, height, weight, pulse, alcohol drinker status, smoking status, HbA1c, cholesterol, C-reactive protein, glucose, high-density lipoprotein cholesterol (HDL), insulin-like growth factor 1 (IGF-1), and low-density lipoprotein (LDL) cholesterol. These covariates were chosen because they are known cardiovascular risk factors.
**Implementation Details:** The framework was implemented using PyTorch on a standard PC with an NVIDIA RTX 2080Ti GPU. We trained our model using the AdamW optimizer with an initial learning rate of 1e-3 and a batch size of 16 for 1000 epochs. The number of features for each graph-convolutional block in the encoder was 16, 32, 32, 64, and 64, with the reverse order in the decoder. The latent dimension was set to 16. The down/up-sampling factor was four, and we used a warm-up strategy [19] for the weight of the KL loss to prevent model collapse.
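The KL warm-up could, for example, follow a simple linear schedule such as the sketch below; the 100-epoch ramp length is an assumption for illustration, as the paper only states that a warm-up strategy [19] was used.

```python
def kl_weight(epoch: int, warmup_epochs: int = 100) -> float:
    """Linear warm-up of the KL-term weight to mitigate posterior collapse.
    The ramp length is an illustrative assumption, not the paper's value."""
    return min(1.0, epoch / float(warmup_epochs))
```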
**Evaluation metrics:** We compared our model (cVAE-NF) with a traditional PCA-based SSM, two generative models without conditioning information, namely a vanilla VAE and a VAE with normalising flows (VAE-NF), and the vanilla cVAE. Comparison with the vanilla cVAE also serves to validate the performance of existing approaches [1, 2], as they are built on the cVAE, differing only in the covariates and the basic units used in the network. We evaluated the performance of all methods using three different metrics: 1) the reconstruction error, which evaluates the generalisability of the trained model to reconstruct/represent unseen shapes, using the distance between the reconstructed mesh and the ground truth/original mesh; 2) the specificity error, which measures the anatomical plausibility of the synthesised virtual cohorts, using the distance between each generated mesh and its nearest neighbour in the unseen real population [6]; and 3) the variability in the left ventricular volume in the synthesised cohorts, to assess the diversity of the instances generated in terms of a clinically relevant
cardiac index. The variability in LV volume was quantified as the standard deviation of the volumes of the LV blood pools (BPVols). The Euclidean distance was used to evaluate all three metrics. Additionally, we measured the activity of each latent dimension using the statistic \(A=\text{Cov}_{\mathbf{x}}(\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x})}[\mathbf{z}])\) over the observations \(\mathbf{x}\)[4]. A higher activity score indicates that a given latent dimension can capture greater population-wide shape variability.
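In practice, the activity statistic can be estimated from the encoder means computed for all test subjects, e.g. as in the sketch below; variable names are assumptions for illustration.

```python
import torch

def latent_activity(posterior_means: torch.Tensor) -> torch.Tensor:
    """Per-dimension activity score A = Cov_x(E_{z ~ q(z|x)}[z]).
    posterior_means: (num_subjects, latent_dim) encoder means over the cohort;
    the score per dimension is the variance of these means across subjects."""
    return posterior_means.var(dim=0, unbiased=True)
```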
The results of our method are presented in Table 1. Our model outperforms the cVAE in terms of reconstruction error and the amount of volume variability captured in the synthesised VP (the reference volume variability for the real UKBB population was 33.38 \(mm^{3}\)). However, the cVAE achieved lower specificity errors than our model. This indicates that our method is better at capturing the population's shape variability, but it also creates some instances that are further away from the real population, resulting in higher specificity errors. We attribute this to the normalising flow's ability to learn a more flexible approximate posterior latent distribution of the observed shapes than the cVAE. This is also seen when comparing the performance of VAE and VAE-NF, where the latter can synthesise significantly more diverse VPs (e.g. it improves the volume variability from 3.00 to 16.03). Figure 3 shows the variability captured in each latent dimension. We observe that cVAE-NF has higher activity scores in all latent dimensions than the vanilla cVAE. The normalising flow allows for the approximation of multimodal latent distributions in the generative model, resulting in greater shape variability. Although PCA outperforms our method in terms of generalisation error and volume variability captured, it does not allow for controllable synthesis of VPs based on relevant patient demographic information and clinical measurements, making it less useful for our application of synthesising VPs for use in ISTs.
It is essential to capture the distribution of clinically relevant biomarkers (e.g. BPVol) in the synthesised virtual populations (VPs) based on the specified covariates/conditioning information available for real patients, in order to effectively replicate the inclusion/exclusion criteria used during trial design in ISTs. For example, the BPVol of women is known to be lower than that of men [20]. To verify this, we generated VPs using cVAE and our method, conditioned on real patient data (covariates) from the UK Biobank. Figure 3 summarises the
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Methods & Reconstruction Error\(\downarrow\) & Specificity Error\(\downarrow\) & Volume Variability\(\uparrow\) \\ \hline PCA & **0.82\(\pm\)0.16** & 1.48\(\pm\)0.26 & **32.74** \\ VAE & 1.29\(\pm\)0.21 & **1.39\(\pm\)0.98** & 3.00 \\ VAE-NF & 0.90\(\pm\)1.76 & 1.60\(\pm\)0.34 & 16.03 \\ \hline cVAE & 1.43\(\pm\)0.26 & **1.32\(\pm\)0.21** & 28.39 \\ Ours & **1.23\(\pm\)0.23** & 1.38\(\pm\)0.20 & **29.91** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results of the investigated methods on a hold-out test dataset. Bold values indicate results that are significantly better than those of the other methods.
BPVol distribution for both genders in the synthesised VPs and the real UKBB population, and the former accurately reflects the known trend of women having lower BPVol than men. Compared to cVAE, our model generates a VP that more closely matches the distribution of the volume of the LV blood pool observed in the real population. We also visualised the effect of manipulating individual attributes on two real patients in Fig. 4. We selected two representative attributes that are significantly associated with BPVol and myocardial volume (MyoVol): weight and age. We observe that BPVol and MyoVol of the LV are positively correlated with the weight of the patients (as expected). On the other hand, increasing the individual's age results in a smaller BPVol, but an increased MyoVol (as visualised in Fig. 4), which is known to be due to cardiac hypertrophy caused by aging [5].
## 4 Conclusion
We proposed a conditional flow VAE model for the controllable synthesis of VPs of anatomy. Our approach was demonstrated to increase the flexibility of
Figure 4: Two representative examples of the reconstructed shapes and their variations through manipulation of two demographic attributes, i.e., weight and age. MyoVol and BPVol are shown in the bottom right corner.
Figure 3: Left: Comparison of the activity scores in different latent dimensions between the cVAE and cVAE-NF; right: Kernel density plots for BPVol from the VPs generated by cVAE and cVAE-NF and the real patient population (UKBB).
the learnt latent distribution, resulting in VPs that captured greater variability in the LV shape than the vanilla cVAE. Furthermore, our model was able to capture the relationship between the covariates/conditioning variables and the shape of the LV, and to synthesise target VPs that fit the desired criteria (in terms of patient demographics and clinical measurements) and closely match the real population in terms of a clinically relevant biomarker (LV BPVol). These results suggest that our approach has potential for the controllable synthesis of diverse, yet plausible, VPs of anatomy. Future work will focus on modelling the whole heart and exploring the impact of individual covariates on VP synthesis in more detail.
## 5 Acknowledgement
This research was carried out using data from the UK Biobank (access application 11350). This work was supported by the Royal Academy of Engineering (INSILEX CiET1819/19), Engineering and Physical Sciences Research Council (EPSRC) UKRI Frontier Research Guarantee Programmes (INSILICO, EP/Y030494/1), and the Royal Society Exchange Programme CROSSLINK IES\NSFC\201380.
|
2310.19442 | Birkhoff-James orthogonality in certain tensor products of Banach spaces
II | In this article, we discuss the relationship between Birkhoff-James
orthogonality of elementary tensors in the space
$L^{p}(\mu)\otimes^{\Delta_{p}}X,\; (1\leq p<\infty)$ with the individual
elements in their respective spaces, where $X$ is a Banach space whose norm is
Fr$\acute{e}chet$ differentiable and $\Delta_{p}$ is the natural norm induced
by $L^{p}(\mu,X)$. In order to study the said relationship, we first provide
some characterizations of Birkhoff-James orthogonality of elements in the
Lebesgue-Bochner space $L^{p}(\mu,X)$. | Mohit, Ranjana Jain | 2023-10-30T11:11:18Z | http://arxiv.org/abs/2310.19442v1 | # Birkhoff-James orthogonality in certain tensor products of Banach spaces II
# Birkhoff-James orthogonality in certain tensor products of Banach spaces II
Mohit and Ranjana Jain
Department of Mathematics, University of Delhi, Delhi, India mohit@math. Delhi.edu.cn
**Abstract:** In this article, we discuss the relationship between Birkhoff-James orthogonality of elementary tensors in the space \(L^{p}(\mu)\otimes^{\Delta_{p}}X,\ (1\leq p<\infty)\) with the individual elements in their respective spaces, where \(X\) is a Banach space whose norm is Fr\(\acute{e}\)chet differentiable and \(\Delta_{p}\) is the natural norm induced by \(L^{p}(\mu,X)\). In order to study the said relationship, we first provide some characterizations of Birkhoff-James orthogonality of elements in the Lebesgue-Bochner space \(L^{p}(\mu,X)\).
+
Footnote β : Research of the first named author is supported by Savitribai Jyotirao Phule Single Girl Child Fellowship vide F.No. 82-7/2022(SA-III) and the second named author is supported by Faculty Research Programme Grant by IoE, University of Delhi vide Ref.No./IoE/2023-24/12/FRP.
## 1. Introduction
The notion of Birkhoff-James orthogonality (in short, BJ-orthogonality) has been studied extensively in the last few decades in the category of normed spaces and more recently it has also attracted researchers from the areas of Banach algebras and operator algebras. One of the major reasons behind the study of the notion of BJ-orthogonality is that it has various applications in the geometry of Banach spaces, see [1, 6]. Recall that, for a normed space \(X\) over a field \(\mathbb{F}\) (\(\mathbb{R}\) or \(\mathbb{C}\)) and \(x,\ y\in X,\ x\) is said to be BJ-orthogonal to \(y\), denoted as \(x\perp_{BJ}y\) if
\[\|x+\lambda y\|\geq\|x\|,\ \text{for all}\ \lambda\in\mathbb{F}.\]
Interestingly, there exists a very natural relationship between the notion of BJ-orthogonality and the (more classical and much thoroughly studied) notion of best approximation elements in normed spaces. The problem of best approximation lies in finding, for a given \(x\) in a normed space \(X\), an element \(g_{0}\) in a subset (mostly a subspace) \(G\) of \(X\) such that
\[\|x-g_{0}\|=\inf\{\|x-g\|:g\in G\}.\]
Such an element \(g_{0}\) is called a point of best approximation of \(x\) out of \(G\). We shall represent the set of all best approximants of \(x\) in \(G\) by \(\mathcal{P}_{G}(x)\). It is rather straightforward to see that for any two elements \(x,y\) in a normed space \(X\), \(x\perp_{BJ}y\) if
and only if \(0\in\mathcal{P}_{G}(x)\), \(G\) being the subspace spanned by \(y\). Needless to mention, a good deal of research has been done from both perspectives.
Since the seminal work of Grothendieck on tensor products of topological vector spaces [5], the theories of tensor products of Banach spaces and Banach algebras (in particular) have proved to be indispensable in the proper understanding of these categories. Our motivation to study BJ-orthogonality arose from the natural question of studying tensor product spaces from the perspective of BJ-orthogonality. In this direction, in [11], we initiated the analysis of BJ-orthogonality of elementary tensors in various tensor product spaces in terms of the BJ-orthogonality of the individual elements in their respective spaces. One of the main results exhibited that for real Banach spaces \(X\) and \(Y\), \(x_{1}\otimes y_{1}\perp_{BJ}x_{2}\otimes y_{2}\) in \(X\otimes^{\lambda}Y\) if and only if \(x_{1}\perp_{BJ}x_{2}\) or \(y_{1}\perp_{BJ}y_{2}\), where \(\lambda\) is the injective tensor product [11, Theorem 3.7]. However this equivalence fails in \(C_{\mathbb{C}}(K_{1})\otimes^{\alpha}C_{\mathbb{C}}(K_{2})\), for any reasonable cross norm \(\|\cdot\|_{\alpha}\)[11, Example 3.2]. It was also established that this equivalence is true in \(L^{p}(\mu)\otimes^{\Delta_{p}}L^{p}(\nu)\), \(1<p<\infty\), where \(\mu\), \(\nu\) are positive measures and \(\Delta_{p}\) is defined in Definition 2.1 (see, [11, Theorem 3.9]).
Having made these observations, it was then quite natural to ask whether similar results hold in the tensor product spaces \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) for \(1\leq p<\infty\). To our somewhat mixed satisfaction, in this article, we establish that a similar result holds in \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) as well, where \(X\) is a Banach space whose norm is Fr\(\acute{e}\)chet differentiable, \(\mu\) is a positive measure and \(1<p<\infty\). When \(p=1\), quite surprisingly, we observe through concrete examples that such a result is not true in the tensor product spaces \(L^{1}(\mu)\otimes^{\gamma}L^{1}(\nu)\) and \(L^{1}(\mu)\otimes^{\gamma}X\), where \(\otimes^{\gamma}\) is the Banach space projective tensor product.
Our approach relies on the well-known isometric identification between \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) and the Lebesgue-Bochner space of \(p\)-integrable functions \(L^{p}(\mu,X)\) (see [3, § 7.2]) and an appropriate exploitation of the above-mentioned relationship between the notions of BJ-orthogonality and best approximation elements. More precisely, motivated by some classical results of Kripke-Rivlin and Singer regarding best approximations (as mentioned in the following two paragraphs) in \(L^{p}\)-spaces, we first obtain certain characterizations of best approximation elements for points in \(L^{p}(\mu,X)\) and then we exploit those to derive the above-mentioned results in the context of tensor products.
In 1965, Kripke and Rivlin [8] characterized the elements of \(\mathcal{P}_{G}(f)\) for \(f\in L^{1}(\mu)\setminus\overline{G}\), for a positive measure space \((S,\mu)\) and a subspace \(G\) of \(L^{1}(\mu)\). In particular, they proved that \(g_{0}\in\mathcal{P}_{G}(f)\) if and only if
\[\bigg{|}\int\limits_{S}g(s)\ \overline{\operatorname{sign}}(f(s)-g_{0}(s))\,ds \bigg{|}\leq\int\limits_{Z(f-g_{o})}|g(s)|\,ds \tag{1}\]
for all \(g\in G\), here \(Z(f)\) denotes the zero set \(\{s\in S:f(s)=0\}\) of \(f\) and \(\operatorname{sign}(a)=a/|a|\) for \(0\neq a\in\mathbb{C},\ \operatorname{sign}(0)=0\).
Later, in 1970, Singer [12] gave a different proof of this characterization, and also provided its analogue for elements in \(L^{p}(\mu)\), \(1<p<\infty\). More precisely, he proved that if \((S,\mu)\) is a positive measure space and \(G\) is a subspace of \(L^{p}(\mu)\), then for \(f\in L^{p}(\mu)\setminus\overline{G},\;g_{0}\in\mathcal{P}_{G}(f)\) if and only if
\[\int\limits_{S}g(s)|f(s)-g_{0}(s)|^{p-1}\overline{sign}(f(s)-g_{0}(s))\,ds=0\]
for all \(g\in G\)[12, Theorem 1.11]. A natural question arises: what can be said about best approximation for vector-valued integrable functions? In 1989, Smirnov [13] characterized the elements of \(\mathcal{P}_{G}(f)\) for \(f\in C([0,1],X)\setminus U\), for a real smooth Banach space \(X\) and a convex subset \(U\) of \(C([0,1],X)\) (equipped with the integral-norm).
In Section 3, for any positive measure space \((S,\Sigma,\mu)\) and a Banach space \(X\) belonging to a relatively large class, we provide natural analogues of the characterizations of best approximation elements by Kripke-Rivlin and Singer in \(L^{p}(\mu,X),1\leq p<\infty\), using entirely different techniques. Before using these characterizations in the tensor product context, for \(p>1\), we derive a new and quite elegant proof of Light's theorem [10, Corollary 2] for Banach spaces with Fr\(\acute{e}\)chet differentiable norm. More precisely, we prove that if \(f\in L^{p}(\mu,X)\) (\(1<p<\infty\)) and \(Y\) is a closed subspace of \(X\), where \(X\) is a Banach space whose norm is Fr\(\acute{e}\)chet differentiable and \(\mu\) is a finite complete positive measure, then \(f\perp_{BJ}L^{p}(\mu,Y)\) if and only if \(f(s)\perp_{BJ}Y\) for a.e."\(s\)". We must mention that the proof by Light was based on some distance formula.
Finally, in Section 4, we present our main results related to the BJ-orthogonality of tensor products.
## 2. Preliminaries
We first collect some basic definitions and results which we need for our investigation. For a measure space \((S,\Sigma,\mu)\) (\(\mu\) being a positive measure) and a Banach space \(X\) (real or complex), _Lebesgue-Bochner space_ is defined as
\[L^{p}(\mu,X)=\{f:S\to X|\;f\mbox{ is strongly measurable and }\;\int\limits_{S}\|f(s)\|^{p}\,ds<\infty\},\]
\(1\leq p<\infty\), where almost everywhere equal functions are identified.
**Definition 2.1**.: For \(u\in L^{p}(\mu)\otimes X\), define the \(\Delta_{p}\)-norm as
\[||u||_{\Delta_{p}}=||\phi(u)||_{L^{p}(\mu,X)},\]
where \(\phi:L^{p}(\mu)\otimes X\to L^{p}(\mu,X)\) given by \(\phi(f\otimes x)=f(\cdot)x\) is an injective map. We denote the completion of \(L^{p}(\mu)\otimes X\) with respect to the \(\Delta_{p}\)-norm by \(L^{p}(\mu)\otimes^{\Delta_{p}}X\).
For \(p=1\), \(\Delta_{p}\)-norm coincides with the _Banach space projective tensor norm_ given by
\[\|u\|_{\gamma}=\inf\left\{\sum_{i=1}^{n}\|f_{i}\|\|x_{i}\|;\ u=\sum_{i=1}^{n}f_{i }\otimes x_{i}\right\},\ \ u\in L^{1}(\mu)\otimes X.\]
It is well known that the spaces \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) and \(L^{1}(\mu)\otimes^{\gamma}L^{1}(\nu)\) are isometrically isomorphic to \(L^{p}(\mu,X)\) and \(L^{1}(\mu\times\nu)\), respectively; see [3, § 7.1, 7.2].
For \(0\neq x\in X\), a _support map_ \(F_{x}\) at \(x\) is a bounded linear functional on \(X\) of norm one satisfying \(F_{x}(x)=\|x\|.\) An element \(x\) is said to be _smooth_ if \(F_{x}\) is unique, and if every non-zero element of \(X\) is smooth then \(X\) is called _smooth_.
The norm function \(\|\cdot\|\) on \(X\) is said to be
* _G\(\hat{a}\)teaux differentiable_ at \(0\neq x\in X\) if \[\lim_{\alpha\to 0}\frac{\|x+\alpha y\|-\|x\|}{\alpha}\] exists for every \(y\in X\). It is well known that an element \(x\) is smooth if and only if the norm is G\(\hat{a}\)teaux differentiable at \(x\), and in this case, the G\(\hat{a}\)teaux derivative at \(x\) takes the value \(Re(F_{x}(y))\) in the direction of \(y\in X\).
* _Fr\(\acute{e}\)chet differentiable_ at \(0\neq x\in X\) if there exists an \(f\in X^{*}\) satisfying \[\lim_{h\to 0}\frac{\left|\|x+h\|-\|x\|-f(h)\right|}{\|h\|}=0.\] The norm function is Fr\(\acute{e}\)chet differentiable if it is Fr\(\acute{e}\)chet differentiable at every non-zero point.
It is known that Fr\(\acute{e}\)chet differentiability implies G\(\hat{a}\)teaux differentiability, and thus the space \(X\) is smooth if the norm function on \(X\) is Fr\(\acute{e}\)chet differentiable.
Throughout the article, \((S,\mu)\) denotes a positive measure space unless specified.
## 3. Best Approximation in \(L^{p}(\mu,X)\)
We first suitably characterize the elements of best approximation in a subspace of \(L^{1}(\mu,X)\). One can easily observe that for \(X=\mathbb{F}\), by the Riesz representation theorem, the support map \(F_{z_{0}}\) is given by \(F_{z_{0}}(z)=\left\langle z,\frac{z_{0}}{|z_{0}|}\right\rangle_{X}=z\cdot\overline{\operatorname{sign}}(z_{0})\). Thus, Kripke-Rivlin's characterization given in (1) can be reformulated as \(g_{0}\in\mathcal{P}_{G}(f)\) if and only if
\[\left|\int\limits_{Z(f-g_{0})^{c}}F_{f(s)-g_{0}(s)}(g(s))\,ds\right|\leq\int \limits_{Z(f-g_{0})}|g(s)|\,ds\]
for every \(g\in G.\) It is worth mentioning that we obtain a similar characterization for vector-valued integrable functions taking values in a real Banach space when the norm on \(X\) is Fr\(\acute{e}\)chet differentiable. Further, when \(X\) is a complex Banach space we have
a slightly weaker characterization, but for a large class of spaces, namely smooth Banach spaces. We deploy a few techniques of Smirnov [13] to prove the necessary part of the following result.
**Theorem 3.1**.: _Let \(X\) be a complex smooth Banach space and \(G\) be a subspace of \(L^{1}(\mu,X)\). For \(f\in\;L^{1}(\mu,X)\setminus\overline{G},\;g_{0}\in\mathcal{P}_{G}(f)\) if and only if_
\[\bigg{|}\int\limits_{Z(f-g_{0})^{c}}Re(F_{f(s)-g_{0}(s)}(g(s)))\,ds\bigg{|}\leq \int\limits_{Z(f-g_{0})}\|g(s)\|\,ds \tag{2}\]
_for all \(g\in G\)._
Proof.: First suppose that \(g_{0}\in\mathcal{P}_{G}(f)\) and let \(g\) be any arbitrary element of \(G\). Then for \(\alpha\in\mathbb{C}\), \(\|f-g_{0}+\alpha(g_{0}-g)\|_{1}\geq\|f-g_{0}\|_{1}\), that is,
\[\int\limits_{S}\|f(s)-g_{0}(s)+\alpha(g_{0}(s)-g(s))\|_{X}-\|f(s)-g_{0}(s)\|_{ X}\,ds\geq 0. \tag{3}\]
For each \(n\in\mathbb{N}\), consider a measurable function \(h_{n}:S\to\mathbb{R}\) given by
\[h_{n}(s)=n\big{(}\big{\|}f(s)-g_{0}(s)+\frac{1}{n}(g_{0}(s)-g(s))\big{\|}_{X}- \big{\|}f(s)-g_{0}(s)\big{\|}_{X}\big{)},\;s\in S.\]
By the triangle inequality we have \(|h_{n}(s)|\leq\|g_{0}(s)-g(s)\|\) for all \(s\in S\) and \(n\in\mathbb{N}\). Thus, by Lebesgue dominated convergence theorem and by (3), we have \(\int\limits_{S}\lim_{n\to\infty}h_{n}(s)\,ds\geq 0.\) Now, for \(s\in S\setminus Z(f-g_{0})\), G\(\hat{a}\)teaux differentiability of the norm function gives
\[\lim_{n\to\infty}h_{n}(s)=Re(F_{f(s)-g_{0}(s)}(g_{0}(s)-g(s))).\]
Thus, we have
\[\int\limits_{Z(f-g_{0})^{c}}Re(F_{f(s)-g_{0}(s)}(g_{0}(s)-g(s)))\,ds+\int \limits_{Z(f-g_{0})}\|g_{0}(s)-g(s)\|\,ds\geq 0.\]
for all \(g\in G\). Since \(G\) is a subspace, replacing \(g\) by \(g_{0}-g\) and \(g_{0}+g\) and using the fact that real part of support map is real linear we have
\[\bigg{|}\int\limits_{Z(f-g_{0})^{c}}Re(F_{f(s)-g_{0}(s)}(g(s)))\,ds\bigg{|} \leq\int\limits_{Z(f-g_{0})}\|g(s)\|\,ds\]
for all \(g\in G\).
Conversely, assume that the inequality holds. In order to prove \(g_{0}\in\mathcal{P}_{G}(f)\), equivalently, \(f-g_{0}\perp_{BJ}G\), by [7, Proposition 1.5], it is sufficient to prove that \(\underset{\phi\in[0,2\pi)}{\text{inf}}\;D_{\phi,\;f-g_{0}}(g)\geq 0\) for all \(g\in\;G\), where \(D_{\phi,\;f-g_{0}}(g)\) is the \(\phi\)-G\(\hat{a}\)teaux derivative
of the norm function at point \(f-g_{0}\) in the direction of \(g\). For this, let \(g\in G\) be any arbitrary element and \(\phi\in[0,2\pi).\) Then
\[D_{\phi,\;f-g_{0}}(g)=\lim_{\alpha\to 0^{+}}\frac{\|f-g_{0}+\alpha e^{i\phi}g\|_{1}- \|f-g_{0}\|_{1}}{\alpha}\] \[\qquad=\lim_{\alpha\to 0^{+}}\int\limits_{S}\frac{1}{\alpha}(\|f(s)-g _{0}(s)+\alpha e^{i\phi}g(s)\|_{X}-\|f(s)-g_{0}(s)\|_{X})\,ds\] \[\qquad=\int\limits_{S}\lim_{\alpha\to 0^{+}}\frac{1}{\alpha}(\|f(s)-g _{0}(s)+\alpha g_{1}(s)\|_{X}-\|f(s)-g_{0}(s)\|_{X})\,ds\]
using Lebesgue dominated convergence theorem, where \(g_{1}=e^{i\phi}g\in G\) as \(X\) is a complex Banach space. Using (2) for \(g_{1}\), we have
\[\int\limits_{Z(f-g_{0})^{c}}Re(F_{f(s)-g_{0}(s)}(g_{1}(s)))\,ds+\int\limits_{Z (f-g_{0})}\|g_{1}(s)\|\,ds\geq 0.\]
Since the norm function on \(X\) is G\(\hat{a}\)teaux differentiable, the above inequality reduces to
\[\int\limits_{S}\lim_{\alpha\to 0^{+}}\frac{\|f(s)-g_{0}(s)+\alpha g_{1}(s)\|_{X}- \|f(s)-g_{0}(s)\|_{X}}{\alpha}\,ds\geq 0,\]
which completes the proof.
Next, we derive a similar characterization when \(X\) is a real Banach space. To do this, we first prove a result which is motivated by [4, Lemma 3], with a slightly different notion of support map. Also, it is worth mentioning that for real Banach spaces, this result was first proved by Cudia [2, Corollary 4.11]. However, we present a much simpler proof for any Banach space.
**Proposition 3.2**.: _Let \(X\) be a Banach space whose norm is Frechet differentiable. Then the map \(T:X\to X^{*}\) defined by \(T(x)=F_{x}\) for \(0\neq x\) and \(T(0)=0\) is continuous on \(X\setminus\{0\}\), where \(X\) and \(X^{*}\) are equipped with norm topologies._
Proof.: Since \(F_{\alpha x}=F_{x}\) for \(\alpha>0\), it is enough to prove that \(T\) is continuous on the unit sphere \(S_{1}\). Consider an arbitrary element \(x\in S_{1}\). We first claim that \(T\) is continuous at \(x\), when \(X^{*}\) is equipped with the weak\({}^{*}\)-topology.
If not, then there exist a weak\({}^{*}\)-neighbourhood, say \(V\), of \(T(x)\) and a sequence \(\{y_{n}\}\) in \(X\) such that \(\|x-y_{n}\|\leq\frac{1}{n}\) and \(T(y_{n})\notin V.\) Since \(\lim\limits_{n}\|y_{n}\|=\|x\|=1\), there exists \(m\in\mathbb{N}\) such that \(\|y_{n}\|\neq 0\) for all \(n\geq m\). Thus, without loss of generality, we may assume that the sequence \(\{y_{n}\}\) consists of non-zero terms. By the Banach-Alaoglu theorem, the closed unit ball \(B^{*}\) of \(X^{*}\) is weak\({}^{*}\)-compact and thus the sequence \(\{F_{y_{n}}\}\) in \(B^{*}\) has a convergent subnet, say \(\{F_{y_{\theta(k)}}\}\), converging to \(f\in B^{*}\). Let \(c\) be a bound
of the sequence \(\{\frac{1}{\|y_{n}\|}\}\). We claim that \(f=F_{x}\). To see this, consider
\[|f(x)-1| =\left|f(x)-F_{y_{\theta(k)}}\bigg{(}\frac{y_{\theta(k)}}{\|y_{\theta(k)}\|}\bigg{)}\right|\] \[\leq|f(x)-F_{y_{\theta(k)}}(x)|+\bigg{|}F_{y_{\theta(k)}}(x)-F_{y_{\theta(k)}}\bigg{(}\frac{y_{\theta(k)}}{\|y_{\theta(k)}\|}\bigg{)}\bigg{|}\] \[\leq|f(x)-F_{y_{\theta(k)}}(x)|+\frac{1}{\|y_{\theta(k)}\|}\big{\|}x\|y_{\theta(k)}\|-y_{\theta(k)}\big{\|}\] \[\leq|f(x)-F_{y_{\theta(k)}}(x)|+c\,\big{\|}x\|y_{\theta(k)}\|-y_{\theta(k)}\big{\|}.\]
Since \(f\) is a weak\({}^{*}\)-limit of \(\{F_{y_{\theta(k)}}\}\), the subnet \(\{F_{y_{\theta(k)}}(x)\}\) converges to \(f(x)\). Also, the subnet \(\{x\|y_{\theta(k)}\|-y_{\theta(k)}\}\) converges to zero. Thus, \(f(x)=1=\|x\|\), and smoothness of \(X\) gives \(f=F_{x}\), that is, \(F_{x}\) is a weak\({}^{*}\)-cluster point of \(\{F_{y_{n}}\}\). Thus, \(V\) contains some points of the sequence \(\{F_{y_{n}}\}\) which is a contradiction.
Now, let \(\{x_{n}\}\) be a sequence in \(X\) converging to \(x\). Then \(T(x_{n})\to T(x)\) in weak\({}^{*}\)-topology of \(X^{*}\). Thus, we have a sequence \(\{F_{x_{n}}\}\) satisfying \(\underset{n}{\lim}F_{x_{n}}(x)=\|x\|\). By [4, Lemma 4], \(\{F_{x_{n}}\}\) is norm convergent to \(F_{x}\). Hence \(T\) is continuous at \(x\), which completes the proof.
**Theorem 3.3**.: _Let \(X\) be a real Banach space whose norm is Frechet differentiable and \(G\) be a subspace of \(L^{1}(\mu,X)\). For \(f\in\ L^{1}(\mu,X)\setminus\overline{G},\ g_{0}\in\mathcal{P}_{G}(f)\) if and only if_
\[\bigg{|}\int\limits_{Z(f-g_{0})^{c}}F_{f(s)-g_{0}(s)}(g(s))\,ds\bigg{|}\leq \int\limits_{Z(f-g_{0})}\|g(s)\|\,ds \tag{4}\]
_for all \(g\in G\)._
Proof.: Since \(X\) is smooth, the norm being Fr\(\acute{e}\)chet differentiable, the proof of the necessary part follows along the same lines as that of Theorem 3.1.
For the converse, consider an arbitrary element \(g\) of \(G\).
Case(i): If \(\int\limits_{Z(f-g_{0})}\|g(s)\|\,ds=0\), then define \(\phi:S\to X^{*}\) as
\[\phi(s)=\begin{cases}F_{f(s)-g_{0}(s)}&\text{if }s\in Z(f-g_{0})^{c},\\ 0&\text{if }s\in Z(f-g_{0}).\end{cases}\]
We first claim that \(\phi\in L^{\infty}(\mu,X^{*})\). Since norm of the support map is one and \(f\neq g_{0}\), it is sufficient to show that \(\phi\) is strongly measurable. Since \(f-g_{0}\) is strongly measurable, there exists a sequence \(\{\psi_{n}\}\) of simple measurable functions such that \(\psi_{n}(s)\to f(s)-g_{0}(s)\) for a.e. "\(s\)". Let \(A_{n}=\{s\in S:\psi_{n}(s)\neq 0\}\) and consider a sequence \(\{\phi_{n}\}\) of simple measurable functions defined as
\[\phi_{n}(s)=\begin{cases}F_{\psi_{n}(s)}&\text{if }s\in Z(f-g_{0})^{c}\cap A_{n}, \\ 0,&elsewhere.\end{cases}\]
Let \(s\in S\) for which \(\underset{n}{\lim}\psi_{n}(s)=f(s)-g_{0}(s)\). If \(s\in Z(f-g_{0})^{c}\), then there exists \(n_{0}\in\mathbb{N}\) such that \(s\in A_{n}\) for all \(n\geq n_{0}\). Thus, by Proposition 3.2, the sequence \(\{F_{\psi_{n}(s)}\}_{n\geq n_{0}}\) converges to \(F_{f(s)-g_{0}(s)}\) and hence the sequence \(\{\phi_{n}(s)\}_{n\geq n_{0}}\) converges to \(\phi(s)\). If \(s\in Z(f-g_{0})\), then \(\phi_{n}(s)=0=\phi(s)\) for all \(n\). Thus, in both the cases, the function \(\phi\) is a.e. limit of a sequence of simple measurable functions and hence \(\phi\) is strongly measurable function. Now, by (4), we have \(\int_{S}\phi(s)(g(s))\,ds=0\). Thus,
\[\|f-g_{0}\|_{1} =\int_{S}\|f(s)-g_{0}(s)\|\,ds\] \[=\int_{S}\phi(s)(f(s)-g_{0}(s))\,ds\] \[\leq\int_{S}|\phi(s)(f(s)-g_{0}(s)-g(s))|\,ds\] \[\leq\int_{S}\|\phi(s)\|\|f(s)-g_{0}(s)-g(s)\|\,ds\] \[\leq\|\phi\|_{L^{\infty}(\mu,X^{*})}\int_{S}\|f(s)-g_{0}(s)-g(s) \|\,ds\] \[=\|f-g_{0}-g\|_{1}.\]
Case(ii): If \(\int\limits_{Z(f-g_{0})}\|g(t)\|\,dt\neq 0\), set \(c=\frac{-\int\limits_{Z(f-g_{0})^{c}}F_{f(t)-g_{0}(t)}(g(t))\,dt}{\int\limits_{Z(f-g_{0})}\|g(t)\|\,dt}\) so that, by (4), \(|c|\leq 1\). Define \(\phi:S\to X^{*}\) as
\[\phi(s)=\begin{cases}F_{f(s)-g_{0}(s)}&\text{if }s\in Z(f-g_{0})^{c},\\ cF_{g(s)}&\text{if }s\in Z(f-g_{0})\cap Z(g)^{c},\\ 0&\text{if }s\in Z(f-g_{0})\cap Z(g).\end{cases}\]
As done earlier, in order to prove that \(\phi\in L^{\infty}(\mu,X^{*})\), it is sufficient to show that \(\phi\) is strongly measurable. Consider a sequence \(\{g_{n}\}\) of simple measurable functions such that \(g_{n}(s)\to g(s)\) for a.e. "\(s\)". For \(B_{n}=\{s\in S:g_{n}(s)\neq 0\}\), define
\(\phi_{n}:S\to X^{*}\) as
\[\phi_{n}(s)=\begin{cases}F_{\psi_{n}(s)}&\text{if }s\in Z(f-g_{0})^{c}\cap A_{ n},\\ cF_{g_{n}(s)}&\text{if }s\in Z(f-g_{0})\cap Z(g)^{c}\cap B_{n},\\ 0,&elsewhere.\end{cases}\]
Clearly, \(\{\phi_{n}\}\) is a sequence of simple measurable functions. Consider an \(s\in\ S\) such that \(\underset{n}{\lim}\psi_{n}(s)=f(s)-g_{0}(s)\) and \(\underset{n}{\lim}g_{n}(s)=g(s)\), we claim that \(\underset{n}{\lim}\phi_{n}(s)=\phi(s)\). If \(s\in Z(f-g_{0})^{c}\), then \(\underset{n}{\lim}\phi_{n}(s)=\phi(s)\), as done in Case(i). If
\(s\in Z(f-g_{0})\cap Z(g)^{c}\), then \(g(s)\neq 0\), so that \(s\in B_{n}\ \forall\ n\geq n_{0}\), for some \(n_{0}\in\ \mathbb{N}\). Thus for \(n\geq n_{0}\), \(\phi_{n}(s)=cF_{g_{n}(s)}\). Again, by Proposition 3.2, the sequence \(\{F_{g_{n}(s)}\}_{n\geq n_{0}}\) converges to \(F_{g(s)}\) and hence the sequence \(\{\phi_{n}(s)\}_{n\geq n_{0}}\) converges to \(\phi(s)\). Lastly, if \(s\in Z(f-g_{0})\cap Z(g)\), then \(\phi_{n}(s)=0\) for all \(n\) and we are done.
Finally, once again, observe that \(\int_{S}\phi(s)(g(s))\,ds=0\) and as done earlier in Case(i) we have \(\|f-g_{0}\|_{1}\leq\|f-g_{0}-g\|_{1}\).
Hence, \(g_{0}\in\mathcal{P}_{G}(f)\), which completes the proof.
Kripke and Rivlin [8, Corollary 1.4] and Singer [12, Theorem I.1.7] established the following:
**Proposition 3.4**.: _For \(f,g\in L^{1}(\mu)\), \(f\perp_{BJ}g\) if and only if_
\[\bigg{|}\int\limits_{Z(f)^{c}}g(s)\overline{sign}(f(s))\,ds\bigg{|}\leq\int \limits_{Z(f)}|g(s)|\,ds.\]
As a direct application of Theorem 3.1 and Theorem 3.3, we provide its analogue for vector-valued integrable functions.
**Corollary 3.5**.: _Let \(X\) be a Banach space and \(f,g\in L^{1}(\mu,X)\). Then \(f\perp_{BJ}g\) in \(L^{1}(\mu,X)\) if and only if_
\[\bigg{|}\int\limits_{Z(f)^{c}}Re(F_{f(s)}(\alpha g(s)))\,ds\bigg{|}\leq\int \limits_{Z(f)}|\alpha|\|g(s)\|\,ds,\ \forall\ \alpha\in\mathbb{C},\]
_when \(X\) is a complex smooth Banach space, or_
\[\bigg{|}\int\limits_{Z(f)^{c}}F_{f(s)}(g(s))\,ds\bigg{|}\leq\int\limits_{Z(f)} \|g(s)\|\,ds,\]
_when \(X\) is a real Banach space whose norm is Frechet differentiable._
We next move on to investigate the elements of best approximation for vector-valued \(p\)-integrable functions, \(1<p<\infty\). For this, we use a result of Leonard [9, Theorem 3.1], wherein he proved that for a real Banach space \(X\), the space \(L^{p}(\mu,X)\) is smooth if and only if \(X\) is smooth. This result can be proved for complex Banach spaces along similar lines.
**Theorem 3.6**.: _Let \(X\) be a Banach space whose norm is Frechet differentiable and let \(G\) be a subspace of \(L^{p}(\mu,X)\), where \(1<p<\infty\). For \(f\in L^{p}(\mu,X)\setminus\overline{G},\ g_{0}\in\mathcal{P}_{G}(f)\) if and only if_
\[\int\limits_{Z(f-g_{0})^{c}}\|f(s)-g_{0}(s)\|^{p-1}F_{f(s)-g_{0}(s)}(g(s))\, ds=0,\ \text{for all}\ \ g\in G.\]
Proof.: Since the space \(L^{p}(\mu,X)\) is smooth, \(g_{0}\in\mathcal{P}_{G}(f)\) if and only if \(F_{f-g_{0}}(g)=0\) for every \(g\in G\)[12, Corollary I.1.4]. So, our primary goal is to determine the
support map at \(f-g_{0}\). For this, let \(h\in\ L^{p}(\mu,X)\) be an arbitrary element. Define a map \(\phi_{h}:S\to\mathbb{F}\) as
\[\phi_{h}(s)=\begin{cases}\frac{\|f(s)-g_{0}(s)\|^{p-1}}{\|f-g_{0}\|^{p-1}}F_{f(s )-g_{0}(s)}(h(s))&\text{if }f(s)-g_{0}(s)\neq 0\\ 0&\text{if }f(s)-g_{0}(s)=0.\end{cases}\]
We first claim that \(\phi_{h}\) is measurable. Let \(\{\psi_{n}\}\) and \(\{h_{n}\}\) be the sequences of simple measurable functions such that \(\underset{n}{\lim}\psi_{n}(s)=f(s)-g_{0}(s)\) and \(\underset{n}{\lim}h_{n}(s)=h(s)\) for a.e. "\(s\)". Set \(A_{n}=\{s:\psi_{n}(s)\neq 0\}\) and define a sequence \(\phi_{n}:S\to\mathbb{F}\) as
\[\phi_{n}(s)=\begin{cases}\frac{\|\psi_{n}(s)\|^{p-1}}{\|f-g_{0}\|^{p-1}}F_{ \psi_{n}(s)}(h_{n}(s))&,s\in Z(f-g_{0})^{c}\cap A_{n},\\ 0&,\text{otherwise}.\end{cases}\]
For \(s\in S\) such that \(\underset{n}{\lim}\psi_{n}(s)=f(s)-g_{0}(s)\) and \(\underset{n}{\lim}h_{n}(s)=h(s)\), we claim that \(\underset{n}{\lim}\phi_{n}(s)=\phi_{h}(s)\). If \(s\in\ Z(f-g_{0})^{c}\), then there exists \(n_{0}\in\mathbb{N}\) such that \(s\in A_{n}\) for all \(n\geq n_{0}.\) Hence, by Proposition 3.2 and using the fact that \(p>1\), the sequence \(\{\phi_{n}(s)\}_{n\geq n_{0}}\) converges to \(\phi_{h}(s)\). If \(s\in\ Z(f-g_{0})\), then \(|\phi_{n}(s)|\leq\frac{\|\psi_{n}(s)\|^{p-1}}{\|f-g_{0}\|^{p-1}}\|h_{n}(s)\|\ \forall\ n\in\mathbb{N}\), since \(\|F_{\psi_{n}(s)}\|=1\) for \(s\in A_{n}\). Now, \(p>1\) implies that \(\lim\phi_{n}(s)=\phi_{h}(s)=0\). Thus, \(\phi_{h}\) is the a.e. pointwise limit of a sequence of simple measurable functions and by redefining \(\phi_{h}\), if required, we derive that \(\phi_{h}\) is a measurable function.
Now define a map \(T:L^{p}(\mu,X)\to\mathbb{F}\) as
\[T(h)=\int\limits_{S}\phi_{h}(s)\,ds.\]
To see that \(T\) is well defined consider \(h\in\ L^{p}(\mu,X)\), then
\[|T(h)| \leq\frac{1}{\|f-g_{0}\|^{p-1}}\int\limits_{Z(f-g_{0})^{c}}\|f(s)- g_{0}(s)\|^{p-1}\|h(s)\|\,ds\] \[\leq\frac{1}{\|f-g_{0}\|^{p-1}}\bigg{(}\int\limits_{S}\|f(s)-g_{0 }(s)\|^{p}\,ds\bigg{)}^{\frac{1}{q}}\bigg{(}\int\limits_{S}\|h(s)\|^{p}\,ds \bigg{)}^{\frac{1}{p}}\] \[=\frac{1}{\|f-g_{0}\|^{p-1}}\|f-g_{0}\|^{\frac{p}{q}}\|h\|\] \[=\|h\|\]
where we have used the fact that the map \(s\mapsto\|f(s)-g_{0}(s)\|^{p-1}\) is in \(L^{q}(\mu)\), \(\frac{1}{p}+\frac{1}{q}=1\). Thus, \(T\in\ L^{p}(\mu,X)^{*}\) with \(\|T\|\leq 1\). It is easy to verify that \(T(f-g_{0})=\|f-g_{0}\|\) and hence \(\|T\|=1\). Since \(L^{p}(\mu,X)\) is smooth, \(T=F_{f-g_{0}}\) and this completes the proof.
From [12, Theorem I.1.11], it is known that for \(f,g\in L^{p}(\mu)\), \(1<p<\infty\), \(f\perp_{BJ}g\) if and only if
\[\int\limits_{S}g(s)|f(s)|^{p-1}\overline{\operatorname{sign}}(f(s))\,ds=0. \tag{5}\]
Using Theorem 3.6, one can easily obtain its analogue in \(L^{p}(\mu,X)\).
**Corollary 3.7**.: _Let \(X\) be a Banach space whose norm is Frechet differentiable and let \(\mu\) be a positive complete measure space. For \(f,g\in L^{p}(\mu,X)\), \(1<p<\infty\), \(f\perp_{BJ}g\) in \(L^{p}(\mu,X)\) if and only if_
\[\int\limits_{Z(f)^{c}}\|f(s)\|^{p-1}F_{f(s)}(g(s))\,ds=0.\]
As a consequence, we next deduce an alternate proof of a result of Light [10, Corollary 2], which he established using a different technique.
**Theorem 3.8**.: _Let \(f\in L^{p}(\mu,X)\), \((1<p<\infty)\) and let \(Y\) be a closed subspace of \(X\), where \(X\) is a Banach space whose norm is Frechet differentiable and \(\mu\) is a finite complete positive measure. Then \(f\perp_{BJ}L^{p}(\mu,Y)\) if and only if \(f(s)\perp_{BJ}Y\) for a.e."\(s\)"._
Proof.: First suppose that \(f\perp_{BJ}L^{p}(\mu,Y)\). Suppose, if possible, that \(f(s)\perp_{BJ}Y\) does not hold for a.e."\(s\)". Then the set \(K=\{s\in S:f(s)\not\perp_{BJ}Y\}\) has non-zero measure. Observe that \(K=\underset{y\in Y}{\cup}K_{y}\), where \(K_{y}=\{s\in S:f(s)\not\perp_{BJ}y\}\). Let \(y\in Y\) be such that \(K_{y}\neq\emptyset\). We claim that \(K_{y}\) is \(\mu\)-measurable. Define a function \(\phi:S\to\mathbb{F}\) as
\[\phi(s)=\begin{cases}\|f(s)\|F_{f(s)}(y)&\text{if }f(s)\neq 0\\ 0&\text{if }f(s)=0\end{cases}\]
Then \(\phi\) is \(\mu\)-measurable. To see this, using the strong measurability of \(f\), there exists a sequence \(\{f_{n}\}\) of simple measurable functions such that \(f_{n}(s)\to f(s)\) for a.e."\(s\)". For each \(n\), the map \(\phi_{n}:S\to\mathbb{F}\) defined by
\[\phi_{n}(s)=\begin{cases}\|f_{n}(s)\|F_{f_{n}(s)}(y)&\text{if }f_{n}(s)\neq 0\\ 0&\text{if }f_{n}(s)=0\end{cases}\]
is a simple measurable function. Consider \(s\in S\) for which \(\underset{n\to\infty}{\lim}f_{n}(s)=f(s)\).
If \(f(s)\neq 0\), since the norm is Fr\(\acute{e}\)chet differentiable, by Proposition 3.2 it is easy to see \(\underset{n\to\infty}{\lim}F_{f_{n}(s)}=F_{f(s)}\) (take a tail of the sequence \(\{f_{n}\}\), if required) in the operator norm topology. Thus, \(\underset{n\to\infty}{\lim}F_{f_{n}(s)}(y)=F_{f(s)}(y)\) which gives \(\underset{n\to\infty}{\lim}\phi_{n}(s)=\phi(s)\).
If \(f(s)=0\), then using the fact that \(\|F_{f_{n}(s)}\|=1\), we have \(|\phi_{n}(t)|\leq\|f_{n}(t)\|\) for all \(t\in S\) and \(n\in\mathbb{N}\). This gives that \(\underset{n\to\infty}{\lim}\phi_{n}(s)=0\). Thus, in both the cases, the
function \(\phi\) is the a.e. limit of a sequence of simple measurable functions and hence \(\phi\) is measurable, since the measure is complete. By the James criterion and the smoothness of \(X\), \(K_{y}=\{s\in S:F_{f(s)}(y)\neq 0\}\). Further, using the fact that \(f(s)\neq 0\) for all \(s\in K_{y}\), we observe that \(K_{y}=\phi^{-1}(\mathbb{F}\setminus\{0\})\) and hence is measurable.
**Case(i): \(X\) is a real Banach space.** Fix a \(y\in Y\) such that \(\mu(K_{y})\neq 0\) and write \(K_{y}=K_{y}^{+}\cup K_{y}^{-}\) where \(K_{y}^{+}=\{s\in S:F_{f(s)}(y)>0\}\) and \(K_{y}^{-}=\{s\in S:F_{f(s)}(y)<0\}\). Again, observe that both \(K_{y}^{+}\) and \(K_{y}^{-}\) are measurable as \(K_{y}^{+}=\phi^{-1}(0,\infty)\) and \(K_{y}^{-}=\phi^{-1}(-\infty,0)\). Without loss of generality, assume that \(\mu(K_{y}^{+})\neq 0\) and define a map \(g:S\to Y\) as \(g(s)=y\chi_{K_{y}^{+}}(s)\), where \(\chi_{K_{y}^{+}}\) denotes the characteristic function of \(K_{y}^{+}\). Then \(g\in L^{p}(\mu,Y)\), as \(\mu\) is a finite measure. Also
\[\int\limits_{Z(f)^{c}}\|f(s)\|^{p-1}F_{f(s)}(g(s))\,ds=\int\limits_{K_{y}^{+} }\|f(s)\|^{p-1}F_{f(s)}(y)\,ds\neq 0\]
which, by Corollary 3.7, is a contradiction to the fact that \(f\perp_{BJ}g\).
**Case(ii): \(X\) is a complex Banach space.** Fix a \(y\in\ Y\) such that \(\mu(K_{y})\neq 0\). Since \(F_{f(s)}(y)=(ReF_{f(s)})(y)-i(ReF_{f(s)})(iy)\), therefore,
\[K_{y} =\{s\in S:(ReF_{f(s)})(y)-i(ReF_{f(s)})(iy)\neq 0\}\] \[=K_{1}\cup K_{2}\cup K_{3}\cup K_{4}\]
where \(K_{1}=\{s\in S:(ReF_{f(s)})(y)>0\},\ K_{2}=\{s\in S:(ReF_{f(s)})(y)<0\}\), \(K_{3}=\{s\in S:(ReF_{f(s)})(iy)>0\}\) and \(K_{4}=\{s\in S:(ReF_{f(s)})(iy)<0\}\). First, observe that \(K_{1}\) is a measurable set. For this, define \(\psi:S\to\mathbb{R}\) as
\[\psi(s)=\begin{cases}\|f(s)\|(ReF_{f(s)})(y)&\text{if }f(s)\neq 0\\ 0&\text{if }f(s)=0.\end{cases}\]
As done earlier, the function \(\psi\) is the a.e. limit of a sequence of simple measurable functions and hence \(\psi\) is measurable. Thus, \(K_{1}=\psi^{-1}((0,\infty))\) is a measurable set. Similarly, one can verify that each \(K_{i},\ i\in\{2,3,4\}\) is a measurable set. Since \(\mu(K_{y})\neq 0\), without loss of generality, assume \(\mu(K_{1})\neq 0\); then, proceeding in the same manner as in Case(i), we obtain the desired result.
Conversely, if \(f(s)\perp_{BJ}Y\) for a.e."\(s\)", then \(F_{f(s)}(y)=0\) for a.e. "\(s\)" and for each \(y\in Y\). Therefore \(\int_{S}\|f(s)\|^{p-1}F_{f(s)}(g(s))\,ds=0\), for every \(g\in L^{p}(\mu,Y)\) and by Corollary 3.7, \(f\perp_{BJ}L^{p}(\mu,Y)\).
## 4. Birkhoff-James orthogonality and tensor product
With all the ingredients prepared, we are now ready to discuss BJ-orthogonality in the tensor product spaces \(L^{p}(\mu)\otimes^{\Delta_{p}}X\), \(1\leq p<\infty\).
**Theorem 4.1**.: _Let \(X\) be a Banach space whose norm is Frechet differentiable and \(1<p<\infty\). Then \(f\otimes x\perp_{BJ}g\otimes y\) in \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) if and only if either \(f\perp_{BJ}g\) in \(L^{p}(\mu)\) or \(x\perp_{BJ}y\) in \(X\)._
Proof.: Let \(f\otimes x\perp_{BJ}g\otimes y\). We assume \(x\) to be non-zero. Since \(L^{p}(\mu)\otimes^{\Delta_{p}}X\) is isometrically isomorphic to \(L^{p}(\mu,X)\), we have that \(f_{x}\perp_{BJ}g_{y}\) in \(L^{p}(\mu,X)\) where \(f_{x}\) and \(g_{y}\) correspond to \(f\otimes x\) and \(g\otimes y\), respectively. Thus, by Corollary 3.7, we have
\[\int\limits_{Z(f_{x})^{c}}\|f_{x}(s)\|^{p-1}F_{f_{x}(s)}(g_{y}(s))\,ds=0,\]
which further implies
\[\int\limits_{Z(f_{x})^{c}}|f(s)|^{p-1}\|x\|^{p-1}F_{f(s)x}(g(s)y)\,ds=0.\]
Since \(\overline{\frac{f(s)}{|f(s)|}}F_{x}(f(s)x)=\|f(s)x\|\) for \(0\neq f(s)\) and the space \(X\) is smooth, we have \(F_{f(s)x}=\overline{\frac{f(s)}{|f(s)|}}F_{x}\) for \(0\neq f(s)\). Observing that \(Z(f_{x})=Z(f)\), the above equation becomes
\[(F_{x}(y)\|x\|^{p-1})\int\limits_{Z(f)^{c}}|f(s)|^{p-1}g(s)\overline{\text{ sign}(f(s))}\,ds=0.\]
Thus, either \(\int\limits_{Z(f)^{c}}|f(s)|^{p-1}g(s)\overline{\text{sign}(f(s))}\,ds=0\) or \(F_{x}(y)\|x\|^{p-1}=0\). Since \(X\) is smooth, by Equation (5), either \(f\perp_{BJ}g\) or \(x\perp_{BJ}y\). Converse follows from [11, Theorem 3.1].
It is interesting to note that in Theorem 4.1 if we take \(X\) to be a complex Banach space and \(p=1\), the conclusion may not hold as seen in the following example.
**Example 4.2**.: Consider the measure space \((\mathbb{N},P(\mathbb{N}),\mu)\), where \(\mu\) is the counting measure, \(P(\mathbb{N})\) denotes the power set of \(\mathbb{N}\), and let \(X=\ell^{2}(\mathbb{C}).\) Take \(A=\{1,2,3\},B=\{2,3,5\}\) and let \(x=(i,-i,0,0,.....)\), \(y=(i,0,-i,0,0,...)\in X.\) Now,
\[\int\limits_{Z(\chi_{A})^{c}}\chi_{B}(s)\overline{\text{sign}( \chi_{A}(s))}\,ds =\mu(A\cap B)\] \[>\mu(A^{c}\cap B)\] \[=\int\limits_{Z(\chi_{A})}|\chi_{B}(s)|\,ds.\]
Thus, by Proposition 3.4, \(\chi_{A}\not\perp_{BJ}\chi_{B}\). Also, \(x\not\perp_{BJ}y\) since BJ-orthogonality coincides with the usual orthogonality in Hilbert spaces. Now, we claim that \(\chi_{A}\otimes x\perp_{BJ}\chi_{B}\otimes y\) in \(L^{1}(\mu)\otimes^{\gamma}X\). As done earlier, it is sufficient to prove that
\(h_{1}\perp_{BJ}h_{2}\) in \(L^{1}(\mu,X)\), where \(h_{1},\ h_{2}\in L^{1}(\mu,X)\) correspond to \(\chi_{A}\otimes x\) and \(\chi_{B}\otimes y\), respectively. To see this, for any scalar \(\alpha\in\mathbb{C}\), we have
\[\left|\int\limits_{Z(h_{1})^{c}}Re(F_{h_{1}(s)}(\alpha h_{2}(s))) \,ds\right| =\left|\int\limits_{Z(h_{1})^{c}}Re\bigg{(}\left\langle\alpha h_{2 }(s),\frac{h_{1}(s)}{\|h_{1}(s)\|}\right\rangle_{X}\bigg{)}\,ds\right|\] \[=\left|\int\limits_{Z(\chi_{A})^{c}}Re\bigg{(}\left\langle\alpha \chi_{B}(s)y,\frac{\chi_{A}(s)x}{|\chi_{A}(s)|\|x\|}\right\rangle_{X}\bigg{)} \,ds\right|\] \[=\left|\int\limits_{A\cap B}\frac{Re(\alpha(\langle y,\ x \rangle_{X}))}{\|x\|_{X}}\,ds\right|\] \[=\frac{|Re(\alpha\ \mu(A\cap B))|}{\sqrt{2}}\] \[=\frac{|Re(2\alpha)|}{\sqrt{2}}.\]
On the other hand,
\[\int\limits_{Z(h_{1})}\|\alpha h_{2}(s)\|\,ds =\int\limits_{Z(\chi_{A})}\|\alpha\chi_{B}(s)y\|_{X}\,ds\] \[=\int\limits_{A^{c}\cap B}|\alpha|\|y\|_{X}\,ds\] \[=\sqrt{2}|\alpha|\mu(A^{c}\cap B)\] \[=\sqrt{2}|\alpha|\] \[\geq\bigg{|}\int\limits_{Z(h_{1})^{c}}Re(F_{h_{1}(s)}(\alpha h_{2 }(s)))\,ds\bigg{|}\]
and hence by Corollary 3.5, \(h_{1}\perp_{BJ}h_{2}\). This proves the claim.
Lastly, we present an example to show that \(f_{1}\otimes f_{2}\perp_{BJ}g_{1}\otimes g_{2}\) in \(L^{1}(\mu)\otimes^{\gamma}L^{1}(\nu)\) need not imply either \(f_{1}\perp_{BJ}g_{1}\) or \(f_{2}\perp_{BJ}g_{2}\).
**Example 4.3**.: Consider the measure spaces \((\mathbb{N},P(\mathbb{N}),\mu)\) and \((\mathbb{R},\mathcal{M}(\mathbb{R}),\nu)\), where \(\mu\) is the counting measure, \(\nu\) is the Lebesgue measure, \(P(\mathbb{N})\) denotes the power set of \(\mathbb{N}\) and \(\mathcal{M}(\mathbb{R})\) denotes the algebra of Lebesgue measurable subsets of \(\mathbb{R}\). Take \(A=\{1,2,3\},B=\{2,3,5\}\) and \(C=[-1,2],D=[-2,1]\). As done in Example 4.2, since \(\mu(A\cap B)>\mu(A^{c}\cap B)\) and \(\nu(C\cap D)>\nu(C^{c}\cap D)\), therefore, by Proposition 3.4 neither \(\chi_{A}\perp_{BJ}\chi_{B}\) nor \(\chi_{C}\perp_{BJ}\chi_{D}\). Now, we claim that \(\chi_{A}\otimes\chi_{C}\perp_{BJ}\chi_{B}\otimes\chi_{D}\) in \(L^{1}(\mu)\otimes^{\gamma}L^{1}(\nu)\). As done earlier, it is sufficient to prove
that \(\chi_{A}\chi_{C}\perp_{BJ}\chi_{B}\chi_{D}\) in \(L^{1}(\mu\times\nu).\) To see this, consider
\[\bigg{|}\int\limits_{Z(\chi_{A}\chi_{C})^{c}}\chi_{B}\chi_{D}\overline{\operatorname {sign}}(\chi_{A}\chi_{C})\,d(\mu\times\nu)\bigg{|}\]
\[=\bigg{|}\bigg{(}\int\limits_{Z(\chi_{A})^{c}}\chi_{B}(s)\overline{ \operatorname{sign}\chi_{A}(s)}\,ds\bigg{)}\bigg{(}\int\limits_{Z(\chi_{C})^{c }}\chi_{D}(t)\overline{\operatorname{sign}\chi_{C}(t)}\,dt\bigg{)}\bigg{|}\]
\[=\mu(A\cap B)\nu(C\cap D)\]
\[=4.\]
On the other hand
\[\int\limits_{Z(\chi_{A}\chi_{C})}|\chi_{B}\chi_{D}|\,d(\mu\times\nu)\]
\[=\int\limits_{C^{c}}\int\limits_{A}|\chi_{B}(s)\chi_{D}(t)|\,ds\,dt+\int \limits_{C}\int\limits_{A^{c}}|\chi_{B}(s)\chi_{D}(t)|\,ds\,dt+\int\limits_ {C^{c}}\int\limits_{A^{c}}|\chi_{B}(s)\chi_{D}(t)|\,ds\,dt\]
\[=\mu(A\cap B)\nu(C^{c}\cap D)+\mu(A^{c}\cap B)\nu(C\cap D)+\mu(A^{c}\cap B)\nu (C^{c}\cap D)=5.\]
Thus, by Proposition 3.4, \(\chi_{A}\chi_{C}\perp_{BJ}\chi_{B}\chi_{D},\) which proves the claim.
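For readers who want a quick numerical confirmation of the two quantities evaluated in Example 4.3, the short Python check below reproduces the values 4 and 5; this is purely illustrative arithmetic added here and is not part of the original argument.

```python
# Sanity check of Example 4.3: counting measure on N, Lebesgue measure on R.
A, B = {1, 2, 3}, {2, 3, 5}
mu_A_B, mu_Ac_B = len(A & B), len(B - A)        # mu(A ∩ B) = 2, mu(A^c ∩ B) = 1
nu_C_D, nu_Cc_D = 2.0, 1.0                      # [-1,2] ∩ [-2,1] = [-1,1]; [-2,1] \ [-1,2] = [-2,-1)

lhs = mu_A_B * nu_C_D                                             # = 4
rhs = mu_A_B * nu_Cc_D + mu_Ac_B * nu_C_D + mu_Ac_B * nu_Cc_D     # = 5
assert (lhs, rhs) == (4.0, 5.0)                                   # matches the values in the text
```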
|
2304.12488 | Estimating ensemble likelihoods for the Sentinel-1 based Global Flood
Monitoring product of the Copernicus Emergency Management Service | The Global Flood Monitoring (GFM) system of the Copernicus Emergency
Management Service (CEMS) addresses the challenges and impacts that are caused
by flooding. The GFM system provides global, near-real time flood extent masks
for each newly acquired Sentinel-1 Interferometric Wide Swath Synthetic
Aperture Radar (SAR) image, as well as flood information from the whole
Sentinel-1 archive from 2015 on. The GFM flood extent is an ensemble product
based on a combination of three independently developed flood mapping
algorithms that individually derive the flood information from Sentinel-1 data.
Each flood algorithm also provides classification uncertainty information that
is aggregated into the GFM ensemble likelihood product as the mean of the
individual classification likelihoods. As the flood detection algorithms derive
uncertainty information with different methods, the value range of the three
input likelihoods must be harmonized to a range from low [0] to high [100]
flood likelihood. The ensemble likelihood is evaluated on two test sites in
Myanmar and Somalia, showcasing the performance during an actual flood event
and an area with challenging conditions for SAR-based flood detection. The
Myanmar use case demonstrates the robustness if flood detections in the
ensemble step disagree and how that information is communicated to the
end-user. The Somalia use case demonstrates a setting where misclassifications
are likely, how the ensemble process mitigates false detections and how the
flood likelihoods can be interpreted to use such results with adequate caution. | Christian Krullikowski, Candace Chow, Marc Wieland, Sandro Martinis, Bernhard Bauer-Marschallinger, Florian Roth, Patrick Matgen, Marco Chini, Renaud Hostache, Yu Li, Peter Salamon | 2023-04-24T23:01:50Z | http://arxiv.org/abs/2304.12488v1 | Estimating ensemble likelihoods for the Sentinel-1 based Global Flood Monitoring product of the Copernicus Emergency Management Service
###### Abstract
The Global Flood Monitoring (GFM) system of the Copernicus Emergency Management Service (CEMS) addresses the challenges and impacts that are caused by flooding. The GFM system provides global, near-real time flood extent masks for each newly acquired Sentinel-1 Interferometric Wide Swath Synthetic Aperture Radar (SAR) image, as well as flood information from the whole Sentinel-1 archive from 2015 on. The GFM flood extent is an ensemble product based on a combination of three independently developed flood mapping algorithms that individually derive the flood information from Sentinel-1 data. Each flood algorithm also provides classification uncertainty information that is aggregated into the GFM ensemble likelihood product as the mean of the individual classification likelihoods. As the flood detection algorithms derive uncertainty information with different methods, the value range of the three input likelihoods must be harmonized to a range from low [0] to high [100] flood likelihood. The ensemble likelihood is evaluated on two test sites in Myanmar and Somalia, showcasing the performance during an actual flood event and an area with challenging conditions for SAR-based flood detection. The Myanmar use case demonstrates the robustness if flood detections in the ensemble step disagree and how that information is communicated to the end-user. The Somalia use case demonstrates a setting where misclassifications are likely, how the ensemble process mitigates false detections and how the flood likelihoods can be interpreted to use such results with adequate caution.
CEMS, ensemble classification, Earth Observation, flood monitoring, likelihoods, radar, Sentinel-1, uncertainties.
## I Introduction
Accounting for 44 % of all occurring disasters [1] and causing economic losses of about 651 billion dollars, floods are among the most severe disasters worldwide. Although not the deadliest natural disaster, floods affect the largest number of people worldwide every year. With globally rising temperatures, Dottori et al. (2018) [2] predict an increase in human losses due to flooding by 70 to 83 % and an increase in direct flood damages by 160 to 240 %. Botzen et al. (2019) [3] identify population and economic growth in disaster-prone regions as key causes of this increase. Apart from human losses, floods may cause damages to (critical) infrastructure [4] and may lead to further cascading effects such as the spread of infectious diseases [5].
Mitigating these effects requires coordinated action on multiple levels, including but not limited to the implementation of accurate early warning systems, constant monitoring of disaster-prone regions and well-implemented risk management procedures [6]. Arguably, the monitoring requirement, especially at large scale, is currently best fulfilled through the utilization of Earth Observation data. Grimaldi et al. (2016) [7] present a review of different flood data sources and compare optical with synthetic aperture radar (SAR) sensors. Optical imagery mainly relies on cloud-free and illuminated data, whereas radar remote sensing satellites can operate day and night due to their ability to emit cloud-penetrating microwaves.
Past studies already highlighted the potential of a synergetic use of optical and SAR data in flood mapping [8] and [9]. However, most studies focus on a single technology, most frequently microwave remote sensing [10], [11], [12] and [13]. A comprehensive overview of advantages and limitations of different methods is found in [14].
It can be noted that the aforementioned studies either focus on specific regions or were not implemented as operational services. Furthermore, a majority of the studies do not provide any information on the flood classification uncertainties. Clement et al. (2018) [10] highlight several sources of uncertainty affecting SAR-based flood extent
mapping, for example the ambiguities related to similar backscatter return over water look-alikes and dry soil, as well as areas with fuzzy backscatter response, e. g. dense vegetation, and areas with higher backscatter return over urban areas. In general, in cases where SAR-based flood mapping may be hampered and the detection may become less confident, the classification necessitates and benefits from the inclusion of a dedicated uncertainty analysis [15]. This complementary output also supports the interpretation and use of SAR-based flood map products, where end-users can be alerted to flood features associated with lower confidence, which should be treated with more caution with respect to risk assessment.
The absence of a fully operational flood service that also returns confidence information culminated in the request of the Copernicus Emergency Management Service (CEMS) to integrate technically mature and scientifically validated flood detection algorithms into the Global Flood Awareness System (GloFAS) 1 and the European Flood Awareness System (EFAS) 2. Instead of utilizing an approach based on a single retrieval algorithm, the Joint Research Centre (JRC) as the contracting authority adopted an ensemble approach, which merges the results of three matured and independently developed flood algorithms.
Footnote 1: [https://www.globalfloods.eu/](https://www.globalfloods.eu/)
Footnote 2: [https://www.efas.eu/](https://www.efas.eu/)
### _Conceptual basis of GFM ensemble_
The Global Flood Monitoring (GFM) product of the CEMS continuously processes and analyzes all incoming Sentinel-1 Ground Range Detected (GRD) Interferometric Wide swath (IW) data, aiming to detect and monitor flood events in nearreal time at global scale.
The GFM product builds on an ensemble approach that combines three mature and independently developed flood detection algorithms provided by the German Aerospace Center (DLR), Luxembourg Institute of Science and Technology (LIST) and Vienna University of Technology (TUW). The flood ensemble is computed pixel-wise and based on a majority voting system, where at least two algorithms must classify a pixel as flooded or non-flooded. Further insight into the flood ensemble algorithm is described by [16].
Besides a pixel-based flood classification, each flood detection algorithm generates classification uncertainty information in the form of likelihoods. The GFM ensemble algorithm then combines the three individual layers of uncertainty information into a single layer termed ensemble likelihood. Although an uncertainty analysis is performed, the term likelihood is used instead of uncertainty; most users have an a-priori understanding of likelihoods, whereas uncertainties describe a negation which may not be understood as intuitively.
The GFM product is composed of different layers supporting the interpretation of flood situations using remote sensing data. Besides the actual flood extent layer, users can also download the likelihood data from GloFAS and EFAS. In addition, a downloadable exclusion layer informs about regions where no flood delineation was possible. These areas correspond to nodata values in both of the flood and likelihood products.
### _Objectives of ensemble flood detection and interpretation of ensemble likelihoods_
Two sets of objectives drive the ensemble-based flood detection and the manner in which ensemble likelihoods are intended to be interpreted and applied by two user communities: 1) integrating results into further processes or studies and 2) utilizing results for decision-making processes.
The first objective addresses the first community, consisting of algorithm developers. Ensemble likelihoods may be used by them to identify subsets of pixels associated with low confidence values as a way to gain insights about opportunities for improving algorithms so that they return more accurate predictions. The individual and combined likelihoods may also serve as a basis for inter-comparing the results obtained by different approaches, thereby potentially improving our understanding of their strengths and weaknesses.
The second community are data (end-)users. They may use ensemble likelihoods to minimize adverse consequences of making potentially costly decisions based on highly uncertain information. Results of this study may thus provide a basis on how the two aforementioned use cases can be used to support decision-making for flood and non-flood events.
In particular, to evaluate the impact of differential decisions made based on flood classifications with and without consideration of ensemble likelihood values, we examine two land cover types (i. e. agricultural lands, built environments) of particularly high economic importance and social consequences. In effect, mean likelihoods serve as a heuristic indication of overall confidence in the flood prediction, based on an average of available flood and respective uncertainty outputs from contributing individual flood algorithms. The values also function as an indicator of the current capacities/confidence of ensemble algorithms to detect water over certain types of land covers and uses.
This study focuses on gaining insights on scenarios that result in regular and over-detections with respect to dominant land covers. Based on the results, benefits and limitations of the ensemble likelihood approach are highlighted and provide a starting point to guide further developments and applications in the two communities.
The application of ensemble likelihoods is evaluated with two use cases exemplifying flood (Myanmar use case) and non-flood (Somalia use case) events, respectively. In particular, the objectives are to accurately delineate flood extent, while minimizing over- and under-detection. Extension of case-based assessments provides useful insights on the generalizability of the flood monitoring algorithm on a global scale with respect to a more comprehensive range of land covers/ uses.
Subsequent sections provide detailed descriptions about how each individual flood algorithm generates uncertainty information (Section 2), the generation and evaluation of ensemble likelihoods (Section 3). Data used to conduct the study is described in Section 4, followed by results (Section 5), discussion (Section 6), conclusions and recommendations (Section 7).
## II Generation of algorithm likelihoods
The GFM flood ensemble likelihood product attributes to each valid pixel a likelihood of being flooded given its recorded Sentinel-1 backscatter value and ancillary data inputs. The term valid refers to pixels that are considered to be potentially flooded and included in the computation. Invalid pixels are excluded through an exclusion mask. This mask excluded areas blocked by radar shadow, regions of no Sentinel-1 SAR sensitivity towards flood dynamics, or areas that are considered non-floodable as they are located too far away from the next drainage [17].
Ensemble likelihoods are defined in the interval [0, 100]. Likelihood values towards 0 represent lowest confidence in the ensemble flood classification, whereas values towards 100 represent highest confidence. Ensemble likelihoods are used to convert the set of ensemble classifications into a single binary flood classification, representing non-flood pixels as 0 and flood pixels as 1, respectively. In this binarization step, a likelihood value of 50 is defined as the threshold value that separates the two classes (i. e., non-flood pixels from the interval [0, 49] and flood pixels from the interval [50, 100]). Confidence about the detection of each respective class increases with likelihood values towards the lower or higher class boundaries (see Fig. 1).
The ensemble likelihood value is computed pixel-wise as the mean of the likelihood values attributed to each valid pixel by the three algorithms. The following sections describe the independent generation of each set of values.
In general, all likelihoods are in reference to a flood classification. If the likelihood is low over a certain pixel or feature, the classification confidence that the pixel or feature is flooded is also low. The specific terms that are further evaluated in the next sections are defined as follows:
* individual likelihood values: refer to pixel-wise likelihood information generated by each of the three flood algorithms (i. e., DLR, LIST, TUW)
* initial mean likelihood values: are computed pixel-wise based on the average of all available individual likelihood information, ideally generated by all three flood algorithms, prior to the application of the ensemble algorithm
* ensemble likelihood values: are updated likelihood values based on the initial mean likelihood after the ensemble algorithm is applied
The ensemble algorithm combines the flood detection and likelihood outputs of the individual flood algorithms. Although a majority voting system is implemented, split situations, i. e., cases of classification disagreement occur where a majority cannot be achieved, e. g. when one out of three algorithms yields a nodata pixel.
Post-processing steps involve the exclusion of sub-areas within a given Sentinel-1 scene that overlap with the reference water and exclusion masks. Clusters with a size less than a defined threshold of flood pixels are assumed to be unlikely flooded and re-labeled as non-flood pixels. This action is termed a blob removal step and eliminates small fragmented patches. Results following these steps are then referred to as ensemble classifications, see [16]. Likelihood values corresponding to formerly flooded but excluded pixels are set to a likelihood value of 0. Likelihood values corresponding to formerly flooded but blob-removed pixels are set to a likelihood value of 49, i. e., expressing the lowest confidence in the non-flood regime.
The ensemble algorithm can be applied based on two approaches: split and consensus. The split approach considers the likelihood values associated with the respective classification of each flood algorithm and favors the classification with the highest confidence. The consensus approach is based on majority voting, which sets all split situations to non-flooded classifications, since only 2/3 flood algorithms generate valid but conflicting pixel-wise classifications. In effect, a flood classification is only returned when there is an agreement. The following subsections describe the individual likelihood layers produced by the individual flood detection algorithms.
### _Computation of DLR fuzzy values_
The flood detection algorithm by DLR is a single scene approach, i. e., the main data input for flood inundation is a single Sentinel-1 observation. The DLR algorithm applies fuzzy logic post-processing to measure and to reduce the uncertainty associated with the water classification, originally described by [18] and [19]. Three cases influence classification uncertainty. In particular, the likelihood of a pixel being classified as water is low:
* if its radar backscatter is close to the automatically derived threshold \(\tau\), separating water and non-water;
* if the slope at that location is high, since steeper surfaces are unlikely to retain water; and
* if that pixel is connected to other neighboring water pixels and the resulting area is relatively small. On the contrary, the uncertainty is low if the pixel is connected to other neighboring water pixels and the resulting area is relatively large.
Fig. 1: Confidence distribution of likelihood values. Likelihood values towards 0 correspond to higher confidence in non-flood classifications, whereas values towards 100 correspond to higher confidence in flood classifications. Low confidence in both classifications is indicated by likelihood values towards 50.
The three cases or parameters, namely backscatter of the normalized radar cross section (NRCS), slope and minimum mapping unit are evaluated separately, resulting in the generation of three fuzzy layers. The concept of the fuzzy logic step is exemplified with the consideration of SAR backscatter values.
Fig. 2 illustrates the application of the fuzzy logic approach to address the first case where SAR backscatter is uncertain. In Fig. 2a, the water/non-water separating threshold \(\tau\) is defined as the upper fuzzy value \(x_{2}\). This value represents the boundary between both classes, where the likelihood of a correct classification is the lowest. The mean backscatter value of the class water \(\mu_{water}\) is associated with a minimum fuzzy value \(x_{1}\). Pal and Rosenfeld (1988) [20] describe the negative S-function that maps numeric to fuzzy values which is also depicted in Fig. 2b.
As the majority of water pixels have backscatter values around the mean backscatter value, the uncertainty of a correct classification of these water pixels is low. The fuzzy logic approach maps high uncertainties to low degrees of membership to a particular class. For instance, high uncertainty of a correct pixel-wise classification to the water class corresponds to a low degree of membership to that class. The pixel is, therefore, assigned a lower fuzzy value. The converse is also true, where a low uncertainty corresponds to a high degree of membership of a given pixel to the water class; it is assigned a high fuzzy value.
The three individual fuzzy membership functions are in the range \([0,1]\). For easier interpretation and lower storage requirements, float values were rescaled to the range [0, 100]. The resulting fuzzy layer is computed as the mean of all three individual fuzzy layers. A defuzzification value of 60 is defined as the threshold to mark the distinction between water and non-water classes.
Pixels classified as water with a fuzzy value of \(\geq\) 60 are treated as water detections of high confidence.
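To make the fuzzy scoring concrete, the following Python sketch reproduces the general idea under stated assumptions: the membership function uses a standard S-function with the crossover at the midpoint between \(x_{1}\) and \(x_{2}\), and all variable names, example values and the equal-weight averaging of the three fuzzy layers are illustrative choices rather than the actual DLR implementation.

```python
import numpy as np

def s_function(x, a, c):
    """Standard S-function (Pal & Rosenfeld, 1988): 0 at a, 0.5 at the midpoint, 1 at c."""
    b = (a + c) / 2.0
    y = np.zeros_like(x, dtype=float)
    y[x >= c] = 1.0
    lo = (x > a) & (x <= b)
    hi = (x > b) & (x < c)
    y[lo] = 2.0 * ((x[lo] - a) / (c - a)) ** 2
    y[hi] = 1.0 - 2.0 * ((x[hi] - c) / (c - a)) ** 2
    return y

def water_fuzzy_backscatter(sigma0, x1, x2):
    """Negative S-function: membership 1 at/below x1 (mean water backscatter), 0 at/above x2 (tau)."""
    return 1.0 - s_function(sigma0, x1, x2)

# Toy pixels: backscatter [dB]; slope and minimum-mapping-unit fuzzy layers assumed given.
sigma0 = np.array([-22.0, -18.0, -15.5, -12.0])
f_backscatter = water_fuzzy_backscatter(sigma0, x1=-22.0, x2=-15.5)
f_slope = np.array([1.0, 0.9, 0.8, 0.2])
f_size = np.array([1.0, 1.0, 0.6, 0.3])

fuzzy = 100.0 * (f_backscatter + f_slope + f_size) / 3.0  # mean of the three layers, scaled to [0, 100]
high_confidence_water = fuzzy >= 60.0                     # defuzzification threshold
```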
### _Computation of LIST probabilities_
The flood detection algorithm by LIST applies a change-detection approach [21], i. e. the flood inundation is performed by detecting backscatter changes between two consecutive Sentinel-1 observations: the most recent SAR scene, \(I_{t0}^{S1}\), and the overlapping SAR scene acquired from the same orbit, called the reference SAR scene, \(I_{t-1}^{S1}\). As it is a change detection algorithm, it aims at detecting and mapping all decreases of backscattering values with respect to a reference one. A change detection approach is adopted because it allows to differentiate floodwater from permanent water bodies and, at the same time, filter out classes having water-like backscattering values such as shadows or smooth surfaces. The floodwater extent for the actual event is described with \(I_{t0}^{S1}\) and the image difference to the pre-event situation is described with \(I_{D}^{S1}=I_{t0}^{S1}-I_{t-1}^{S1}\). The likelihood of floodwater classification is characterized by flood probability.
Both \(I_{t0}^{S1}\) and \(I_{D}^{S1}\) are used for likelihood estimation; pixels that have a high posterior probability of both the water class and the change class are likely to be real flooded pixels. The probability of being flooded for a given pixel (1) is defined as the minimum value between the conditional probability of the water class \(p(W|\sigma^{0})\) and the conditional probability of the changed class \(p(C|\Delta\sigma^{0})\) with regards to the Sentinel-1 backscatter \(\sigma^{0}\):
\[p(F|\sigma^{0},\Delta\sigma^{0})=min(p(W|\sigma^{0}),p(C|\Delta\sigma^{0})) \tag{1}\]
where \(p(\sigma^{0})\) is the marginal distribution of backscatter values in \(I_{t0}^{S1}\) and \(p(\Delta\sigma^{0})\) is the marginal distribution of backscatter difference values in \(I_{D}^{S1}\). In case a pixel is also flooded in the reference image, only \(I_{t0}^{S1}\) is considered for likelihood estimation of flood classification, see (2).
\[p(F|\sigma^{0})=p(W|\sigma^{0}) \tag{2}\]
As the likelihood is in this case calculated only from the backscatter value in \(I_{t0}^{S1}\), falsely high flood probabilities can be caused by permanent water and other water look-alike dark areas; these false alarms in the binary map are removed by comparing the resulting flood map with the previous flood map. To reduce these falsely high probabilities in the current likelihood map, non-flood pixels in the new flood map are assigned a flood probability equal to the minimum value between \(p(W|\sigma^{0})\) and the value in the latest previous likelihood map.
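As a rough, assumption-based illustration of (1), the snippet below derives the two posteriors from hypothetical Gaussian class-conditional densities with equal priors and takes their pixel-wise minimum; none of the numerical parameters, densities or function names are taken from the LIST algorithm.

```python
import numpy as np
from scipy.stats import norm

def posterior(x, pdf_class, pdf_other, prior=0.5):
    """Two-class Bayes posterior p(class | x), here with equal priors by default."""
    num = prior * pdf_class(x)
    return num / (num + (1.0 - prior) * pdf_other(x))

sigma0 = np.array([-20.0, -14.0, -8.0])     # event backscatter [dB]
dsigma0 = np.array([-9.0, -5.0, 0.5])       # change w.r.t. the reference scene [dB]

# Hypothetical class-conditional densities (parameters invented for the example).
p_w = posterior(sigma0, norm(-21, 2).pdf, norm(-11, 3).pdf)    # p(W | sigma0)
p_c = posterior(dsigma0, norm(-8, 2).pdf, norm(0, 2).pdf)      # p(C | delta sigma0)

p_flood = np.minimum(p_w, p_c)              # Eq. (1): flood likelihood per pixel
```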
Fig. 2: Fuzzy logic approach for discriminating between water and non-water classes based on SAR backscatter values, denoted as \(\sigma^{0}\) Sentinel-1 backscatter [dB].
### _Computation of TUW uncertainties_
The flood algorithm by TUW is based on a data cube approach as introduced by [22] and builds upon a-priori probability parameters for flood and non-flood conditions generated from Sentinel-1 time series. Incoming Sentinel-1 scenes that are subject to flood mapping are classified by means of Bayesian inference, which is not only computationally slim and NRT-suitable, but also intrinsically yields likelihood values in terms of posterior probabilities of the class allocation. For each pixel in a new Sentinel-1 backscatter measurement, the probability of belonging to either the flood or the non-flood class is inferred.
Based on the Bayes decision rule, higher ("winning") posterior probabilities define then the class allocation. Additionally, the conditional error \(p(error|\sigma^{0})\) can be defined by the lower posterior probability, see (3),
\[p(error|\sigma^{0})=min[p(F|\sigma^{0}),p(NF|\sigma^{0})] \tag{3}\]
where \(p(F|\sigma^{0})\) describes the probability of the flood class and \(p(NF|\sigma^{0})\) the probability of the non-flood class with respect to the Sentinel-1 backscatter \(\sigma^{0}\).
The conditional error as direct measure for _uncertainty_ enables direct quantification of the lack of confidence with respect to a given decision. Since posterior probabilities sum up to 1, a higher posterior probability for one class results in a lower posterior probability for the other class in the binary classifications. Uncertainty is thus defined between 0.0 and 0.5. A value close to zero represents high confidence, since the probabilities for both classes (flood and non-flood) indicate a clear decision. High conditional errors (i. e. close to 0.5) indicate uncertain decisions, as the new observation is falling into the overlap of the local flood/no-flood distributions and hence no class is much more probable than the other one. In such a situation, the Bayes decision is very uncertain and the classification is not meaningful.
For all pixels of the incoming Sentinel-1 image, the conditional errors \(p(error|\sigma^{0})\) are forwarded to the ensemble algorithm, which represent the pixel-wise uncertainties associated with the flood map of TUW's algorithm. For easier interpretation and lower storage requirements, the uncertainties are scaled to values between 0 and 100.
The TUW flood mapping algorithm features some internal masking of conditions not well represented by the a-priori probability parameters. This includes an internal uncertainty mask based on the statistical Sentinel-1 backscatter model of TUW, which is applied to exclude poorly-based decisions (i. e., decisions with low reliability), defined by an upper limit of 0.2 for the conditional error, reflecting a 4:1 probability that the assigned class is correct.
### _Fusion of likelihoods_
In the context of this study, likelihood values of different origins are fused to a single quantity. Probability and fuzziness can be considered equal in terms of the numerical expression of the likelihood that is represented in the unit interval [0, 1]. However, they have to be differentiated in the manner in which the two measures handle the semantic classes water and non-water [23]. Given the same probability and fuzzy values of for example 0.8, the representation of likelihood is clarified as follows:
A probability of 0.8 represents an 80 % chance of pixel-wise water detection, where the value is determined based on pixel frequencies. The likelihood about the chance of a water or a non-water detection can be maximized as more observations become available and the pixel-wise water detection is built on a broader data base.
A fuzzy value of 0.8 represents a pixel that is 80 % water, describing the degree of membership belonging to that class, based on its properties. Such properties are defined through the uncertainty analysis, e. g. the DLR algorithm attributes a pixel with Sentinel-1 backscatter, slope and size information that declare its membership to the class water. The fuzzy value expresses the degree to which it can be considered to be a (pure) water pixel. The uncertainty about class ambiguity persists even if more observations become available. Maximizing the likelihood is however possible by introducing additional auxiliary datasets that add to the pixel properties.
This study acknowledges the mathematical and ontological complexities that characterize the formulation of the two aforementioned types of likelihood generation. However, in order to return actionable and interpretable information, the GFM likelihood product simplifies the fusion of likelihood values by computing the average value of the three algorithm likelihood outputs. While more advanced approaches have been proposed to bring the two measures of likelihoods together, e. g. by [24], [23] and [25], this approach addresses the need for practicality in crisis information management. This objective is characterized by the need to make time-critical decisions with informative and also more easily interpretable products to support decision-making. Furthermore, the harmonized GFM likelihood product summarizes the likelihood inputs from the three water detection algorithms, thereby minimizing cognitive overload for end-users.
## III Generation of ensemble likelihoods
Combining the likelihood information generated by each flood algorithm requires value harmonization. TABLE I outlines each of the three outputs with respective value ranges. The value ranges indicate the lowest and highest classification confidences as well as the threshold distinguishing flood from non-flood. Since the TUW algorithm outputs uncertainties, a pixel value of 100 represents a maximum uncertainty value that is comparable to a LIST probability or a DLR fuzzy value of 0.
The DLR and LIST flood algorithms produce uncertainties that are numerically similar to likelihoods, with low values indicating low flood classification confidence and vice versa. The uncertainty analysis of TUW produces an inverse value range, where low values indicate high likelihood or high flood classification confidence and vice versa.
By definition, a threshold value of 50 separates the flood and non-flood pixels in the ensemble likelihood layer. Fuzzy values F generated from the DLR algorithm are adapted to this scheme, based on (4).
\[likelihood_{DLR}=\begin{cases}100-1.25\cdot(100-F),&F\geq 60\\ \frac{F}{1.2},&F<60\end{cases} \tag{4}\]
The TUW uncertainties U are inverted, following (5).
\[likelihood_{TUW}=\begin{cases}100-U,&flood_{TUW}=1\\ U,&flood_{TUW}=0\end{cases} \tag{5}\]
Once likelihood values from all three algorithms are represented in the same range, the ensemble likelihood is computed as the mean of the individual likelihood layers, irrespective of nodata values.
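A compact sketch of the harmonization and averaging step is given below. It directly transcribes (4) and (5) and takes a nodata-aware mean; representing nodata as NaN and the variable names are assumptions made for this illustration, not the actual GFM data model.

```python
import numpy as np

def harmonize_dlr(F):
    """Eq. (4): shift the DLR fuzzy flood threshold from 60 to the common value of 50."""
    return np.where(F >= 60.0, 100.0 - 1.25 * (100.0 - F), F / 1.2)

def harmonize_tuw(U, flood_tuw):
    """Eq. (5): invert TUW uncertainties depending on the TUW flood classification."""
    return np.where(flood_tuw == 1, 100.0 - U, U)

def ensemble_likelihood(lik_dlr, lik_list, lik_tuw):
    """Pixel-wise mean of the available (non-NaN) likelihood layers, all in [0, 100]."""
    return np.nanmean(np.stack([lik_dlr, lik_list, lik_tuw]), axis=0)

# Toy pixels: DLR fuzzy values (NaN = nodata), LIST probabilities, TUW uncertainties + classes.
F = np.array([80.0, 55.0, np.nan])
lik_list = np.array([90.0, 40.0, 70.0])
U, flood_tuw = np.array([10.0, 30.0, 15.0]), np.array([1, 0, 1])

ens = ensemble_likelihood(harmonize_dlr(F), lik_list, harmonize_tuw(U, flood_tuw))
```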
The ensemble algorithm computes the result on pixel level and requires two sets of three input layers each for flood and likelihood computations, respectively. If two flood algorithms fail to output data, the required flood and likelihood layers are generated automatically with the same geometry as the input Sentinel-1 scene and filled with zero values stating no flood for the entire scene, accompanied with zero values stating low likelihoods.
A valid flood or a non-flood pixel is always connected to a valid likelihood pixel. If a flood pixel holds a nodata value, the corresponding likelihood pixel also stores a nodata value.
This behavior has implications for the statistical robustness of the ensemble results. For instance, a flood pixel that is based on three valid individual classifications is considered to be statistically more robust compared to a flood pixel that is based on only two valid individual classifications (and one nodata classification). The latter is a so-called split situation that is resolved through a consensus approach, i. e., the ensemble algorithm marks that pixel as not flooded.
Fig. 3 illustrates three different cases (C1, C2, C3) to demonstrate the ensemble classification scheme. In the first case C1, 3 out of 3 of the individual algorithms return a flood classification which is a full consent. In case C2, 2 out of 3 of the individual algorithms return a classification that disagrees with the third algorithm. This is a major consent and the pixel is classified as flood or non-flooded respectively, depending on the majority vote, e. g. [flood, flood, non-flood] or [non-flood, non-flood, flood]. In case C3, one algorithm returns a nodata value and the remaining algorithms disagree on the classification, e. g. [nodata, flood, non-flood]. The ensemble algorithm resolves the split situation through a conservative approach and marks the pixel as non-flooded.
The steps described in this section define initial flood mapping likelihoods that are to be corrected with auxiliary data masking out error-prone regions and excluding areas of no interest, e. g. reference water that is not flooded per definition. If these pixels are to be excluded and were classified as non-flooded, the initial likelihood value remains unchanged. If pixels to be excluded were classified as flooded, the respective likelihood value is changed to the value 49, i. e. the most unconfident likelihood value for the non-flood class.
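The consensus decision logic of this section can be condensed into a few lines. The sketch below covers only the majority vote and the re-assignment of likelihood 49 for flooded pixels removed by the exclusion mask, as stated above; blob removal and the reference water rule are omitted, and all variable names are invented for illustration.

```python
import numpy as np

def ensemble_flood(flood_stack, likelihood, exclude_mask):
    """Consensus vote over three per-algorithm flood layers (1 flood, 0 non-flood, NaN nodata).

    At least two flood votes are required, so split situations fall back to non-flood.
    Pixels voted flooded but covered by the exclusion mask are demoted to non-flood
    with likelihood 49, following the rule stated above."""
    flood_votes = np.sum(flood_stack == 1, axis=0)
    flood = (flood_votes >= 2).astype(int)
    lik = likelihood.copy()
    demoted = (flood == 1) & exclude_mask
    flood[demoted] = 0
    lik[demoted] = 49.0
    return flood, lik

# Toy pixels corresponding to the cases of Fig. 3: full consent, major consent, split situation.
flood_stack = np.array([[1.0, 1.0, np.nan],
                        [1.0, 1.0, 1.0],
                        [1.0, 0.0, 0.0]])
likelihood = np.array([95.0, 60.0, 49.0])
exclude_mask = np.array([False, False, False])
flood, lik = ensemble_flood(flood_stack, likelihood, exclude_mask)   # flood -> [1, 1, 0]
```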
## IV Datasets
This section gives an overview on the used datasets and how they were processed within this study. Two use cases are presented that showcase a flood event in Myanmar and a non-flood situation in the semi-arid climate-zone of Somalia (see Fig. 4). The preprocessing of the Sentinel-1 IW GRDH datasets is described by the overview given in [26].
The individual flood algorithms exploit Sentinel-1 GRD IW data which are shown in Fig. 4 (a) and (c). Sentinel-1 data over Myanmar was acquired on 2019-07-16 11:39:44. Sentinel-1 data over Somalia was acquired on 2019-03-16 02:46:06. The fuzzy logic step of the DLR flood algorithm uses slope information derived from Copernicus DEM data [27]. All input datasets were resampled to a common pixel spacing of 20 x 20 m in the Equi7Grid projection [28].
Land cover information for this study is based on the global Copernicus Land Cover product from 2019 [29] with an original pixel spacing of 100 x 100 m that has been resampled to 20 x 20 m. As for this study, it was decided to focus on selected pre-dominant land cover types that are either of particular socio-economic interest or likely to
be affected by flooding. Fig. 4 (b) and (d) depict the spatial distribution of predominant land cover types for the study areas, followed by the class frequencies in Fig. 4 (e) and (f). An exclusion mask is applied, leaving land cover types that are part of this analysis.
Fig. 3: Sample inputs generated by each of the three individual flood algorithms with (a): available flood and likelihood data per algorithm on pixel level and (b): possible changes in the final classification. The cases C1 and C2 handle full and majority agreement respectively. The case C3 demonstrates a split situation that is resolved through a conservative agreement with a non-flood result.
For the Myanmar use case, the land cover type _agriculture_ dominates the study area with a pixel coverage of approx. 90 % followed by the classes of _built-up_ and _permanent water_, sharing less than 5 % coverage each. Land cover types depicting forests and similar classes are not considered, although shown in green colors in the map, as they represent a state of dense vegetation that is mostly excluded from the flood computation.
For the Somalia use case, the land cover type _shrubs_ dominates the study area with a pixel coverage of approx. 90 % followed by the class _agriculture_ with a pixel coverage of approx. 10 %.
A crucial part of the presented data relies on a consistent and robust reference water dataset that was computed prior to the release of the GFM products. The GFM reference water dataset exploits a two-years' time series of Sentinel-1 median backscatter images that were aggregated for each month. Thus, the reference water reflects permanent water which is stable over the reference period of two years and seasonal water bodies that are periodically flooded over the duration of the reference period.
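A minimal sketch of the monthly aggregation idea is shown below; the function name, the (time, y, x) array layout and the assumption that every calendar month is covered by at least one acquisition are illustrative only.

```python
import numpy as np

def monthly_median_backscatter(stack, acquisition_months):
    """Aggregate a (time, y, x) backscatter stack into twelve monthly median images.

    Assumes every calendar month of the two-year period is covered by at least one scene."""
    months = np.asarray(acquisition_months)
    return np.stack([np.median(stack[months == m], axis=0) for m in range(1, 13)])
```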
An exclusion layer defines pixels that are not included into the final ensemble output, as examined by [30]. As already mentioned in section 2, the exclusion layer contains information about radar shadows, dense vegetation and permanent low backscatter, i. e., regions where the flood inundation is hampered, as well as topographic regions that are not prone to flooding.
In order to evaluate the ensemble likelihood values validation data covering the Myanmar study area is introduced. The data consists of a binary flood extent map derived from Sentinel-2 data, which was acquired on 2019-07-15 with a one-day delay to the acquisition of the Sentinel-1 data over Myanmar.
## V Results
In relation to the validation data, this study further examines a quantile-quantile plot supporting the evaluation of the ensemble likelihoods (see Fig. 5). The quantile-quantile plot shows the agreement of the computed ensemble likelihood values with the empirical probabilities. The majority (\(>80\) %) of the pixels fall into the first and last bins while the remaining pixels show higher empirical probabilities compared to the predicted samples.
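For readers who wish to reproduce such a comparison, a generic sketch of a binned reliability check is given below; the decile binning, the synthetic example data and the variable names are assumptions for illustration and do not mirror the exact evaluation procedure of this study.

```python
import numpy as np

def reliability_curve(pred_likelihood, observed_flood, n_bins=10):
    """Bin predicted flood likelihoods and compare them with empirical flood frequencies."""
    p = np.asarray(pred_likelihood) / 100.0          # ensemble likelihoods -> [0, 1]
    y = np.asarray(observed_flood)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    pred_mean, emp_freq, counts = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            pred_mean.append(p[sel].mean())
            emp_freq.append(y[sel].mean())
            counts.append(int(sel.sum()))
    return np.array(pred_mean), np.array(emp_freq), np.array(counts)

# Synthetic, perfectly calibrated example for illustration only.
rng = np.random.default_rng(1)
pred = rng.uniform(0.0, 100.0, size=5000)
obs = (rng.uniform(0.0, 100.0, size=5000) < pred).astype(int)
pred_mean, emp_freq, counts = reliability_curve(pred, obs)
```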
To address the objectives defined in Section 1.2, scenarios are examined to identify specific likelihood regimes reflecting the number of algorithms that were used to compute the pixel-wise likelihood.
The Myanmar use case represents a known flood event on July 16, 2019, see Fig. 6 (a)-(d).
The likelihoods in subfigure (c) represent the initial mean likelihood values that were computed as the average of all available pixel-wise flood algorithm likelihood values, prior to the application of the ensemble algorithm.
Fig. 4: Location of the study areas in Myanmar and Somalia. The Sentinel-1 scene over Myanmar (a) was acquired on 2019-07-16 11:39:44. The Sentinel-1 scene over Somalia was acquired on 2019-03-16 02:46:06. Both study areas are accompanied with an overview of pre-dominant land cover maps (b) and (d). An exclusion mask is applied and shows areas where flood computation is performed. This information is in line with the pixel-wise distribution of land cover classes for Somalia (e) and Myanmar (f), valid for non-excluded regions.
Fig. 5: Quantile-quantile plot comparing predicted probabilities (ensemble likelihood values) with empirical probabilities from validation data. Each marker plots into the respective bin range, e. g. [0.0, 0.1]. The marker size denotes the relative number of samples.
The classification subfigure (d) illustrates the initial flood classification prior to the application of the ensemble algorithm. Full consent marks pixels where 3 out of 3 algorithms agree on a flood and non-flood classification, respectively. Major flood and major non-flood indicate pixels where 2 out of 3 algorithms agree on the classification of flood or non-flood, respectively, i. e. [flood, flood, non-flood], and vice versa with [non-flood, non-flood, flood]. It should be noted that [flood, flood, nodata] also results in a major flood decision; the same applies to major non-flood decisions with [non-flood, non-flood, nodata]. Split situations mark pixels where one algorithm cannot classify a pixel and thus outputs nodata, while the remaining algorithms disagree on the classification, i. e. [flood, non-flood, nodata].
Very low initial mean likelihood values are observed over permanent water features (b). These values primarily correspond to areas that are excluded in the post-processing step and with initial full non-flood classifications (d). Much higher likelihood values are observed over image features that correspond to full or major flood classifications. Pixels with medium likelihood values around 50 correspond to split situations located along and within seasonal water bodies. The consensus approach resolves these split situations to non-flood decisions.
Fig. 7 illustrates split situations in green colors with likelihoods \(<50\) that are remapped to non-flood decisions. Thus, 100 % of the split pixels are re-classified to non-flood, thereby increasing the share of non-flood pixels for that particular likelihood value. Fig. 7a also shows major flood pixels in the likelihood range [50, 80] and full flood pixels with likelihoods \(>80\). A small number of major flood pixels with likelihoods \(>50\) are excluded from the final ensemble results and therefore marked as superior non-flood with re-assigned likelihood values of 49. As can be seen in the top right bar plot of Fig. 7b, the count of initial flood pixels that are remapped to non-flood pixels is very low, as indicated through the small size of the green bar. Also, the full non-flood class clearly dominates the classification types, followed by major non-flood decisions. In the Myanmar study area, about 10 % of all pixels are classified as flooded.
The Somalia use case represents a regular monitoring observation (i. e. non-flood event) on March 16, 2019,
where over-detections are likely to be observed as the environmental setting mostly covers dry soil, see Fig. 6 (e)-(h). In effect, the use case contains a very limited number of water pixels in general, which is also reflected in the reference water mask (f). Very low initial mean likelihoods (g) highlight the fact that the majority of the pixels are initially classified as non-flooded (h). A substantial number of split situations are observed along meandering river channels. These split situations represent potential over-detections that appear as fragmented clusters and do not follow morphological shapes, e. g. depression boundaries, and generally correspond to dark Sentinel-1 backscatter features, i. e. potential water look-alikes.
Fig. 6: Series of image chips sampled from the Myanmar (a)-(d) and Somalia (e)-(h) use cases: Sentinel-1A image with backscatter values (a) and (e), reference water and exclusion mask (b) and (f), likelihoods prior to ensemble (c) and (g) and spatial distribution of flood classifications prior to ensemble (d) and (h).
Fig. 7 (c) and (d) show medium to low likelihood values for split pixels that are remapped to major non-flood pixels and therefore increase the amount of major non-flood pixels, as depicted by the top right bar plot of (d). It is also clear that very few flood pixels are removed at the end of the ensemble algorithm with re-assigned likelihood values. Any initial flood pixel with a likelihood \(>50\) is re-assigned with a likelihood value of 49 which is depicted in (d). However, the amount of these remapped pixels is still low and the majority of pixels belongs to non-flood classes.
The next set of results supports the identification of land cover classes that are associated with different likelihood values (see Fig. 8). This analysis is performed with both use cases and aims to focus on land covers with relatively high economic and social impacts to end-users, e. g. agriculture and built-up.
Fig. 7 already depicts low likelihood values for non-flood classifications for both use cases, which also dominate the amount of considered pixels for these classes. In congruence to that, Fig. 8 shows low variance in likelihood values for full non-flood decisions across all land cover types and for both use cases. The situation differs for major non-flood decisions where the Myanmar use case shows greater likelihood variance across all land cover classes (Fig. 8a), in contrast to the Somalia use case (Fig. 8b).
As stated in Section 4, the land cover type agriculture dominates the valid pixels in the Myanmar study area and attributes to the majority of full non-flood classifications that are depicted with likelihoods of 20 and less (Fig. 7b). This is also shown in Fig. 8a where full non-flood classifications over the land cover type agriculture show very low variance.
Major non-flood classifications show greater likelihood variance that originates from split situations, which are remapped to major non-flood in the ensemble algorithm.
For the Myanmar use case, the less dominant flood pixels show low variance for full flood and greater variance for major flood decisions across all land cover types. Superior non-flood decisions are depicted with extremely low variance across all land cover types.
As stated in Section 4, the land cover type shrubs dominates the Somalia study area and attributes to the majority of full non-flood classifications that are depicted with likelihoods of approx. 20, and less (Fig. 7d). This is also shown in Fig. 8b, where full non-flood classifications over the land cover type shrubs show very low variance.
Major non-flood classifications of greater variance are depicted for the land cover types agriculture, of which approx. 10 % of pixels build the study area, and permanent
water, that is almost not present in the region. However, both major non-flood clusters are rather small.
Fig. 7: Histograms of likelihood distributions for the Myanmar (a) and (b) and Somalia (c) and (d) use case. Each box contains a histogram pair prior to the ensemble (a) and (c) and after the ensemble algorithm was applied (b) and (d). Pixels that were initially marked as flooded and have been overwritten by the exclusion layer to non-flooded are marked as superior non-flood. Depending on the distribution of the likelihood in the initial flood classification, ensemble flood likelihoods \(<50\) can occur, e. g. a classification with [flood, flood, non-flood] has the likelihoods [50, 50, 40] with a mean likelihood of 47. Each colored group sums up to 100 %, i. e., the bar widths are not comparable but give an indication about the likelihood distribution within that group. The share of each group on the total pixel count is given with a bar plot for each of the histograms.
As shown in Fig. 7d and Fig. 8b, superior non-flood classifications occur and remap any potential flood classification prior to the application of the ensemble algorithm if they were masked out by the exclusion layer. Although only given with a very low number of pixels, their majority plots into the dominating land cover class shrubs and shows a likelihood cluster of low variances.
## VI Discussion
The results provide a basis to obtain insights about the correlation of results prior to and after the application of the ensemble algorithm. In particular, the evaluations aim to link majority-based classifications and their statistical robustness to an explanatory variable, namely dominant land cover types.
### _Quantile-quantile plot_
The quantile-quantile plot reflects the statistical reliability of the probabilistic prediction as the majority of predicted data is in close proximity to the one-to-one line.
The part of the data showing high predicted probability marks a flood over-detection with reference to the validation data. At this point, it should be noted that the validation data is not real ground truth data but relies on the flood extent derived from Sentinel-2 data. Furthermore, the validation data shows a one-day delay to the acquisition of Sentinel-1 data, reflecting a probable change of the flood situation. Therefore, flood patches missing in the validation data can be considered as possible sources of flood over-detection in the prediction data.
The parts of the data showing higher empirical probabilities with medium predicted probabilities mark flood under-detections with reference to the validation data. These regions are mostly located along the edges of detected flood patches and along river channels. It is likely that the exclusion layer does not cover these regions although they are likely to introduce misclassifications.
### _Map subfigures_
Examining the series of subfigures for both use cases supports an initial assessment of the scenarios under which majority flood/non-flood classifications are identified, prior to the application of the ensemble algorithm. These results contain a significant number of split pixels where 1 out of 3 algorithms return a nodata value and 2 out of 3 disagree on the flood classification. This behavior does not indicate a failure of the system and is not to be mistaken as an inaccurate result. Nodata classifications occur if an algorithm is unable to classify a pixel with a robust likelihood, i. e. if the result shows a significantly low classification confidence. Such an output is observed over challenging SAR image features that could not be excluded from the ensemble result. The ensemble algorithm translates these unconfident results to non-flood with a likelihood value of 49, i. e. the most unconfident likelihood of the non-flood class.
In the Myanmar flood event use case, the relatively high number of split situations coincide with the location of seasonal reference water bodies and with the presence of dense vegetation. Flood waters, including those classified based on majority decisions, also correspond with seasonal water bodies. Flood classifications over these areas represent a degree of disagreement among the contributing flood algorithms. The resulting mean likelihoods may be used to caution users to consider verifying these flood hotspots with additional data prior to making decisions e. g. on resource allocation. Permanent water features, on the other hand, are classified as non-flooded despite 1 out of 3 algorithms classifying these features as flooded. It should be noted that the application of the reference water mask in the ensemble post-processing step reassigns the likelihood values of the permanent water features to 0 to indicate high confidence of being non-flooded. In these
instances, the ensemble likelihood makes the non-flood classification explicit, regardless of the number of individually contributing flood algorithms over these pixels.
Fig. 8: Boxplots of ensemble likelihood distributions with respect to land cover/ uses in the Myanmar (a) and Somalia (b) use case. The sample count for each box is indicated through the box height. Boxplots of extreme low variance are located along likelihoods of 50 and mark superior non-flood pixels. The legend in plot (b) applies for both plots.
In the Somalia use case, the relatively high number of split situations, in addition to major non-flood classifications, are located along former meandering river channels that seemed to have dried out and share the same SAR signal responses as bare soils. Herbaceous vegetation, shrubs and agriculture on dry soil are well-known challenges for flood detection based on Sentinel-1 backscatter, where the low radar backscatter tends to often result in over-detections. The consensus approach of the ensemble algorithm, although being a conservative measure, reduces these over-detections significantly and demonstrates the advantage of the ensemble algorithm over the application of a single measure alone.
### _Histograms_
For the Myanmar use case, flood classifications are associated with ensemble likelihoods between 50 and 100; major floods detected with a certain degree of disagreement are associated with a wider range of lower ensemble likelihoods, while full floods are associated with a narrower range of higher confidence ensemble likelihoods between 80 and 100. The expression of agreement, as a form of confidence in flood detections, is useful information that can be consulted to support any decision making by end users during the onset of reported flood events. It should be noted that a major flood decision originates from 1 out of 3 algorithms classifying nodata or non-flood. Regardless of the scenario, such a likelihood value is rather low and clearly indicates lower confidence for the ensemble flood classification compared to a full flood agreement, which builds on a broader data basis.
For the Somalia use case, no flood was classified, which is also a result of resolving split situations with the consensus approach. A different approach would have been to resolve split situations by favoring the flood or non-flood classification with the highest confidence in the respective class. Although less conservative, it would have been more likely to miss critical over-detections.
### _Boxplots_
The pair of boxplots compare ensemble likelihood distributions with respect to land cover/uses in the Myanmar and Somalia use cases.
In the Myanmar use case, ensemble likelihoods of flood detection increase from the lower-mid ranges corresponding to the initial mean likelihoods for all dominant land cover classes to notably higher ranges and demonstrate a medium spread of classification confidence.
In the Somalia use case, no floods could be observed but in comparison to the Myanmar use case, the likelihood variances are rather low and demonstrate higher confidence of the nonflood classifications.
At this stage, it should be noted that the exclusion layer also masks out regions that are likely to hamper the flood classification. It cannot be ruled out that for the Somalia use case, as a representative for challenging SAR-based flood detection conditions, a greater range of likelihood variance would be possible if another exclusion mask had been applied to the data. However, this set demonstrates the advantage of including auxiliary data like an exclusion mask to focus on flood-prone regions.
In comparison with the utilized land cover data, the permanent water class reveals regions of disagreement with the GFM flood product. Although the GFM flood and likelihood products are not relevant for permanent water features, an intersection of these datasets shows different results due to the significantly enhanced spatial resolution of 20 m in the GFM dataset compared to the spatial resolution of 100 m in the land cover dataset.
Furthermore, the land cover class _built-up_ is to be excluded with the exclusion layer. However, the same diversity in spatial resolution applies to this case as well as the definition of built-up areas that are not meant to be included in the computation. Apart from a small number of towns, both study areas also contain light settlements that are not covered by the exclusion mask. Having the likelihood values for these built-ups is useful information for the end-users as this land cover type is of high socio-economic interest.
## VII Conclusion
Within this paper, we describe a methodology to combine flood ensemble likelihoods of the Sentinel-1 GFM product. We further highlight the importance of interpretable and robust likelihood values to guide end-users and decision-makers in their processes.
The computation of likelihoods informs on the robustness of flood classifications. While various methods have been proposed in the literature to combine different types of uncertainty information origins, e. g. probabilities and fuzzy values, their computation and fusion is rather complex and arguably hampers their interpretability and a straight-forward crisis response. In contrast, the method presented here is easy to interpret and its application is straight forward, as it is solely based on the computation of the arithmetic mean of the individual flood algorithm likelihoods. Considering the value range, the classification of a flood pixel with an associated likelihood close to 100 is considered to be more confident than a flood pixel with a likelihood of close to 50, and is based on a wider range of input data. Furthermore, flood classifications with low likelihood values, i. e. values in the range [50, 60], originate from an ensemble configuration with one algorithm classifying a pixel as non-flooded or nodata. Consequently, the ensemble likelihood product alerts end-users to the presence of non-consent flood classifications, which should be treated with care in decision making.
The first results show how ensemble likelihoods function as a heuristic to identify and provide a first indication of the performance, as well as the agreement among the three algorithms contributing to the final flood classification. These aggregated likelihood values capture cumulative
uncertainties from data, model architecture, algorithmic level and interpretation. Further reduction of uncertainties requires more dedicated investigative methods. Once generated, this kind of uncertainty information can be used by both of the aforementioned communities. In particular, researchers or algorithm developers are offered guidance to investigate how to minimize uncertainties with respect to certain explanatory variables, e. g. land cover types. End users may consult likelihood information that supports cautioning against the direct use of flood classification product in areas with low likelihoods of flood classification, or where classifications are based on majority, rather than full detections. For resource allocation, it may be sufficient to identify areas of certain extents as potential hotspots, even if likelihoods associated with individual pixels in the vicinity of the areas of interest are lower.
Based on the preliminary results, the benefits and limitations of the ensemble likelihood approach are highlighted and provide a starting point for further developments and applications in the two research and end-user communities. Further assessments may be conducted to include additional variables, in addition to extending the number and variety of use cases.
|
2304.02843 | Heat statistics in the relaxation process of the Edwards-Wilkinson
elastic manifold | The stochastic thermodynamics of systems with a few degrees of freedom has
been studied extensively so far. We would like to extend the study to systems
with more degrees of freedom and even further-continuous fields with infinite
degrees of freedom. The simplest case for a continuous stochastic field is the
Edwards-Wilkinson elastic manifold. It is an exactly solvable model of which
the heat statistics in the relaxation process can be calculated analytically.
The cumulants require a cutoff spacing to avoid ultra-violet divergence. The
scaling behavior of the heat cumulants with time and the system size as well as
the large deviation rate function of the heat statistics in the large size
limit is obtained. | Yu-Xin Wu, Jin-Fu Chen, Ji-Hui Pei, Fan Zhang, H. T. Quan | 2023-04-06T03:20:23Z | http://arxiv.org/abs/2304.02843v1 | # Heat statistics in the relaxation process of the Edwards-Wilkinson elastic manifold
###### Abstract
The stochastic thermodynamics of systems with a few degrees of freedom has been studied extensively so far. We would like to extend the study to systems with more degrees of freedom and even further-continuous fields with infinite degrees of freedom. The simplest case for a continuous stochastic field is the Edwards-Wilkinson elastic manifold. It is an exactly solvable model of which the heat statistics in the relaxation process can be calculated analytically. The cumulants require a cutoff spacing to avoid ultra-violet divergence. The scaling behavior of the heat cumulants with time and the system size as well as the large deviation rate function of the heat statistics in the large size limit is obtained.
## I Introduction
Historically, people studied thermodynamics in macroscopic systems such as ideal gas with up to \(10^{23}\) molecules. Due to the huge number of degrees of freedom in the macroscopic scale, it is impossible to extract the trajectories of individual particles explicitly. Hence it is not possible to study thermodynamics of macroscopic systems in arbitrary far from equilibrium processes. Nevertheless, for mesoscopic systems with only a few degrees of freedom, stochastic dynamics (Langevin equation, Fokker-Planck equation, master equation) provides detailed information about the system. Prominent examples of mesoscopic systems include colloidal particles, macromolecules, nanodevices and so on [1; 2]. In all these examples, researchers focus on the dynamics of a few degrees of freedom of the system while coarse-graining all the degrees of freedom of the reservoir. Mesoscopic systems can be driven out of equilibrium by external driving, for instance, by varying the temperature or by controlling them with optical tweezers [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18].
With the equation of motion, e.g., the Langevin equation, Fokker-Planck equation or master equation, researchers are able to establish a framework of thermodynamics for mesoscopic systems in arbitrarily far-from-equilibrium processes. This is stochastic thermodynamics, in which thermodynamic quantities such as work, heat and entropy production in nonequilibrium processes have been explored extensively in both classical and quantum realms [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. In the study of work or heat distributions for extreme nonequilibrium processes, rare events with exponentially small probabilities have dominant contributions, making the finite-sampling error particularly serious. Hence previous studies, be they experimental or computer simulations, are predominantly for small systems, i.e., those with a few degrees of freedom [49]. Nevertheless, systems with a few degrees of freedom are too special. Therefore it is desirable to extend the study of stochastic thermodynamics to more complicated systems. We thus would like to extend the studies to systems with more degrees of freedom, for example, stochastic fields. Hopefully, in some exactly solvable model we can obtain analytical results about the work and heat distributions. These rigorous results about work or heat distributions in systems with many degrees of freedom not only have pedagogical value but also may bring some insights to the understanding of thermodynamics in extreme nonequilibrium processes, as P. W. Anderson once advocated, "More is different" [50]. While many researchers are interested in the dynamic properties of stochastic fields [51; 52; 53; 54; 55], less research has been carried out from the perspective of stochastic thermodynamics, except for Refs. [56; 57; 58; 59; 60], so far as we know.
In this article we study the thermodynamics of an elastic manifold whose underlying dynamics is described by the Edwards-Wilkinson (EW) equation [61]
\[\partial_{t}h(\mathbf{x},t)=\nu\nabla^{2}h(\mathbf{x},t)+\xi(\mathbf{x},t). \tag{1}\]
where \(h(\mathbf{x},t)\) is the local height at spatial point \(\mathbf{x}\) at time \(t\), \(\nu\) is the diffusive coefficient and \(\xi(\mathbf{x},t)\) is the Gaussian white noise.
The problem we analyze is the relaxation of an elastic manifold described by the EW equation. The elastic manifold is initially put in contact with a heat reservoir at the inverse temperature \(\beta^{\prime}\). After initial equilibration with the first heat reservoir at \(\beta^{\prime}\) the system is detached from it, and is put in contact with a second heat reservoir at the inverse temperature \(\beta\). The manifold subsequently tries to adapt to the working temperature [55]. The relaxation is characterized by the stochastic heat absorbed from/released into the surrounding reservoir during a period of time \(\tau\). We are interested in the average and fluctuation of the heat in such a process. We find several generic properties of the average and fluctuating heat in the relaxation process of the EW elastic manifold. By employing the Feynman-Kac method [45; 62], we obtain analytical results of the characteristic function of heat for the EW model during an arbitrary relaxation period \(\tau\) with an arbitrary diffusive coefficient \(\nu\) and analyze
the scaling behavior of the cumulants of heat with time. Analytical results of the heat statistics bring important insights into understanding the fluctuating property of heat in such a concrete and exactly solvable model. We also verify from the analytical results that the heat statistics satisfy the fluctuation theorem of heat exchange [63]. The large deviation rate function of heat statistics in the large size limit is also analyzed.
The rest of this article is organized as follows. In Section II we introduce the EW model. In Section III we define the stochastic heat and obtain analytical results of the characteristic function of heat using the Feynman-Kac approach. We also compute the cumulants of heat and discuss their scaling behavior with time and the system size. Conclusions are given in Section IV.
## II The model
A \(d\)-dimensional elastic manifold, with finite size \(2L\) in each direction, fluctuates under thermal noise. Its local height \(h(\mathbf{x},t)\) at spatial point \(\mathbf{x}\) at time \(t\) evolves according to the EW equation Eq. (1), which takes the form of a multivariable overdamped Langevin equation [1]. The thermal noise \(\xi(\mathbf{x},t)\) is white in nature, i.e., \(\langle\xi(\mathbf{x},t)\rangle=0\), \(\langle\xi(\mathbf{x},t)\xi(\mathbf{x}^{\prime},t^{\prime})\rangle=\Gamma\delta(\mathbf{x }-\mathbf{x}^{\prime})\delta(t-t^{\prime})\), with amplitude \(\Gamma=2/\beta\). The EW energy is just that of a massless field with Hamiltonian \(H_{S}=\nu\int d\mathbf{x}(\nabla h(\mathbf{x},t))^{2}/2\). Here the subscript \(S\) refers to the system.
Initially, the system is prepared in an equilibrium state with the inverse temperature \(\beta^{\prime}\) characterized by a Gibbs-Boltzmann distribution in the configuration space, i.e., the probability \(\mathcal{P}(h,t)\) to find the system in the configuration \(\{h(\mathbf{x},t)\}\) is the Gibbs-Boltzmann distribution
\[\mathcal{P}(h,0)=\mathcal{N}^{\prime-1}\exp\Big{[}-\beta^{\prime}\cdot\frac{ \nu}{2}\int d\mathbf{x}\Big{(}\nabla h(\mathbf{x},0)\Big{)}^{2}\Big{]} \tag{2}\]
where \(\mathcal{N}^{\prime}\) is the normalization constant
\[\mathcal{N}^{\prime}=\int dh(\mathbf{x},0)\exp\Big{[}-\beta^{\prime}\cdot\frac{ \nu}{2}\int d\mathbf{x}\Big{(}\nabla h(\mathbf{x},0)\Big{)}^{2}\Big{]}. \tag{3}\]
Here the integration in the normalization constant is taken over all possible initial configurations while the one in the exponential factor is taken over all spatial points.
After initial equilibration, the system is detached from the first heat reservoir, and is placed in contact with a second heat reservoir at the inverse temperature \(\beta\), which is different from \(\beta^{\prime}\). The elastic manifold subsequently relaxes towards the equilibrium state at temperature \(\beta\) since no external driving is involved. The heat absorbed/released is a fluctuating variable for the system undergoing stochastic motion. We are interested in the heat statistics in such a relaxation process.
For a finite-size manifold we take periodic boundary conditions along each \(\mathbf{x}\) direction. Following Refs. [1; 52] we employ a Fourier representation of the height field
\[h(\mathbf{x},t)=\frac{1}{(2\pi)^{d}}{\sum_{\mathbf{q}}}e^{i\mathbf{q}\cdot\mathbf{x}}h_{\mathbf{q }}(t), \tag{4}\]
\[h_{\mathbf{q}}(t)=\int d\mathbf{x}e^{-i\mathbf{q}\cdot\mathbf{x}}h(\mathbf{x},t), \tag{5}\]
where \(\mathbf{q}\) represents a wavevector with \(q_{j}=n_{j}\pi/L\) (\(j=x,y,z\dots,\ n_{j}=\pm 1,\pm 2...\) and \(h_{\mathbf{q}=\mathbf{0}}(t)=0\) for all time \(t\)) [55].
The evolution of the Fourier component is given by
\[\frac{\partial h_{\mathbf{q}}(t)}{\partial t}=-\nu q^{2}h_{\mathbf{q}}(t)+\xi_{\mathbf{q} }(t), \tag{6}\]
\[\langle\xi_{\mathbf{q}}(t)\rangle=0, \tag{7}\]
\[\langle\xi_{\mathbf{q}}(t)\xi_{\mathbf{q^{\prime}}}(t^{\prime})\rangle=\frac{2}{\beta }(2\pi)^{d}\delta(t-t^{\prime})\delta_{\mathbf{q},-\mathbf{q^{\prime}}}. \tag{8}\]
The normalization constant in Eq. (3) can be computed as
\[\mathcal{N}^{\prime} =\int d\{h_{\mathbf{q}}(0)\}\exp\Big{[}-\beta^{\prime}\nu\frac{1}{(2 \pi)^{2d}}{\sum_{\mathbf{q}(q_{j}>0)}}q^{2}h_{\mathbf{q}}(0)h_{-\mathbf{q}}(0)\Big{]}\] \[=\prod_{\mathbf{q}(q_{j}>0)}\frac{\pi(2\pi)^{2d}}{\beta^{\prime}\nu q ^{2}}. \tag{9}\]
where \(q^{2}\) stands for the squared modulus of \(\mathbf{q}\).
The probability density of the system state \(\mathcal{P}(h,t)\) evolves under the governing of the Fokker-Planck equation
\[\frac{\partial\mathcal{P}(h,t)}{\partial t}= -\int d\mathbf{x}\frac{\delta}{\delta h}\Big{[}\nu\nabla^{2}h(\mathbf{x}, t)\mathcal{P}(h,t)\Big{]}\] \[+\frac{\Gamma}{2}\int d\mathbf{x}\frac{\delta^{2}}{\delta h^{2}} \mathcal{P}(h,t). \tag{10}\]
In the Fourier space, the probability of the height field configuration is the product of the real and the imaginary part over all modes
\[\mathcal{P}(\{h_{\mathbf{q}}\},t)=\prod_{\mathbf{q}}\mathcal{P}(h_{\mathbf{q}},t)=\prod_{\mathbf{q}}\mathcal{P}(h_{\mathbf{q}}^{R},t)\mathcal{P}(h_{\mathbf{q}}^{I},t) \tag{11}\]
where
\[h_{\mathbf{q}}^{R}=\text{Re}(h_{\mathbf{q}}),\quad h_{\mathbf{q}}^{I}=\text{Im}(h_{\mathbf{q}}). \tag{12}\]
The Fokker-Planck equation in the Fourier space can be then written into two independent parts: the real part and the imaginary part [64]
\[\frac{\partial\mathcal{P}(h_{\mathbf{q}}^{R,I},t)}{\partial t}=\frac{(2 \pi)^{d}}{2\beta}\frac{\partial^{2}\mathcal{P}}{\partial(h_{\mathbf{q}}^{R,I})^{2}}+ \nu q^{2}\mathcal{P}+\nu q^{2}h_{\mathbf{q}}^{R,I}\frac{\partial\mathcal{P}}{ \partial h_{\mathbf{q}}^{R,I}}. \tag{13}\]
Having introduced the model, in the following we will calculate the heat statistics in the relaxation process.
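Since each Fourier mode in Eq. (6) evolves as an independent Ornstein-Uhlenbeck process, the relaxation described above is easy to probe numerically. The following sketch is only an illustration (it is not part of the original derivation): it uses the parameter values of Fig. 1 (\(d=1\), \(\nu=1\), \(\beta^{\prime}=4\), \(\beta=2\), \(L=30\), \(a=0.2\)), rescales the mode amplitudes so that every real degree of freedom has equilibrium variance \(1/\beta\) (which removes the \((2\pi)^{d}\) normalization factors), and checks that the second moments relax from \(1/\beta^{\prime}\) to \(1/\beta\) at rate \(2\nu q^{2}\).

```python
import numpy as np

# Illustrative check (not from the paper): each Fourier mode of Eq. (6) is an independent
# Ornstein-Uhlenbeck process. Mode amplitudes are rescaled so that every real degree of
# freedom has equilibrium variance 1/beta, which removes the (2*pi)^d factors.
rng = np.random.default_rng(0)
nu, L, a = 1.0, 30.0, 0.2            # parameter values of Fig. 1 (d = 1)
beta_p, beta = 4.0, 2.0              # initial and final inverse temperatures
t, n_traj = 0.5, 20000               # elapsed time and number of stochastic realisations

q = np.pi / L * np.arange(1, int(round(L / a)) + 1)   # modes q = n*pi/L up to the cutoff pi/a
gamma = nu * q**2                                     # relaxation rate of each mode amplitude

u0 = rng.normal(0.0, beta_p**-0.5, size=(n_traj, q.size))    # initial equilibrium at beta'
decay = np.exp(-gamma * t)                                   # exact OU propagator over time t
ut = u0 * decay + np.sqrt((1.0 - decay**2) / beta) * rng.normal(size=u0.shape)

var_sim = ut.var(axis=0)
var_exact = np.exp(-2 * gamma * t) / beta_p + (1.0 - np.exp(-2 * gamma * t)) / beta
print(np.max(np.abs(var_sim / var_exact - 1.0)))   # small: agreement up to sampling error
```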
## III Heat statistics
In this section we study heat statistics of the EW elastic manifold in the relaxation process. First, we obtain the analytical results of heat statistics and verify the fluctuation theorem of heat exchange. Second, we study the asymptotic behavior of the cumulants. Third, we calculate the large deviation function of heat statistics in the large size limit.
### Characteristic function
Since no external driving is applied to the system, no work is performed during the relaxation process. The fluctuating heat \(Q\) absorbed from the heat reservoir equals the energy difference between the initial and the final states over a time period \(\tau\)
\[Q=H_{S}(h(x,\tau))-H_{S}(h(x,0)). \tag{14}\]
The characteristic function of heat \(\chi_{\tau}(u)\) is defined as the Fourier transform of the heat distribution
\[\chi_{\tau}(u)=\int dQ\exp(iuQ)\mathcal{P}(Q,\tau). \tag{15}\]
Here \(\mathcal{P}(Q,\tau)\) stands for the probability of the heat \(Q\) transferred from the heat reservoir to the system during the period of time \(\tau\). The characteristic function of heat \(\chi_{\tau}(u)\) can be calculated using the Feynman-Kac approach [45; 47; 62]
\[\chi_{\tau}(u) =\langle\exp(iuQ)\rangle\] \[=\int dhe^{iuH_{S}(h(x,\tau))}\eta(h,\tau) \tag{16}\]
where the probability-density-like function \(\eta(h,\tau)\) satisfies Eq. (10) and Eq. (13) with the initial condition
\[\eta(h,0)=e^{-iuH_{S}(h(x,0))}\mathcal{P}(h,0). \tag{17}\]
The probability-density-like function \(\eta(h,\tau)\) is solved in the Fourier space (See Appendix A for detailed derivation) and we obtain the characteristic function of heat for the relaxation process over a time period of \(\tau\)
\[\chi_{\tau}(u)=\prod_{\mathbf{q}(q_{j}\geq\frac{\pi}{L})}\frac{\beta\beta^{\prime}\exp(2\nu q^{2}\tau)}{-u(i\beta^{\prime}-i\beta-u)\Big{[}\exp(2\nu q^{2}\tau)-1\Big{]}+\beta\beta^{\prime}\exp(2\nu q^{2}\tau)}. \tag{18}\]
The wavevector component in each direction only takes positive discrete values \(q_{j}=n_{j}\pi/L,n_{j}=1,2...\)
We do the self-consistent check of the analytic result Eq. (18) from three aspects:
1. The distribution of heat satisfies the conservation of probability
\[\chi_{\tau}(0)=1. \tag{19}\]
2. One can see the characteristic function of heat exhibits the following symmetry:
\[\chi_{\tau}(u)=\chi_{\tau}(i\beta^{\prime}-i\beta-u), \tag{20}\]
indicating that the heat distribution satisfies the fluctuation theorem of heat exchange [47; 63; 23]
\[\langle e^{iuQ}\rangle=\langle e^{(-iu+\beta-\beta^{\prime})Q}\rangle. \tag{21}\]
By setting \(u=0\), we obtain the relation \(\chi_{\tau}(i\beta^{\prime}-i\beta)=1\), which is exactly the fluctuation theorem of heat exchange in the integral form \(\langle\exp[-(\beta^{\prime}-\beta)Q]\rangle=1\)[63].
3. In the long time limit \(\tau\rightarrow\infty\), the characteristic function becomes
\[\lim_{\tau\rightarrow\infty}\chi_{\tau}(u)=\prod_{\mathbf{q}(q_{j}\geq\frac{\pi}{L})}\frac{\beta\beta^{\prime}}{(u+i\beta)(u-i\beta^{\prime})}.\]
This result, independent of the relaxation dynamics, can be written in the form
\[\lim_{\tau\rightarrow\infty}\chi_{\tau}(u)=\Big{\langle}e^{iuH_{S}(h(x,\tau) )}\Big{\rangle}_{\beta}\Big{\langle}e^{-iuH_{S}(h(x,0))}\Big{\rangle}_{\beta^{ \prime}} \tag{22}\]
where the initial distribution (thermal equilibrium with the inverse temperature \(\beta^{\prime}\)) and the final distribution (thermal equilibrium with the inverse temperature \(\beta\)) are sampled independently, reflecting the complete thermalization of the system [29]. This result agrees with our intuition.
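These consistency checks are also easy to reproduce numerically. The short sketch below (an illustration only, not part of the paper) evaluates Eq. (18) for \(d=1\) with the parameter values of Fig. 1, rewriting each factor in terms of \(\exp(-2\nu q^{2}\tau)\) to avoid numerical overflow, and verifies the normalization of Eq. (19), the symmetry of Eq. (20), and the integral fluctuation theorem of heat exchange.

```python
import numpy as np

# Illustrative evaluation of Eq. (18) in d = 1 (not from the paper), Fig. 1 parameter values.
nu, L, a, tau = 1.0, 30.0, 0.2, 1.0
beta_p, beta = 4.0, 2.0
q = np.pi / L * np.arange(1, int(round(L / a)) + 1)   # modes q = n*pi/L up to the cutoff pi/a

def chi(u):
    """Characteristic function of heat, Eq. (18), written with exp(-2 nu q^2 tau) for stability."""
    em = np.exp(-2.0 * nu * q**2 * tau)
    factors = beta * beta_p / (-u * (1j * beta_p - 1j * beta - u) * (1.0 - em) + beta * beta_p)
    return np.prod(factors)

u = 0.7 + 0.3j                                            # arbitrary test argument
print(abs(chi(0.0) - 1.0))                                # Eq. (19): conservation of probability
print(abs(chi(u) - chi(1j * beta_p - 1j * beta - u)))     # Eq. (20): symmetry of the distribution
print(abs(chi(1j * (beta_p - beta)) - 1.0))               # integral fluctuation theorem of heat exchange
```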
### Cumulants
The cumulants of heat can be derived by taking derivatives of the logarithm of the characteristic function \(\chi_{\tau}(u)\) with respect to \(u\) at \(u=0\), with the first cumulant representing the average heat and the second one standing for the variance.
The average heat is
\[\langle Q\rangle =\frac{1}{i}\frac{d\ln\chi_{\tau}(u)}{du}|_{u=0}\] \[=\sum_{\mathbf{q}(\frac{\pi}{a}\geq q_{j}\geq\frac{\pi}{L})}\frac{\Big{[}1-\exp(-2\nu q^{2}\tau)\Big{]}(\beta^{\prime}-\beta)}{\beta\beta^{\prime}}\] \[=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\mathbf{q}\Big{[}1-\exp(-2\nu q^{2}\tau)\Big{]}. \tag{23}\]
A cutoff \(\pi/a\) of the wavevector is needed to avoid ultraviolet divergence, i.e., we introduce a smallest spacing \(a\) in this elastic manifold [1, 65, 66]. Since we consider a continuous field, the cutoff spacing is always much smaller than the system size \(a\ll L\). We will see that the choice of the value of \(a\) will influence the average heat (See Fig. 1 (b) inset plot).
We rewrite the average heat \(\langle Q\rangle\) with a change of the variable \(\mathbf{s}=L\mathbf{q}\)
\[\langle Q\rangle=\frac{(\beta^{\prime}-\beta)}{\beta\beta^{\prime}\pi^{d}}f \Big{(}\frac{\nu\tau}{L^{2}}\Big{)}, \tag{24}\]
where
\[f(r) =\int_{\pi}^{\frac{L\pi}{a}}d\mathbf{s}\Big{[}1-e^{-2rs^{2}}\Big{]}\] \[=(\frac{L-a}{a}\pi)^{d}+(\frac{\pi}{8r})^{\frac{d}{2}}\Big{[}\mathrm{Erf}(\pi\sqrt{2r})-\mathrm{Erf}(\frac{\pi L\sqrt{2r}}{a})\Big{]}^{d}. \tag{25}\]
\(\mathrm{Erf}(r)\) is the error function.
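The closed form of Eqs. (24)-(25) can be checked against the discrete mode sum in Eq. (23); a short numerical sketch (illustrative only, not from the paper) for \(d=1\) with the parameter values of Fig. 1 is given below.

```python
import numpy as np
from scipy.special import erf

# Illustrative cross-check (not from the paper) of the average heat in d = 1, Fig. 1 parameters:
# closed form, Eqs. (24)-(25), versus the discrete sum over modes, Eq. (23).
nu, L, a = 1.0, 30.0, 0.2
beta_p, beta = 4.0, 2.0

def Q_avg_closed(tau):
    r = nu * tau / L**2
    f = ((L - a) / a * np.pi
         + np.sqrt(np.pi / (8.0 * r)) * (erf(np.pi * np.sqrt(2.0 * r)) - erf(np.pi * L * np.sqrt(2.0 * r) / a)))
    return (beta_p - beta) / (beta * beta_p * np.pi) * f          # Eq. (24) for d = 1

def Q_avg_sum(tau):
    q = np.pi / L * np.arange(1, int(round(L / a)) + 1)           # modes q = n*pi/L up to pi/a
    return ((beta_p - beta) / (beta * beta_p) * (1.0 - np.exp(-2.0 * nu * q**2 * tau))).sum()

for tau in (1e-3, 1e-1, 1e1, 1e3):
    print(tau, Q_avg_closed(tau), Q_avg_sum(tau))
# The two agree up to the sum-to-integral discretisation, and saturate near
# (beta_p - beta)/(beta*beta_p) * L/a for tau >> L^2/nu, Eq. (28).
```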
In the following we discuss the asymptotic behavior of the average heat as a function of time. For the one-dimensional case, the average heat is illustrated in Fig. 1. At the initial stage, for \(\tau\ll a^{2}/\nu\),
\[\langle Q\rangle\approx\frac{2\pi^{2}}{3a^{2}}\frac{(\beta^{\prime}-\beta)}{ \beta\beta^{\prime}}\nu\tau\frac{L}{a}. \tag{26}\]
The average heat initially increases with time linearly. This is Newton's law of cooling.
For the intermediate time \(a^{2}/\nu\ll\tau\ll L^{2}/\nu\),
\[\langle Q\rangle\approx\frac{(\beta^{\prime}-\beta)}{\beta\beta^{\prime}} \frac{L}{a}\Big{(}1-\frac{a}{\sqrt{8\nu}}\tau^{-1/2}\Big{)}. \tag{27}\]
Figure 1: Average heat as a function of time. Parameters for both panels: \(d=1,\ \nu=1,\ \beta^{\prime}=4,\ \beta=2\). (a) \(\langle Q\rangle\) as a function of \(\tau\) for three system sizes \(L=30,35,40\), fixing \(a=0.2\). Inset: the saturation value of average heat \(\langle Q\rangle_{st}\) as a function of system size \(L\). (b) \(\langle Q\rangle\) as a function of \(\tau\) for three cutoff spacings \(a=0.2,0.5,1.0\), fixing \(L=10\). Inset: the saturation value of average heat \(\langle Q\rangle_{st}\) as a function of cutoff spacing \(a\).
Figure 2: Average heat of a mode \(\mathbf{q}\) for different time durations. The parameters take values \(L=30,\ d=1,\ \nu=1,\ \beta^{\prime}=4,\ \beta=2\) and the curves correspond to three values of time delay \(\tau=10^{1},10^{0},10^{-1}\) from the bottom to the top. The dashed line stands for the saturation value.
It exhibits \(\tau^{-1/2}\) scaling with time.
In the long time limit, for \(\tau\gg L^{2}/\nu\),
\[\langle Q\rangle\to\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\frac{L}{a}, \tag{28}\]
the average heat saturates, which is a consequence of the equipartition theorem. The saturation value of heat is an extensive quantity which scales linearly with the system size \(L\). It will not diverge for a finite spacing \(a\) as a result of finite resolution.
From Eq. (23) one can see the average heat for every \(\mathbf{q}\) mode is
\[\langle Q_{\mathbf{q}}\rangle=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}} \Big{(}\frac{\pi}{L}\Big{)}^{-d}\Big{[}1-\exp(-2\nu q^{2}\tau)\Big{]}. \tag{29}\]
As we can see from this equation and Fig. 2, heat transfer occurs mainly through the high-energy (large-\(q\)) modes, and it proceeds more quickly in these modes than in the lower-energy ones.
For a fixed time duration \(\tau\), in the small-wavevector limit, i.e., \(2\nu q^{2}\tau\ll 1\), the average heat of a mode is linear in \(\tau\),
\[\langle Q_{\mathbf{q}}\rangle=2\nu\tau\frac{\beta^{\prime}-\beta}{\beta\beta^{ \prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d}q^{2}, \tag{30}\]
which is Newton's law of cooling.
On the other hand, if one takes the large wavevector limit, i.e., \(2\nu q^{2}\tau\gg 1\), the average heat reaches the asymptotic value
\[\langle Q_{\mathbf{q}}\rangle=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}} \Big{(}\frac{\pi}{L}\Big{)}^{-d}, \tag{31}\]
which is the result of the equipartition theorem.
From the analytical result of heat statistics Eq. (18) we can also study the variance of heat. The variance of heat is defined as \(\mathrm{var}(Q)=\langle Q^{2}\rangle-\langle Q\rangle^{2}\), and can be calculated as
\[\mathrm{var}(Q) =\frac{1}{i^{2}}\frac{d^{2}\ln\chi_{\tau}(u)}{du^{2}}|_{u=0}\] \[=\Big{(}\frac{\pi}{L}\Big{)}^{-d}\frac{1}{\beta^{2}\beta^{\prime 2}}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\mathbf{q}\,e^{-4\nu q^{2}\tau}(-1+e^{2\nu q^{2}\tau})\] \[\quad\bigg{[}(-1+e^{2\nu q^{2}\tau})\beta^{2}+2\beta\beta^{\prime}+(-1+e^{2\nu q^{2}\tau})\beta^{\prime 2}\bigg{]}\] \[=\frac{1}{\beta^{2}\beta^{\prime 2}\pi^{d}}g\Big{(}\frac{\nu\tau}{L^{2}}\Big{)} \tag{32}\]
where
\[g(r) =\int_{\pi}^{\frac{L\pi}{a}}d\mathbf{s}\Big{[}(\beta^{2}+\beta^{ \prime 2})(1-2e^{-2rs^{2}}+e^{-4rs^{2}})\] \[\quad+2\beta\beta^{\prime}(-e^{-4rs^{2}}+e^{-2rs^{2}})\Big{]}.\]
In the one-dimensional case, for \(\tau\ll a^{2}/\nu\), we have
\[\mathrm{var}(Q,\tau)\approx\frac{4\pi^{2}}{3a^{2}\beta\beta^{\prime}}\nu\tau \frac{L}{a}. \tag{33}\]
It grows with time linearly in the very beginning.
For \(a^{2}/\nu\ll\tau\ll L^{2}/\nu\),
\[\mathrm{var}(Q,\tau)\approx\frac{4\pi^{4}\nu^{2}\tau^{2}}{5\beta^{2}\beta^{ \prime 2}a^{4}}(\beta^{2}-3\beta\beta^{\prime}+\beta^{\prime 2})\frac{L}{a}. \tag{34}\]
It scales as \(\tau^{2}\) as time elapses.
Finally, for \(\tau\gg L^{2}/\nu\), it reaches the saturation value in the long time,
\[\mathrm{var}(Q,\tau)\approx\frac{\beta^{2}+\beta^{\prime 2}}{\beta^{2}\beta^{ \prime 2}}\frac{L}{a}. \tag{35}\]
Figure 3: Variance of heat as a function of time. Parameters for both panels: \(d=1,\;\nu=1,\;\beta^{\prime}=4,\;\beta=2\). (a) \(\mathrm{var}(Q)\) as a function of \(\tau\) for three system sizes \(L=30,35,40\), fixing \(a=0.2\). Inset: the saturation value of heat variance \(\mathrm{var}(Q)_{st}\) as a function of system size \(L\). (b) \(\mathrm{var}(Q)\) as a function of \(\tau\) for three cutoff spacings \(a=0.2,0.25,0.3\), fixing \(L=10\). Inset: the saturation value of heat variance \(\mathrm{var}(Q)_{st}\) as a function of cutoff spacing \(a\).
As can be seen from Fig. 3, the variance of heat depends on the cutoff spacing \(a\) as well. Similar to the average heat, the saturation value of variance increases linearly with the system size \(L\) and will not diverge for finite spacing \(a\). Higher order cumulants of heat can be analyzed in a similar way.
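The first two cumulants can also be obtained by numerically differentiating \(\ln\chi_{\tau}(u)\) at \(u=0\). The sketch below (illustrative only, not part of the paper) does this by finite differences for \(d=1\) with the parameter values of Fig. 1 and compares the result with the discrete-mode forms behind Eqs. (23) and (32).

```python
import numpy as np

# Illustrative check (not from the paper): heat cumulants from finite differences of ln chi_tau(u)
# at u = 0, Eq. (18), compared with the mode sums behind Eqs. (23) and (32), in d = 1.
nu, L, a, tau = 1.0, 30.0, 0.2, 1.0
beta_p, beta = 4.0, 2.0
q = np.pi / L * np.arange(1, int(round(L / a)) + 1)
em = np.exp(-2.0 * nu * q**2 * tau)

def log_chi(u):
    return np.sum(np.log(beta * beta_p / (-u * (1j * beta_p - 1j * beta - u) * (1.0 - em) + beta * beta_p)))

eps = 1e-4
d1 = (log_chi(eps) - log_chi(-eps)) / (2.0 * eps)                    # d ln(chi)/du at u = 0
d2 = (log_chi(eps) - 2.0 * log_chi(0.0) + log_chi(-eps)) / eps**2    # d^2 ln(chi)/du^2 at u = 0

Q_avg = ((beta_p - beta) / (beta * beta_p) * (1.0 - em)).sum()
Q_var = ((1.0 - em) * ((1.0 - em) * (beta**2 + beta_p**2) + 2.0 * em * beta * beta_p)).sum() / (beta * beta_p)**2

print((d1 / 1j).real, Q_avg)    # first cumulant:  <Q>    = (1/i)   d ln(chi)/du     at u = 0
print((-d2).real, Q_var)        # second cumulant: var(Q) = (1/i^2) d^2 ln(chi)/du^2 at u = 0
```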
### Large deviation rate function
We can also study the large deviation rate function of the heat statistics in the large size limit.
The scaled cumulant generating function (SCGF) \(\phi(u,\tau)\) of heat per volume over time \(\tau\), which is defined through
\[\langle\exp[(2L)^{d}u\frac{Q}{(2L)^{d}}]\rangle\asymp_{L\to\infty}e^{(2L)^{d} \phi(u,\tau)} \tag{36}\]
or
\[\phi(u,\tau) =\lim_{L\to\infty}\frac{1}{(2L)^{d}}\ln\langle\exp[(2L)^{d}u\frac {Q}{(2L)^{d}}]\rangle\] \[=\lim_{L\to\infty}\frac{1}{(2L)^{d}}\ln\chi_{\tau}(-iu), \tag{37}\]
can be computed by
\[\phi(u,\tau)=\lim_{L\to\infty}-\frac{1}{(2\pi)^{d}}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\mathbf{q}\ln\Big{(}\frac{-u(\beta^{\prime}-\beta+u)}{\beta\beta^{\prime}}\Big{[}1-\exp(-2\nu q^{2}\tau)\Big{]}+1\Big{)}.\]
The large deviation rate function for heat per volume over time \(\tau\) is just the Legendre-Fenchel transform of the SCGF [67]
\[I(\frac{Q}{(2L)^{d}},\tau) =\lim_{L\to\infty}-\frac{1}{(2L)^{d}}\ln\mathcal{P}(\frac{Q}{(2L) ^{d}},\tau)\] \[=\sup_{u\in\mathbb{R}}\Bigl{\{}u\frac{Q}{(2L)^{d}}-\phi(u,\tau) \Bigr{\}}. \tag{38}\]
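Numerically, the Legendre-Fenchel transform in Eq. (38) can be carried out on a grid of \(u\); within the interval \(-\beta^{\prime}<u<\beta\), where the argument of the logarithm in the SCGF stays positive, the sketch below (an illustration, not from the paper) evaluates \(\phi(u,\tau)\) for \(d=1\) and maximises \(u\rho-\phi(u,\tau)\) directly over the grid.

```python
import numpy as np

# Illustrative sketch (not from the paper): the rate function of Eq. (38) in d = 1, from the SCGF
# phi(u, tau) evaluated on a grid of real u inside its convergence interval (-beta', beta).
nu, a, tau = 1.0, 0.2, 1.0
beta_p, beta = 4.0, 2.0
q = np.linspace(1e-6, np.pi / a, 4000)        # pi/L -> 0 in the large size limit
dq = q[1] - q[0]

def phi(u):
    arg = 1.0 - u * (beta_p - beta + u) * (1.0 - np.exp(-2.0 * nu * q**2 * tau)) / (beta * beta_p)
    return -np.sum(np.log(arg)) * dq / (2.0 * np.pi)      # d = 1 prefactor 1/(2*pi)^d

u_grid = np.linspace(-beta_p + 1e-3, beta - 1e-3, 801)
phi_grid = np.array([phi(u) for u in u_grid])

def rate(rho):
    """Grid approximation of I(rho), Eq. (38), for the heat per volume rho = Q/(2L)."""
    return np.max(u_grid * rho - phi_grid)

for rho in (0.0, 0.1, 0.3):
    print(rho, rate(rho))
```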
We emphasize that the large deviation rate function of the work distribution in the large size limit has been studied previously in other models (see, e.g., Refs. [68; 49]). But as far as we know, the large deviation function of heat in the large size limit has not been reported before.
With the large deviation rate function Eq. (38), we can write down the probability distribution of heat per volume over time \(\tau\) as
\[\mathcal{P}(\frac{Q}{(2L)^{d}},\tau)\asymp_{L\to\infty}\exp\Big{[}-(2L)^{d}I( \frac{Q}{(2L)^{d}},\tau)\Big{]}, \tag{39}\]
which demonstrates the dependence of the heat distribution on the system size. And the fluctuation theorem of heat exchange Eq. (21) can also be formulated in terms of the large deviation rate function.
## IV Conclusion
Previously, the stochastic thermodynamics of systems with a few degrees of freedom has been studied extensively both in classical and quantum realms [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. However, much less is known about systems with many degrees of freedom. What new results the complexity of many degrees of freedom will bring to stochastic thermodynamics remains largely unexplored.
In this article, we extend previous studies of the stochastic thermodynamics of systems with a few degrees of freedom to a continuous field. We compute the heat statistics in the relaxation process of an exactly solvable model -- an elastic manifold whose underlying dynamics can be described by the Edwards-Wilkinson equation. By employing the Feynman-Kac approach, we calculate analytically the characteristic function of heat for any relaxation time. The analytical results of heat statistics have pedagogical value and may bring important insights to the understanding of thermodynamics in extreme nonequilibrium processes. For example, the cumulants of heat in such a system with many degrees of freedom require a spatial cutoff to avoid the ultra-violet divergence, which is a consequence of finite resolution. We also analyze the scaling behavior of the cumulants with time and the system size. In addition, the large deviation rate function of heat in the large size limit is analyzed.
This work can be regarded as an early step in the stochastic thermodynamics of continuous fields. More interesting problems remain to be explored such as the definitions for the thermodynamic quantities in every space-time point, the extension to nonlinear models, the work statistics in the presence of external driving and so on. Studies about these issues will be given in our future work.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No.
12147157, No. 11775001, and No. 11825501.
## Appendix A Derivation of Eq. (18)
Similar to the probability density distribution, the modified function \(\eta(h,t)\) can be written as the product of the imaginary part and the real part over all modes in the Fourier space
\[\eta(\{h_{\mathbf{q}}\},t)=\prod_{q_{i}\geq\pi/L}\eta_{\mathbf{q}}(h_{\mathbf{q}}^{R},t) \eta_{\mathbf{q}}(h_{\mathbf{q}}^{I},t). \tag{11}\]
The probability-density-like function \(\eta(h,t)\) follows the same time evolution as \(\mathcal{P}(h,t)\) in Eq. (13)
\[\frac{\partial\eta_{\mathbf{q}}(h_{\mathbf{q}}^{R,I},t)}{\partial t}=\frac{(2\pi)^{d} }{2\beta}\frac{\partial^{2}\eta_{\mathbf{q}}}{\partial(h_{\mathbf{q}}^{R,I})^{2}}+ \nu q^{2}\eta_{\mathbf{q}}+\nu q^{2}h_{\mathbf{q}}^{R,I}\frac{\partial\eta_{\mathbf{q}}}{ \partial h_{\mathbf{q}}^{R,I}}. \tag{12}\]
with the initial condition
\[\eta(h,0)=e^{-iuH_{S}(0)}\mathcal{P}(h,0). \tag{13}\]
Due to the quadratic nature of the EW equation, we assume the time-dependent solution \(\eta(h,t)\) takes a Gaussian form at any time
\[\eta_{\mathbf{q}}(h_{\mathbf{q}}^{R,I},t)=\sqrt{\frac{\beta^{\prime}\nu q^{2}}{\pi(2 \pi)^{2d}}}\exp\Big{[}-A(t)(h_{\mathbf{q}}^{R,I})^{2}+B(t)\Big{]}. \tag{14}\]
The coefficients are governed by the following ordinary differential equations
\[\dot{A}(t)=-\frac{2(2\pi)^{d}}{\beta}A^{2}(t)+2A(t)\nu q^{2}, \tag{15}\]
\[\dot{B}(t)=-\frac{(2\pi)^{d}}{\beta}A(t)+\nu q^{2}. \tag{16}\]
The initial condition Eq. (13) gives way to the initial values of the coefficients
\[A(0)=(\beta^{\prime}+iu)\nu\frac{1}{(2\pi)^{d}}q^{2}, \tag{17}\]
\[B(0)=0. \tag{18}\]
By solving the above equations we obtain
\[A(t) =\frac{1}{(2\pi)^{d}}\frac{e^{2\nu q^{2}t}\beta(u-i\beta^{\prime })\nu q^{2}}{(e^{2\nu q^{2}t}-1)u-i[\beta+(e^{2\nu q^{2}t}-1)\beta^{\prime}]}, \tag{19}\] \[B(t) =\nu q^{2}t+\frac{1}{2}\ln\left[\frac{i\beta}{u-i\beta^{\prime}+ i\beta+(i\beta^{\prime}-u)e^{2\nu q^{2}t}}\right]. \tag{20}\]
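The closed-form coefficient in Eq. (19) can be verified independently of the algebra (an illustrative numerical check, not part of the paper) by integrating the Riccati equation (15) from the initial condition (17) with a standard fourth-order Runge-Kutta loop on the complex-valued \(A(t)\):

```python
import numpy as np

# Illustrative check (not from the paper): integrate the Riccati equation (15) for A(t) from the
# initial condition (17) with a fixed-step RK4 loop and compare with the closed form (19).
# d = 1 here, so c2d denotes (2*pi)^d = 2*pi; the mode q and the argument u are example values.
nu, q = 1.0, 0.5
beta_p, beta = 4.0, 2.0
u = 0.3 + 0.2j                              # argument of the characteristic function (example value)
c2d = 2.0 * np.pi

def rhs(A):                                 # Eq. (15): dA/dt = -2 (2*pi)^d A^2 / beta + 2 nu q^2 A
    return -2.0 * c2d * A**2 / beta + 2.0 * nu * q**2 * A

def A_exact(t):                             # Eq. (19)
    e = np.exp(2.0 * nu * q**2 * t)
    return (e * beta * (u - 1j * beta_p) * nu * q**2
            / ((e - 1.0) * u - 1j * (beta + (e - 1.0) * beta_p))) / c2d

A = (beta_p + 1j * u) * nu * q**2 / c2d     # Eq. (17)
t, dt, T = 0.0, 1e-4, 2.0
while t < T:                                # classical RK4 step for the complex Riccati equation
    k1 = rhs(A); k2 = rhs(A + 0.5 * dt * k1); k3 = rhs(A + 0.5 * dt * k2); k4 = rhs(A + dt * k3)
    A += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

print(abs(A - A_exact(t)))                  # small: RK4 discretisation error only
```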
Substituting Eqs. (19) and (20) into Eq. (14), we arrive at
\[\eta(\{h_{\mathbf{q}}\},t) =\prod_{q_{i}\geq\pi/L}\eta_{\mathbf{q}}(h_{\mathbf{q}}^{R},t)\eta_{\mathbf{q }}(h_{\mathbf{q}}^{I},t)\] \[=\prod_{\mathbf{q}(q_{i}\geq\pi/L)}\frac{\beta^{\prime}\nu q^{2}}{ \pi(2\pi)^{d}}\frac{i\beta\exp(2\nu q^{2}\tau)}{u-i\beta^{\prime}+i\beta+(i \beta^{\prime}-u)\exp(2\nu q^{2}t)}\] \[\exp\bigg{\{}-\frac{1}{(2\pi)^{d}}\frac{\exp(2\nu q^{2}t)\beta(u -i\beta^{\prime})\nu q^{2}}{\Big{[}-1+\exp(2\nu q^{2}t)\Big{]}u-i\Big{[}\beta- \beta^{\prime}+\beta^{\prime}\exp(2\nu q^{2}t)\Big{]}}\Big{[}(h_{\mathbf{q}}^{R })^{2}+(h_{\mathbf{q}}^{I})^{2}\Big{]}\bigg{\}}. \tag{21}\]
Substituting it into Eq. (16), we obtain the characteristic function of heat Eq. (18) of the EW elastic manifold in the relaxation process. |
2302.10111 | Reflection, emission, and polarization properties of surfaces made of
hyperfine grains, and implications for the nature of primitive small bodies | There are various indications that the most primitive small bodies (P, D-type
asteroids, comets) have surfaces made of intimate mixtures of opaque minerals
and other components (silicates, carbonaceous compounds, etc.) in the form of
sub-micrometre-sized grains, smaller than the wavelength at which they are
observed, so-called hyperfine grains. Here, we investigate how the Vis-NIR-MIR
spectral and V-band polarimetric properties of surfaces made of hyperfine
grains are influenced by the relative abundance of such hyperfine materials,
having strongly different optical indexes. Mixtures of grains of olivine and
iron sulfide (or anthracite), as analogues of silicates and opaque minerals
present on small bodies, were prepared at different proportions. The
measurements reveal that these mixtures of hyperfine grains have spectral and
polarimetric Vis-NIR properties varying in strongly nonlinear ways. When
present at even a few percent, opaque components dominate the Vis-NIR spectral
and polarimetric properties, and mask the silicate bands at these wavelengths.
The Vis-NIR spectral slope ranges from red (positive slope), for pure opaque
material, to blue (negative slope) as the proportion of silicates increases,
which is reminiscent of the range of spectral slopes observed on P, D, X, C-
and B-types asteroids. The spectra of the darkest mixtures in the Vis-NIR
exhibit the absorption bands of Si-O in olivine around 10 µm in the MIR, which
is observed in emission for several small bodies. This work shows that both the
contrasted optical indexes of the components, and the dispersion or aggregation
(depending on their relative proportions) of their hyperfine grains, induce
different light scattering regimes in the Vis-NIR and MIR, as observed for
primitive small bodies. The optical separation of hyperfine grains seems to be
a major parameter controlling the optical properties of these objects. | Robin Sultana, Olivier Poch, Pierre Beck, Bernard Schmitt, Eric Quirico, Stefano Spadaccia, Lucas Patty, Antoine Pommerol, Alessandro Maturilli, JΓΆrn Helbert, Giulia Alemanno | 2023-02-20T17:15:17Z | http://arxiv.org/abs/2302.10111v1 | **This is the Accepted Manuscript of this published paper:**
## Abstract
Solar System small bodies were the first objects to accrete inside the protoplanetary disk, giving insights into its composition and structure. The P-/D-type asteroids are particularly interesting because of the similarity of their spectra, at visible and near infrared wavelengths (Vis-NIR), with cometary nuclei, suggesting that they are the most primitive types of small bodies. There are various indications that (1) their low albedo in the visible (Vis) and mid-infrared (MIR) wavelength ranges seems mainly controlled by the presence of opaque minerals (iron sulfides, Fe-Ni alloys etc.) (Quirico et al., 2016; Rousseau et al., 2018); and (2) their surfaces are made of intimate mixtures of these opaque minerals and other components
(silicates, carbonaceous compounds, etc.) in the form of sub-micrometre-sized grains, smaller than the wavelength at which they are observed, so-called "hyperfine" grains. Here, we investigate how the Vis-NIR-MIR (0.55-25 \(\upmu\)m) spectral and V-band (0.53 \(\upmu\)m) polarimetric properties of surfaces made of hyperfine grains are influenced by the relative abundance of such hyperfine materials, having strongly different optical indexes. Mixtures of grains of olivine and iron sulfide (or anthracite), as analogues of silicates and opaque minerals present on small bodies, were prepared at different proportions. The measurements reveal that these mixtures of hyperfine grains have spectral and polarimetric Vis-NIR properties varying in strongly nonlinear ways. When present at even a few percent, opaque components dominate the Vis-NIR spectral and polarimetric properties, and mask the silicate bands at these wavelengths. The Vis-NIR spectral slope ranges from red (positive slope), for pure opaque material, to blue (negative slope) as the proportion of silicates increases, which is reminiscent of the range of spectral slopes observed on P/D/X/C- and B-types asteroids. The spectra of the darkest mixtures in the Vis-NIR exhibit the absorption bands of Si-O in olivine around 10 \(\upmu\)m in the MIR, which is observed in emission for several small bodies. The samples studied here have macro- and micro-porosities lower than 78%, indicating that surfaces more compact than "fairy castle" hyperporous (80-99%) ones can also exhibit a blue spectral slope or a silicate signature at 10 \(\upmu\)m. Remarkably, some mixtures exhibit altogether a red spectral slope in the Vis-NIR, a 10-\(\upmu\)m feature in the MIR, and a V-band polarimetric phase curve similar (but not identical) to P-/D-type asteroids, reinforcing the hypothesis that these bodies are made of powdery mixtures of sub-micrometre-sized grains having contrasted optical indexes. This work shows that both the contrasted optical indexes of the components, and the dispersion or aggregation -depending on their relative proportions- of their hyperfine grains, induce different light scattering regimes in the Vis-NIR and MIR, as observed for primitive small bodies. The optical separation of hyperfine grains seems to be a major parameter controlling the optical properties of these objects.
## 1 Introduction
### Small bodies reflection, emission, and polarization properties
Our Solar System contains a variety of small body populations, which date from the first phases of its formation. A significant fraction of these objects did not accumulate enough mass, or not fast enough, to reach temperatures sufficient to induce partial or full differentiation. Hence, they hold key information about the materials and the conditions inside the
protoplanetary disk where they formed. During the early stage of planet formation, migrations of the giant gaseous planets should have disturbed the orbits of small bodies (Morbidelli et al., 2005; Tsiganis et al., 2005), which complicates the retrieval of the protoplanetary disk's compositional and thermal structure. Still, large optical surveys of small bodies revealed a gradient in composition, with an evolution in spectral types throughout the asteroid main belt, the dominant types being successively S, C and P/D-types with increasing heliocentric distance (Gradie and Tedesco, 1982; DeMeo and Carry, 2013).
The P/D-type asteroids, found in the main belt and among Jupiter Trojans, are particularly interesting because their visible to mid-infrared spectra are similar to comets, suggesting that they are the most primitive types of asteroids (Vernazza and Beck, 2017). Reflectance spectra in the visible and near infrared (Vis-NIR) of cometary nuclei and P/D-type asteroids both display similar red to flattish spectral slopes, lacking absorption features (Emery et al., 2011; Capaccioni et al., 2015). In the 3-\(\upmu\)m spectral region, absorption bands possibly due to NH\({}_{4}\)+ are present in P/D-types, and comet 67P/Churyumov-Gerasimenko (Poch et al., 2020). In the mid-infrared (MIR) range, emission spectra of cometary comae and P/D-type objects display a similar emissivity plateau from around 9 to 11 \(\upmu\)m caused by the fundamental mode of vibrations of Si-O in silicates (Emery et al., 2006; Vernazza et al., 2012). A similar but not identical and generally weaker emissivity feature is observed on C-type and on some X-type asteroids (Emery et al., 2011; Marchis et al., 2012; Vernazza et al., 2017), and is more contrasted for larger objects (Marchis et al., 2012). On the contrary, B-type asteroids such as Pallas, Phaethon and Bennu, do not exhibit an emissivity plateau around 10 \(\upmu\)m (Lim et al., 2005; McAdam et al., 2018; Lim et al., 2019; Hamilton et al., 2019). In term of albedo and Vis-NIR color, these C/X/B-type asteroids are part of the same cluster of main-belt asteroids (Beck and Poch, 2021). Their Vis-NIR spectral slope span from slightly red (X/C-type) to significantly blue (B/C-type), and is redder for larger objects (Beck and Poch, 2021). Each of these spectral characteristics have been independently attributed to the surface texture of the objects, smaller asteroids being covered by rocks or large dust particles, while larger ones are covered by finer particles (Marchis et al., 2012; Beck and Poch, 2021). This apparent correlation between Vis-NIR slope and 10-\(\upmu\)m feature is also observed for Trojan asteroids (D-type) (Emery et al., 2011). In addition, low-albedo asteroids exhibit differences in the degree of linear polarization of the visible light they scatter at different phase angles, i.e. their polarimetric phase curves. Absolute values of the minimum of polarization in the negative branch of the polarimetric phase curves (hereafter called P\({}_{\text{min}}\)) and inversion angles (hereafter called \(\alpha_{\text{inv}}\)) of F/P/D-type asteroids are generally smaller compared to the values found for
B/C/Ch-type asteroids (Belskaya et al., 2017, 2019) (except for P-types, for which \(\alpha_{\text{inv}}\) is similar to that of B-types). This has also been tentatively attributed to differences in the microstructure of their regolith, F/D-types having an optically more homogeneous microstructure (Belskaya et al., 2005; Bagnulo et al., 2015). Taken together, these observations suggest a possible correlation between the redness of the Vis-NIR spectral slope, the presence of a contrasted 10-\(\upmu\)m plateau, and the absolute values of the polarimetric parameters P\({}_{\text{min}}\) and possibly \(\alpha_{\text{inv}}\). However, to our knowledge, the way these polarimetric and spectral properties in the Vis-NIR and MIR vary for specific asteroidal surface analogues of controlled microstructure has never been tested, either numerically or experimentally.
These observations give rise to the following questions: Why are P/D-type asteroids spectrally similar to comets, in particular in the MIR, where the emissivity spectra of asteroidal surfaces are similar to those of the clouds of particles in comae? Among the P/D/X/C- and B-type asteroids, what causes the variations of the Vis polarization phase curve, the Vis-NIR spectral slope and the 10-\(\upmu\)m MIR feature? As suggested by their possible correlation, could the variations of spectral and polarimetric features be due to similar textural properties of these small bodies' surfaces?
We now have multiple measurements - of inter-planetary dust particles collected on Earth (Rietmeijer, 1993), of samples returned from comets (Price et al., 2010), or analysed in situ (Mannel et al., 2019) -, all indicating that the surface material of comets and possibly C-, P- and D-types asteroids (Vernazza et al., 2015) is made of agglomerates of individual sub-micrometric monomers. The dust agglomerates observed at comet 67P/Churyumov-Gerasimenko have different morphologies from compact (porosity \(<\)10%) to porous (10-95%) or fluffy (\(>95\%\)) particles (Guttler et al., 2019). As in the case of interplanetary dust particles (IDPs), micro-meteorites, or primitive meteorites matrices, the different constituents (silicates, opaque minerals, carbonaceous compounds etc.) are mixed at the sub-micrometric scale. As discussed and shown in Quirico et al. (2016) and Rousseau et al. (2018), the low albedo of these cosmo-materials from the Vis to the MIR wavelength range seems mainly controlled by the presence of opaque minerals (iron sulfides, Fe-Ni alloys etc.) rather than by other major components such as silicates and carbonaceous materials. Strikingly, the spectra of comets, P/D-type asteroids and Jupiter Trojans are perfectly matched by spectra of anhydrous chondritic porous IDPs (CP-IDPs) (see Figure 1 in Vernazza et al., 2015), which are predominantly made of aggregates of sub-micrometer-sized grains of silicates (\(\sim\)20-50 vol%), Fe-Ni sulfides (\(\leq 40\) vol%) and carbonaceous materials (\(\leq 40\) vol%), arranged in a fluffy microstructure (Alexander et al., 2007; Bradley, 2014). Therefore, CP-IDPs appear as convincing analogues of the surface of comets and primitive asteroids, but how the various
compositional and textural (grain size, porosity) parameters influence the surface optical properties is still an open question.
In the following paragraph, we review the current knowledge of the MIR, Vis-NIR and polarimetric properties of particulate surfaces, especially those made of grains smaller than 1 \(\upmu\)m and of dark or mixed bright/dark components.
### Interpretations of small bodies reflection and emission properties
Typical MIR spectra of silicate minerals are characterized by reststrahlen bands (corresponding to the Si-O fundamental modes at 9-11 \(\upmu\)m, and 18-28 \(\upmu\)m), a Christiansen feature (where the refractive index varies rapidly with wavelength, near 8 \(\upmu\)m for most silicates), and transparency features between the reststrahlen bands (Mustard and Glotch, 2020). A large number of studies have shown how grain size (Hunt and Vincent, 1968; Hunt and Logan, 1972; Arnold and Wagner, 1988; Moersch and Christensen, 1995; Mustard and Hays, 1997; Le Bras and Erard, 2003) and surface porosity (Salisbury and Eastes, 1985; Salisbury and Wald, 1992; Salisbury et al., 1994) influence these mid-infrared spectral bands and features. Most of these measurements showed that mid-infrared spectral features progressively vanish with decreasing grain size from a few hundred to a few micrometres. However, for grains smaller than 5 \(\upmu\)m, Hunt and Logan (1972) and Arnold and Wagner (1988) observed an increased spectral contrast around 10 \(\upmu\)m, with emission maxima. The work of Salisbury and Eastes (1985) shows that the spectral contrast of such small grains is very dependent on packing conditions, i.e. the surface porosity. Based on these laboratory measurements and on a Hapke-Mie hybrid radiative transfer model, Emery et al. (2006) proposed that the emission feature observed between 9 and 11 \(\upmu\)m for Trojan asteroids is due to very fine silicate grains either in a very porous surface structure, or embedded in a more transparent material hypothesized to behave as void-space, which could explain why the same feature is observed for comae. Following this work, several studies have measured reflectance spectra of samples made by suspending mineral or meteorite powders in infrared-transparent potassium bromide (KBr) powder to simulate void spaces in the MIR. These spectra have mid-infrared spectral features similar to those observed on Trojan and main-belt asteroids. These results were interpreted as a confirmation of Emery et al. (2006) suggestions, i.e. either the presence of IR-transparent salts (King et al., 2011; Yang et al., 2013; Izawa et al., 2021), or the high porosity of asteroid surfaces (Vernazza et al., 2012; Young et al., 2019; Martin et al., 2022), the latter being consistent with their thermal inertia and radar albedo (Vernazza et al.,
2012). The presence of clouds of grains in electrostatic levitation over the surface is also a possible explanation (Vernazza et al., 2012; Wang et al., 2016).
However, the mixtures reproducing the 10-\(\upmu\)m feature are those containing a relatively large fraction of IR-transparent KBr salt grains used to simulate void spaces, which do not behave exactly as such in the radiative transfer. Indeed, these KBr grains scatter the light and make the samples relatively bright in the Vis-NIR and MIR reflected light (equivalent to a low emissivity in the MIR according to the Kirchoff's law), which is inconsistent with observations of low-albedo asteroids and comets (which have a low reflectance in the Vis-NIR and a high emissivity in the MIR). Using a modified Hapke model, Yang et al. (2013) showed that the Vis-NIR and MIR spectra of Trojan asteroids can both be explained by fine grained silicates (1-5 wt%) and highly absorbing material (2-10 wt%) suspended in a transparent matrix of IR-transparent salts. However, such a high abundance of these peculiar salts on asteroids and comets remains to be proven (e.g. the ammoniated salts detected on Ceres and comet 67P surfaces are absorbent in the MIR), and alternative explanations may be possible.
The possible origins of the Vis-NIR spectral slopes of small bodies, and their variability with grain size, porosity, mixtures, space weathering etc. have been the subject of many studies. A spectral reddening is observed after grinding minerals and meteorites (Ross et al., 1969; Johnson and Fanale, 1973; Cloutis et al., 2018; Beck et al., 2021), and a bluing is observed when sub-micrometre-sized or micrometre-sized grains of opaque minerals are present, embedded in mixtures with brighter grains (Clark et al., 2008; Loeffler and Prince, 2022) and/or in highly porous structures (Poch et al., 2016; Schroder et al., 2021). Bluing or reddening are also observed after laser or ion irradiations simulating space weathering (Lantz et al., 2017; Matsuoka et al., 2020). The explanations for these changes of Vis-NIR spectral slope are not always well-established, and, apart from changes of composition (after grinding, or irradiation), the changes of micro-structure and/or the appearance of sub-micrometre-sized scatterers are suspected to induce these effects (Beck and Poch, 2021).
However, very few systematic laboratory spectral measurements of such sub-micrometre-sized materials have been made. In a previous work (Sultana et al., 2021), we explored the influence on Vis-NIR reflectance spectra of the presence of sub-micrometre-sized grains, which are smaller than the wavelength at which they are observed, so called _hyperfine grains_. In this previous study, we only considered weakly absorbing materials (k \(<\) 0.01), and we showed that hyperfine grains have lower contrast of absorption bands, and lower spectral slopes than larger \(\upmu\)m-size grains. We concluded that the presence of these hyperfine grains tends to uniformize the spectra of different material, driving them toward blue featureless
spectra (_spectral degeneracy_), especially when they form hyper-porous structures, where they tend to scatter light as smaller aggregates of a few sub-\(\upmu\)m grains.
In this work, with the goal of improving our knowledge of the causes explaining the optical properties of primitive low-albedo asteroids and comets, we investigate altogether the Vis-NIR and MIR spectral and polarimetric properties of surface mixtures of sub-micrometre-sized grains made of various proportions of strongly and weakly absorbing materials. Such mixtures are more relevant to low-albedo primitive bodies of the Solar system, and have some similarities with IDPs thought to come from these bodies. We obtained reflectance spectra in the wavelength range from 0.55 to 25 \(\upmu\)m, which can also be used to infer the emission properties of such samples, as well as polarization measurements at 0.53 \(\upmu\)m. We compare our laboratory results to observations and discuss the implications in terms of composition and texture for a few classes of Solar system small bodies.
## 2 Methodology
### Samples preparation
Our goal was to produce heterogeneous aggregates of sub-\(\upmu\)m grains made of two materials having contrasted optical indexes and mixed in different volume proportions, as _optical models_ of the surface of primitive small bodies. The first material was the silicate mineral olivine, one the most abundant minerals in primitive meteorites and IDPs (Bradley, 2014; Scott and Krot, 2014) and detected in cometary particles (Brownlee, 2014). We used a Mg-rich forsterite olivine (Fo = [Mg] / ([Mg]+[Fe]) \(\geq\) 94) purchased from Donghai Pellocci Crystal Products, China. For the second component, we chose iron sulfide and anthracite because they are absorbent over the Vis to MIR wavelength range, as the opaque minerals found in primitive meteorites and IDPs (mainly iron sulfides and Fe-Ni alloys, see Dai and Bradley, 2001; Quirico et al., 2016 and references herein). Moreover, we note that Fe-Ni sulfides are the second most common minerals in CP-IDPs after crystalline silicates (Dai and Bradley, 2001; Bradley, 2014). Iron sulfide was purchased from Alfa Aesar (ref. A15569.0B). X-ray diffraction of this powder indicates a composition made of 55 vol% troilite (FeS) and 45 vol% pyrrhotite (Fe\({}_{1\cdot\text{x}}\)S with 0 \(<\) x \(<\) 0.2). For simplicity, this sample will be labelled as "FeS" in the rest of the text. Anthracite was obtained from the Musee de la Mure (France). It is made of 91.9 % carbon, almost no oxygen atom, and is composed of graphite-like domains connected to less-organized polyaromatic matter (Albiniak et al., 1996). We stress that anthracite is not a
material representative of the carbonaceous compounds expected on primitive small bodies (as discussed in Quirico et al., 2016), but we have chosen it here as another _optical analogue_ of opaque minerals expected to be present on these bodies. The spectra of these endmember materials are presented in Figure 1 for hyperfine grains. Sulfide and anthracite display relatively flat spectra in the Vis-NIR and in the MIR. In the MIR, their spectra have a higher reflectance than olivine, and do not have strong spectral features (Fig. 1).
To synthesize hyperfine powders of different materials, we used the grinding protocol described in detail in Sultana et al. (2021) to obtain grain diameters below 1 \(\upmu\)m. Each sample is successively dry- (20 min) and wet-ground (150 min, in ethanol) several times using a Planetary Grinder Retsch(r) PM100 with progressively decreasing sizes of zirconium oxide (ZrO\({}_{2}\)) grinding balls. After grinding, the balls are separated from the sub-micrometre-sized powder by sieving, and the ethanol is evaporated. Images obtained with a Scanning Electron
Figure 1: Reflectance in the Vis-NIR and in the MIR of olivine, iron sulfide and anthracite powders of sub-micrometric grain size. Spectra of iron sulfide and anthracite are relatively flat and do not present any absorption feature in the Vis-NIR or in the MIR. Note that the olivine is very bright in the Vis-NIR, but completely dark in the MIR. Iron sulfide and anthracite are both dark in the Vis-NIR but they are significantly brighter than olivine in the MIR. The small peak at 4.25 \(\upmu\)m is related to gaseous CO\({}_{2}\).
Microscope (SEM) were used to measure the grain size distribution of the obtained powders. The mean and maximal grain sizes in the powders for each component are listed in Table 1.
Mixing the two components (either iron sulfide or anthracite with olivine) was achieved by adding the two powders in a mortar and mixing them by hand grinding with the pestle for about 10 min. Powders were weighed separately to target the desired volume fraction of the mixture, using a Sartorius Quintix 35-1S scale, with a precision of 10 \(\upmu\)g. To enable this preparation, the densities of the materials were measured using a pycnometer for the olivine (3.32 g.cm-3) and iron sulfide (4.82 g.cm-3) samples, and via the liquid displacement method for the anthracite (1.62 g.cm-3). To ensure that the materials were well mixed together, they were imaged using a SEM. Figure 2 shows images of olivine and iron sulfide mixtures. These images are obtained in backscattered electron mode, sensitive to the mean molecular mass of the probed area. As the FeS is composed of atoms heavier than olivine, it backscatters electrons more efficiently, and FeS grains appear brighter in the images, whereas the olivine appears in lighter grey tone. Figure 2 shows how the grains of iron sulfide are efficiently dispersed in a matrix of olivine grains. The FeS grains do not form large aggregate and they are all separated by several olivine grains.
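For a given target volume fraction, the masses to weigh follow directly from these densities. The snippet below is only an illustrative helper (it is not part of the paper, and the total mixture volume used in the example is an arbitrary value):

```python
# Illustrative helper (not from the paper): masses to weigh for a target olivine volume fraction,
# using the densities quoted above. The total mixture volume is an arbitrary example value.
DENSITY_G_CM3 = {"olivine": 3.32, "FeS": 4.82, "anthracite": 1.62}

def masses_for_mixture(frac_olivine, opaque="FeS", total_volume_cm3=0.5):
    """Masses (g) of olivine and of the opaque phase for a given olivine volume fraction."""
    m_olivine = frac_olivine * total_volume_cm3 * DENSITY_G_CM3["olivine"]
    m_opaque = (1.0 - frac_olivine) * total_volume_cm3 * DENSITY_G_CM3[opaque]
    return m_olivine, m_opaque

for frac in (0.99, 0.90, 0.50):
    m_ol, m_op = masses_for_mixture(frac, "FeS")
    print(f"{frac:.0%} olivine: {m_ol:.3f} g olivine + {m_op:.3f} g FeS")
```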
\begin{table}
\begin{tabular}{l l l l} \hline Sample & Olivine & Iron sulfide & Anthracite \\ \hline Mean grain size (\(\upmu\)m) & 0.69 & 0.30 & 0.33 \\ Standard deviation (\(\upmu\)m) & 0.47 & 0.28 & 0.35 \\ Median grain size (\(\upmu\)m) & 0.56 & 0.21 & 0.21 \\ Max. grain size (\(\upmu\)m) & 4.67 & 2.75 & 3.84 \\ \hline \end{tabular}
\end{table}
Table 1: Mean, standard deviation, median and maximal sizes of the grains in each hyperfine powder prepared with the grinding protocol developed in Sultana et al. (2021). The size distributions are shown in Supplementary Figure 3.
### Spectral and polarimetric measurements
Before each measurement, the powders were deposited into a sample holder (several millimetres thick and wide) using a spatula, and their surface was gently levelled to obtain a flat surface. Knowing the volume of the sample holders and the mass of powder introduced, we evaluated the samples porosity to range from 70% (for pure FeS powder) to 78% (for pure olivine powder). However, the porosity at the scale of the whole sample (several hundreds of \(\upmu\)m up to mm, i.e. the "macro-porosity") may not be relevant for the light scattering that occurs at nm-\(\upmu\)m scales. As seen on Figure 2, the sub-\(\upmu\)m grains are assembled in individual aggregates several-micrometres-large (the largest aggregates seen on these images are 10 to 30 \(\upmu\)m large), where they are in close contact with each other, and not in fluffy aggregates. Therefore, the bulk porosity (70-78%) might be mostly controlled by the void-spaces located between these multiple aggregates. More relevant for the light scattering is the surface rugosity and the sub-surface porosity at the nm-\(\upmu\)m scales accessible to the photons (i.e. the "micro-rugosity" and "micro-porosity"). The SEM images (Figure 2) indicate that the micro-porosity of the samples studied here is possibly around \(\sim\)50-70%, and definitely lower than that of fluffy aggregates obtained after sublimation of mixtures of water ice and mineral grains, measured in our previous study (see Figure 4 in Sultana et al., 2021; these samples had macro-porosity of 95-99%).
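The bulk porosity values quoted above follow from the filled mass, the holder volume and the grain densities. The short calculation below is illustrative only (the holder volume and filled masses are assumed example numbers chosen to reproduce the quoted 78% and 70%; only the grain densities are taken from the text):

```python
# Illustrative porosity estimate (not from the paper): bulk porosity from the filled mass, the
# holder volume and the grain density. Holder volume and masses below are assumed example values
# chosen to reproduce the quoted porosities; only the grain densities come from the text.
def bulk_porosity(mass_g, holder_volume_cm3, grain_density_g_cm3):
    return 1.0 - (mass_g / holder_volume_cm3) / grain_density_g_cm3

def mixture_grain_density(frac_olivine, rho_olivine=3.32, rho_opaque=4.82):
    """Volume-weighted grain density of an olivine/opaque mixture (g/cm^3)."""
    return frac_olivine * rho_olivine + (1.0 - frac_olivine) * rho_opaque

print(bulk_porosity(0.73, 1.0, 3.32))     # ~0.78 for a hypothetical 1 cm^3 holder of pure olivine
print(bulk_porosity(1.45, 1.0, 4.82))     # ~0.70 for a hypothetical 1 cm^3 holder of pure FeS
print(mixture_grain_density(0.90))        # ~3.47 g/cm^3 for a 90/10 olivine/FeS mixture
```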
Figure 2: Scanning Electron Microscope (SEM) images of mixtures of sub-\(\upmu\)m grains of olivine and iron sulfide at different volume proportions. On this figure, we can observe that the grains of iron sulfide (brighter grains) are well dispersed in a matrix of olivine grains. They do not form aggregates of several grains of the same materials.
_Vis-NIR spectroscopy_: Reflectance spectra in the Vis-NIR range (0.55-4.2 \(\mathrm{\SIUnitSymbolMicro m}\)) were obtained at IPAG with the spectro-gonio radiometer SHADOWS in standard mode (Potin et al., 2018; Sultana et al., 2021). The data were calibrated by dividing the sample signal by that of two reference targets under the same illumination-observation geometry. Vis-NIR spectra of each mixture and the pure materials are presented in Figure 3 and 4. Spectral resolution and sampling were similar to Sultana et al. (2021).
_Emissivity measurements_: Direct emissivity measurements from 5 to 17 \(\upmu\)m were performed at the Planetary Spectroscopy Laboratory (PSL; Maturilli et al., 2019) at DLR in Berlin. The emissivity spectra were obtained by heating the sample, poured into an aluminium sample holder, up to 673 K and measuring the emitted radiance with a Bruker Vertex 80V FT-IR spectrometer. The data were calibrated by dividing the sample radiance by the radiance emitted by a blast furnace slag, taken as a blackbody, under the same temperature and observation conditions.
_MIR reflectance:_ The emissivity of the mixtures was also studied by measuring the reflectance (\(r_{\lambda}\)) of the samples, and approximating the emissivity (\(e_{\lambda}\)) using Kirchhoff's thermal law: \(e_{\lambda}=1-r_{\lambda}\). These MIR reflectance spectra from 1 to 25 \(\upmu\)m were obtained at IPAG in Grenoble, using a Bruker Vertex 70V FT-IR spectrometer equipped with a biconical reflectance kit A513/QA. However, Kirchhoff's law is only valid for hemispherical reflectance, and not for biconical reflectance, so the resulting absolute emissivity and spectral contrast are not reliable. The reflectance measurements were performed at ambient pressure and under dry air. Since the reflectance kit was not designed to study small phase angles nor to illuminate the sample at normal incidence, we used a 45\({}^{\mathrm{o}}\) phase angle (\(\mathrm{i}=15^{\mathrm{o}}\), \(\mathrm{e}=30^{\mathrm{o}}\)).
_Polarimetric measurements:_ The polarimetric measurements were carried out on olivine, FeS and their mixtures with the POLarimeter for ICE Samples (POLICES) at the University of Bern (Poch et al., 2018; Spadaccia et al., 2022). A polarimeter (Hinds Dual PEM II/FS42-47) is placed above the sample at emission \(\mathrm{e}=0^{\mathrm{o}}\), and the sample is illuminated by a motorized arm holding a collimating head fed by an optical fiber connected to a 530 nm LED (Thorlabs M530F2). The beam is depolarized and creates a 15 mm light spot on the sample at normal incidence. In this configuration, the incidence angle is equal to the phase angle, and it was varied by rotating the motorized arm from \(\mathrm{i}=1^{\mathrm{o}}\) to \(\mathrm{i}=70^{\mathrm{o}}\). The linear polarization
measurements are the result of the averaging of four measurements performed by rotating the sample on the horizontal plane with 45\({}^{\circ}\) incremental steps. This azimuthal averaging mitigates geometry effects due to tilts and orientations of the sample surface.
## 3 Results
### 3.1 Vis-NIR spectra of mixtures of hyperfine opaque grains and olivine
Figure 3 and Figure 4 present the evolution of olivine/FeS and olivine/anthracite mixtures with decreasing volume fraction of olivine. For both mixtures, we present the reflectance spectra (Fig. 3a, Fig. 4a) and spectra normalized at 0.7 \(\upmu\)m (Fig. 3b, Fig. 4b). In both cases, we observe a general decrease of reflectance with increasing concentration of opaque grains. The olivine crystal-field absorption band at 1 \(\upmu\)m becomes indistinguishable when the amount of opaque grains is in excess of 1 vol% for anthracite and 5 vol% for FeS. Interestingly, the Vis-NIR spectral slope from 0.5 to 1.5 \(\upmu\)m is modified as the concentration of opaque increases: it becomes bluer for up to 10 vol% opaques, and redder for larger concentrations (Fig. 3) (by "bluer" or "redder", we mean that the relative spectral slope is getting more negative or positive, respectively, while the reflectance is decreasing at all wavelengths). At wavelengths larger than 1.5 \(\upmu\)m, the spectral slope remains dominated by the spectral properties of olivine, FeS or anthracite depending on their proportions.
Figure 3: Evolution of the reflectance spectra with increasing volume fraction of iron sulfide in the sample. Panel **(a)** shows the fast decrease of the reflectance level with increasing amount of opaque. In panel **(b)**, the spectra are normalized at 1.64 \(\mu\)m and vertically shifted for clarity, so that the modifications of spectral slope are more visible. With increasing fraction of opaque iron sulfide, the visible spectral slope becomes more and more blue, then neutral, and finally redder.
Spectral slopes were mostly calculated between 0.64 and 1.2 \(\upmu\)m and defined as per cent of reddening per 100 nm (\%.(100 nm)\({}^{-1}\)) according to Delsanti et al. (2001) and Fornasier et al. (2015):
\[slope(\lambda_{1},\lambda_{2})=\frac{R_{\lambda_{2}}-R_{\lambda_{1}}}{R_{ \lambda_{1}}\times(\lambda_{2}-\lambda_{1})}\times 10^{4}\]
where \(\lambda\) is the wavelength in nm, and \(R_{\lambda_{1}}\), \(R_{\lambda_{2}}\) are the absolute reflectances at \(\lambda_{1}\) and \(\lambda_{2}\).
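As an illustration of this definition, the following snippet computes the slope from a sampled reflectance spectrum. It is a minimal sketch rather than the authors' code; the file name, the two-column layout and the use of linear interpolation at the interval boundaries are assumptions.

```python
import numpy as np

def spectral_slope(wl_nm, refl, lam1=640.0, lam2=1200.0):
    """Spectral slope in %.(100 nm)^-1 over [lam1, lam2] (nm),
    following Delsanti et al. (2001) and Fornasier et al. (2015)."""
    r1 = np.interp(lam1, wl_nm, refl)  # reflectance at lambda_1
    r2 = np.interp(lam2, wl_nm, refl)  # reflectance at lambda_2
    return (r2 - r1) / (r1 * (lam2 - lam1)) * 1e4

# Hypothetical two-column ASCII spectrum (wavelength in nm, absolute reflectance)
wl_nm, refl = np.loadtxt("olivine_fes_mixture.txt", unpack=True)
print(spectral_slope(wl_nm, refl))                 # default [0.64; 1.2] um interval
print(spectral_slope(wl_nm, refl, 640.0, 1640.0))  # [0.64; 1.64] um for olivine-rich spectra
```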
The computed values of the spectral slope and reflectance at 0.7 \(\upmu\)m are provided in Table 2. For the highest fractions of olivine, the slope was computed on a different spectral interval (0.64-1.64 \(\upmu\)m) to account for the presence of the \(Fe^{2+}\) absorption band of olivine around 1 \(\upmu\)m.
Figure 4: Evolution of the reflectance spectra of hyperfine olivine powders with increasing fraction of anthracite in the sample. Panel **(a)** shows the fast decrease of the reflectance level with increasing amount of anthracite. In panel **(b)**, the spectra are normalized at 1.64 \(\upmu\)m and shifted vertically to better display the modifications of spectral slope. With increasing fraction of opaque anthracite, the visible spectral slope becomes more and more blue, and then redder.
Figure 5a shows that the Vis-NIR slope decreases very fast to negative values (blue slope) with the amount of opaque, goes through a minimum value of about -7 \%.(100 nm)\({}^{-1}\) for 10 vol% of opaque, then increases and switches from blue to red around 60 vol% of opaque material, before reaching the slope of the pure opaque material spectrum.
In Figure 5b we present the evolution of the reflectance with increasing amount of olivine in the mixture. The reflectance remains below 5 % for up to 50 vol% olivine, and it only exceeds 10 % for olivine concentrations larger than 80-90 vol% (Fig. 5b and Table 2).
\begin{table}
\begin{tabular}{c|c c|c c}
\textbf{VOLUME FRACTION} & \multicolumn{2}{c|}{\textbf{SPECTRAL SLOPE} (\%.(100 nm)\({}^{-1}\)), \([0.64;1.2]\,\mu m\)} & \multicolumn{2}{c}{\textbf{REFLECTANCE AT} \(0.7\,\mu m\) (\%)} \\
Olivine-Opaque (\%) & Olivine-FeS & Olivine-Anthracite & Olivine-FeS & Olivine-Anthracite \\ \hline
100-0 & -0.1* & -0.1* & 85.1 & 85.1 \\
99-1 & -1.2* & -4.7 & 75.6 & 35.7 \\
95-5 & -4.2* & -6.4 & 42.4 & 19 \\
90-10 & -7.0 & -6.8 & 13.3 & 10.8 \\
85-15 & -6.3 & - & 10.6 & - \\
80-20 & -5.6 & -5.1 & 10.6 & 5.8 \\
70-30 & -3.7 & - & 7.4 & - \\
60-40 & -2.9 & - & 6.3 & - \\
50-50 & -1.0 & -0.4 & 4.8 & 3.0 \\
40-60 & 0.2 & - & 4.3 & - \\
30-70 & 1.4 & - & 3.6 & - \\
20-80 & 3.1 & - & 3.2 & - \\
10-90 & 4.9 & - & 2.7 & - \\
1-99 & 7.3 & - & 2.3 & - \\
0-100 & 7.4 & 6.0 & 2.5 & 1.8 \\
\end{tabular}
\end{table}
Table 2: Values of spectral slopes and reflectance for the hyperfine olivine-opaque mixtures. *Due to the presence of the 1-\(\mu\)m absorption band of the olivine on these spectra, the interval used to compute the spectral slope was changed to [0.64;1.64] \(\mu\)m.
### Grain-size effect on olivine MIR spectra and effect of dilution in KBr
Figure 6 displays reflectance spectra in the mid-infrared for a suite of olivine samples with decreasing grain sizes, from hundreds of micrometres to sub-micrometre in diameter. The spectra first display an increase of the continuum reflectance in the inter-bands wavelength ranges, around 3.9 and 6.4 \(\mu\)m, as the grain size decreases. This is due to an increase of the volume density of scatterers at the scale of the light penetration depth with decreasing grain size, leading to more scattering of incident photons relative to absorption. In the case of the hyperfine grains, the reflectance is overall much lower in the mid-infrared range. While reflectance maxima are observed around 9.5, 10.5, 16, 19 and 23 \(\mu\)m for grain sizes \(>\) 25 \(\mu\)m (where the strong stretching and bending modes of SiO\({}_{4}\) are located), the spectra obtained on olivine grains with hyperfine sizes show an overall low reflectance (below 5 % from 6 to 25
Figure 5: Evolution of the spectral slope in the Vis-NIR **(a)** and of the reflectance at 0.6 \(\mu\)m **(b)** with increasing volume fraction of opaque grains in the samples. The slopes in the wavelength range ([0.64;1.64] \(\mu\)m) decrease rapidly with increasing volume fraction of opaques in the mixture for both types of mixture (FeS and anthracite). They reach minimal values of -7.0 and -6.8 \%.(100 nm)\({}^{\text{-1}}\) for concentrations of 10 vol% of FeS and anthracite respectively. Then, for larger volume fractions of opaques, the slope increases progressively until it reaches the value of the respective pure opaque material spectrum. The reflectance decreases very fast with the amount of opaque grains in the mixtures, until they occupy \(\sim\)10 vol% of the sample. It then decreases much more gradually.
\(\mathrm{\SIUnitSymbolMicro m}\)). One can observe that none of these spectra displays a spectral signature in the 10-\(\mathrm{\SIUnitSymbolMicro m}\) region similar to what is observed on small bodies (Vernazza and Beck, 2017).
However, when the olivine is diluted in KBr powder as shown in Figure 7, we observe a spectral feature close to 10 \(\mathrm{\SIUnitSymbolMicro m}\) in emissivity or "1-Reflectance" spectra. This feature consists of a double peak at 10 and 11 \(\mathrm{\SIUnitSymbolMicro m}\) together with a smaller one around 12 \(\mathrm{\SIUnitSymbolMicro m}\). One can observe that this feature is the sharpest for the lowest concentration of olivine in the KBr matrix, and that the contrast of the feature with respect to the continuum decreases when the concentration of olivine increases. This can be interpreted by the fact that when mixed with KBr, the olivine sub-\(\mathrm{\SIUnitSymbolMicro m}\) grains are de-agglomerated and dispersed in the sample, and more efficiently so for lower concentrations of olivine, so that they can absorb the light individually (and
Figure 6: Mid-infrared reflectance of olivine with decreasing grain size. The letters indicate the positions of different spectral features. OH: water and hydroxyl absorption bands; O/C: olivine overtone/combination absorption bands; C: Christiansen feature; TF: transparency feature; R: reststrahlen features. An absorbance spectrum of olivine, obtained by measuring the transmittance spectrum of a KBr pellet containing olivine powder, is shown for comparison. The reststrahlen features (R) tend to first increase with decreasing grain size, but for grains smaller than 1 \(\mathrm{\SIUnitSymbolMicro m}\), their reflectance level drops and the sample becomes very dark in the mid-infrared range. The small peak at 4.25 \(\mathrm{\SIUnitSymbolMicro m}\) is related to gaseous CO\({}_{2}\).
not collectively, as larger grains). The KBr crystals are non-absorbing and just scatter the light, inducing a higher reflectance in the inter-bands wavelength ranges (transparency windows) and a higher contrast. The spectral effects of mixing silicates with salts were studied in detail by Izawa et al. (2021). In emissivity, the contrast increases with decreasing concentration of olivine because, in the inter-bands, the thermal flux emitted from deeper layers is less efficiently transported to the surface. Therefore, a 10-\(\upmu\)m plateau can be produced in "1-Reflectance" or emissivity spectra if a material "brighter" in the MIR and less thermally conductive is mixed with the hyperfine olivine.
Additionally, these measurements show that, in all three cases, the contrast of the 10-\(\upmu\)m plateau in emissivity spectra is lower than that calculated from reflectance spectra using Kirchhoff's law. A likely explanation is that inverting biconical reflectance does not actually satisfy the conditions required for Kirchhoff's law (Hapke, 2012), and does not give proper contrasts (Salisbury et al., 1991). Another possibility would be the presence of a thermal gradient in the sample, with increased radiance from hotter deeper layers reaching the surface in the inter-bands wavelength ranges (transparency windows), although this effect should be stronger at \(\sim\)4-8 \(\upmu\)m (more transparent) than at 13 \(\upmu\)m, which is not always the case as seen in Figure 7.
Figure 7: Evolution of the emissivity contrast with olivine concentration in KBr measured directly in emissivity and in reflectance. The emissivity measurements presented here were performed at \(T=400^{\circ}\)C. Reflectance measurements were converted to emissivity using Kirchhoff's thermal law. One can observe that the lower the concentration of olivine in KBr, the higher the emissivity feature around 10 \(\upmu\)m. Moreover, the contrast of the 10-\(\mu\)m band is lower in direct emissivity measurements than calculated from Kirchhoff's law, probably due to the influence of thermal gradients in the samples measured in emissivity.
### Mid-infrared spectra of mixtures of hyperfine opaques and silicates
Fe-rich opaques (Fe, FeS, Fe-Ni alloys) are absorbing in the VIS, but their reflectance spectra in the MIR are significantly brighter than that of olivine, probably due to their lower MIR absorption index k (e.g., Querry 1985 and Sato 1984, for Fe and FeS\({}_{2}\) respectively) (Fig. 2). They could thus be considered as brightening agents in the MIR. In Figure 8 and Figure 9 we present the emissivity spectra calculated with Kirchhoff's law for the same mixtures of olivine and opaques studied in the VIS-NIR spectral range.
A relatively similar behaviour is observed for the two suites of mixtures (Figure 8 and Figure 9). While the spectra of the endmembers are relatively flat, the spectra of mixtures tend to show the presence of positive features in the 9 to 12 \(\upmu\)m spectral range. Of interest is the appearance of the triplet around 10, 11 and 12 \(\upmu\)m, where IR absorption bands are present in infrared transmission spectra of olivine. These observations reveal that mixing sub-\(\upmu\)m grains of olivine and opaques (opaques in the visible spectral range) is a mechanism to produce an emissivity feature.
Figure 8: Evolution of the MIR spectra of the olivine-iron sulfide mixtures. With decreasing volume fraction of olivine in the sample, we observe more pronounced silicate features, with a two-component plateau between 9 and 12 \(\upmu\)m. The original reflectance spectra are shown in Supplementary Figure 1.
Figure 9: Evolution of the MIR spectral shape of the olivine-anthracite mixtures with increasing volume fraction of opaque. With decreasing volume fraction of olivine in the sample, we observe more pronounced silicate features, with a two-component plateau between 9 and 12 \(\upmu\)m. The original reflectance spectra are shown in Supplementary Figure 2.
### 3.3 Linear polarization of visible light
Polarimetric phase curves were obtained for a selection of six olivine-FeS hyperfine mixtures as well as for the pure endmembers at the University of Bern. These polarimetric phase curves reveal the typical shape observed for particulate samples (in the laboratory or on planetary surfaces, Belskaya et al., 2015), with a "negative branch" at low phase angle (\(<\)30\({}^{\circ}\)) and a "positive branch" at higher phase angle (Fig. 10a, 10b). The behaviour and magnitude of polarization in the positive branch can be related to the contribution of photons that were reflected once at the surface of grains (Hapke, 2012). This can explain why the magnitude of polarization at high phase angle (\(>\) 50\({}^{\circ}\)) appears inversely correlated with the reflectance of the sample (Fig. 10c): the higher the reflectance, the more contribution from multiple scattering can be expected, which explains the lower polarization.
In the analysis of polarimetric phase curves, the inversion angle (the angle at which linear polarization is null) and the absolute value of the minimum (referred to as \(|\)P\({}_{\rm min}\)\(|\)) have been used to empirically characterize the polarization properties of a variety of materials compared to observations of Solar system surfaces (Dollfus et al., 1989). The two endmember materials, FeS and olivine, display the shallowest negative branches with \(|\)P\({}_{\rm min}\)\(|\) values of 0.25 % and 0.75 % respectively (Fig 10b, 10d). On the other hand, the pure FeS presents a polarization ratio of 25.4 % at 70\({}^{\circ}\) phase, and the pure olivine is the least polarizing material (with a polarization ratio of a few percent only at 70\({}^{\circ}\) phase).
The results obtained on the mixtures reveal a contrasted behaviour between the negative and positive branch of polarization (Fig. 10a,b). Reflected light from the pure iron sulfide is strongly polarized at high phase angle (25.4 %), but the strongest polarization (here 29 %, always occurring at 70\({}^{\circ}\), the highest phase angle we measured) is obtained for the mixture with a small fraction of olivine (10 vol% olivine) (Fig. 10a and 10c). Interestingly, the mixtures containing 1 to 20 vol% olivine have both stronger 530-nm polarization at 70\({}^{\circ}\) (Figure 10a) and redder Vis-NIR reflectance spectral slopes (Figure 5a) than the pure FeS. This could be explained by the fact that olivine grains scatter light more efficiently than FeS grains, so isolated sub-\(\upmu\)m olivine grains at the surface of the sample act as optical pits, scattering light at sub-\(\upmu\)m wavelengths onto more absorbing FeS interfaces than when they are absent, decreasing the reflectance and increasing the positive polarization at these wavelengths. Then, with increasing olivine fraction, the maximum value of polarization decreases monotonically with the decreasing fraction of FeS in the mixture, and is anti-correlated with the visible reflectance.
The negative branch shows a highly non-linear behaviour with the opaque fraction in the mixture (Fig. 10b). The value of \(|\)P\({}_{\rm min}|\) is the strongest for the lowest fraction of FeS (P\({}_{\rm min}\) = -2.5 % for 10 vol.% of FeS) and then progressively decreases (in absolute value) toward the value for the pure FeS (Fig. 10c). In the case of the inversion angle (Fig. 10e), it decreases from around 25\({}^{\circ}\) for pure FeS to about 21\({}^{\circ}\) for FeS:Olivine 90:10 vol%, then increases to its highest value for FeS:Olivine 80:20 vol%, and then decreases progressively toward the value for pure olivine (about 15\({}^{\circ}\)). These non-linear behaviours were previously observed for several types of mixtures by Shkuratov (1987), Shkuratov et al. (1992), and Spadaccia et al. (2022).
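To make these parameters concrete, the sketch below shows one simple way of estimating \(|P_{\rm min}|\) and the inversion angle from a sampled phase curve, by interpolating the data and locating the minimum and the zero crossing. It is an illustrative procedure under assumed units (phase angle in degrees, polarization in per cent), not necessarily the method used for Figure 10, and the data values are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def negative_branch_parameters(phase_deg, pol_percent):
    """Estimate |Pmin| and the inversion angle (deg) from a polarimetric phase curve."""
    cs = CubicSpline(phase_deg, pol_percent)
    fine = np.linspace(phase_deg.min(), phase_deg.max(), 2000)
    p = cs(fine)
    p_min = p.min()  # most negative polarization in the negative branch
    # inversion angle: first crossing from negative to positive polarization
    idx = np.where((p[:-1] < 0) & (p[1:] >= 0))[0]
    alpha_inv = fine[idx[0]] if idx.size else np.nan
    return abs(p_min), alpha_inv

# Invented phase curve, for illustration only
phase = np.array([1, 5, 10, 15, 20, 25, 30, 40, 50, 60, 70], dtype=float)
pol = np.array([-0.3, -1.5, -2.3, -2.5, -2.0, -1.0, 0.5, 4.0, 9.0, 16.0, 24.0])
print(negative_branch_parameters(phase, pol))  # -> (|Pmin|, inversion angle)
```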
Figure 10: Polarimetric phase curves obtained for the olivine-FeS mixtures (A). The bottom left panel (B) focuses on the negative branch, and values derived for \(P\), \(D\), \(B\), \(C\) and \(Ch\) asteroids are taken from Belskaya et al. (2017). The three right panels show the parameters calculated from the polarimetric phase curves, also compared to asteroid data from Belskaya et al. (2017). The equivalent albedo was calculated using the law derived by Beck et al. (2021).
## 4 Discussion
### 4.1 Rayleigh-like scattering as an origin of VIS spectral bluing
As described in the previous section and presented in Figure 3 and Figure 4, the spectra of mixtures of hyperfine grains of olivine and opaque material change with their volume proportions. When only few percent of opaques are present, the reflectance strongly decreases, the Fe\({}^{2+}\) absorption band at 1-\(\upmu\)m of olivine disappears, and the VIS spectral slope is bluing in a relative sense (by "bluing", we mean that the relative spectral slope is getting more negative while the reflectance is decreasing at all wavelengths).
The influence of very fine opaques on reflectance spectra has been particularly studied in the framework of space weathering (Hapke, 2001; Noble et al., 2007; Pieters and Noble, 2016). The incorporation of \(<25\) nm opaques within larger translucent grains (intra-mixture) induces a fast decrease of reflectance, associated with a reddening of the spectra (Hapke, 2001; Noble et al., 2007). The incorporation of 25-200 nm opaques induces a reddening or a bluing depending on the concentration (Noble et al., 2007). Our mixtures with \(\sim\)300 nm hyperfine opaque grains (inter-mixture) present the same evolution of the reflectance, with an efficient darkening of the samples, but, on the contrary, only a bluing of the spectra. The spectral slope variation trends reported in Figure 3 and Figure 4 for two different types of opaques are similar, indicating that this bluing phenomenon is independent of the composition of the opaque material, so a physical cause is to be sought. Note that the two opaque grains used here are of similar sizes (cf. Table 1, 0.30 \(\upmu\)m for FeS and 0.33 \(\upmu\)m for anthracite).
To explain the observation of a blue peak in the spectra of Saturn's moons, Clark et al. (2008) measured reflectance spectra of mixtures of ice with 0.2 \(\upmu\)m diameter particles of carbon black. Clark et al. (2008) showed that these mixtures display a blue peak around 0.5 \(\upmu\)m increasing in intensity with the amount of the contaminant, then decreasing after a threshold of concentration. This study also indicated that this peak was not dependent on the contaminant spectrum, but that it was related to light scattering in a Rayleigh regime.
Following Clark et al. (2008), we propose that in our samples the hyperfine grains of opaques, well dispersed in a bright transparent matrix of silicate grains, can scatter light following a Rayleigh-like regime, leading to a bluing of the spectra; shorter wavelengths are more efficiently scattered than longer wavelengths. In order to test this interpretation, the theoretical approach of Brown (2014) was used, where the author calculated the parameter
space in terms of optical constant and size parameter where spectral bluing should occur. We reproduced the map of maximum variation of the single scattering albedo \(\varpi\) as a function of the size parameter \(X\) and the imaginary part of the optical index \(k\), to assess whether our grains are situated in the region of expected bluing in the \(X\)-\(k\) space. To do so, the optical indexes of olivine, anthracite and iron sulfide are needed. For the first two materials, we respectively used the indexes from Zeidler et al. (2011) and Blokh and Burak (1972). For iron sulfide, as there are no optical indexes available in the Vis-NIR for either pyrrhotite or troilite, we decided to approximate them by the optical indexes of metallic iron, using the indexes from Querry (1985).
With all the optical indexes and the mean grain size of our hyperfine powders, we could compute the first derivative of the Rayleigh approximation of the single scattering albedo as established in Brown (2014):
\[\frac{d\varpi}{dX}=X^{2}\]
The results of this computation are shown in Figure 11. For each material, we computed the size parameter \(X\) and the first derivative of the single scattering albedo at 0.7 \(\upmu\)m. In Figure 11, we observe that olivine is outside of the bluing region, but that the anthracite and the iron sulfide are very close to it in the \(X\)-\(k\) space. If we consider their size distributions (displayed as error bars on Fig. 11), it appears that both types of opaque grains cross the region of spectral bluing. This confirms that the strong blue slope appearing in our mixtures takes its origin in the light scattered by absorbing and optically isolated hyperfine grains.
To assess whether olivine grains act as a non-absorbing matrix in the mixture, we computed the ratio of the absorption (\(Q_{abs}\)) and scattering (\(Q_{sca}\)) cross sections in the same _X-k_ space for the three materials (Figure 12). To this aim, we used Mie theory and a Python implementation coded by Sumlin et al. (2018). In Figure 12 we position the grain size distributions of olivine, iron sulfide and anthracite in the _X-k_ space, indicating the regions where grains absorb more than they scatter light. Olivine grains are situated in a region where light is more scattered than absorbed, whereas the grains of anthracite and iron sulfide are in an absorbing region. This suggests that, in the visible range, light is absorbed following the Rayleigh regime by the absorbing grains of opaques, and that the olivine acts as a matrix of transparent material.
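For reference, a computation of this kind can be sketched with the open-source PyMieScatt package of Sumlin et al. (2018), which returns Mie efficiencies for a homogeneous sphere. The complex refractive indices and grain diameters below are rough placeholder values, not the Zeidler et al. (2011), Blokh and Burak (1972) or Querry (1985) data actually used, and the sketch only illustrates the size parameter and the Qsca/Qabs ratio, not the full bluing criterion of Brown (2014).

```python
import numpy as np
import PyMieScatt as ps  # Sumlin et al. (2018)

wavelength_nm = 700.0  # evaluation wavelength, 0.7 um

# Placeholder complex refractive indices m = n + ik and grain diameters (nm); assumptions only
materials = {
    "olivine":        (1.63 + 1e-4j, 800.0),
    "iron (for FeS)": (2.9 + 3.0j, 300.0),
    "anthracite":     (2.0 + 0.5j, 330.0),
}

for name, (m, d_nm) in materials.items():
    x = np.pi * d_nm / wavelength_nm  # size parameter X
    qext, qsca, qabs, g, qpr, qback, qratio = ps.MieQ(m, wavelength_nm, d_nm)
    print(f"{name}: X = {x:.2f}, Qsca/Qabs = {qsca / qabs:.2f}")
```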
The presence of the olivine grains is nonetheless essential for the bluing phenomenon to be observed, because they allow the separation of the individual hyperfine opaque grains
Figure 11: Evolution of the spectral Rayleigh bluing region in the X-k space following the work of Brown (2014) for optical indexes of olivine (a), iron (b) and anthracite (c). The figure indicates that the spectral bluing is restricted to a well-delimited curved region in the X-k space (higher values of \(\frac{d\varpi}{dX}\) mean more bluing). The mean grain size and the range covered by the size distribution are indicated for each sample. Some of the grains present in the powders of the two opaque materials cross the bluing region because of the size distribution (b, c), indicating that these grains interact with light following the Rayleigh regime. On the contrary, olivine grains are outside of the bluing region.
from each other (see Fig. 2). In the pure opaque powder, the optical scatterers' size may be larger than the physical size of the grains, and may correspond to the size of agglomerates rather than individual grains (this was observed in Sultana et al. (2021) for bright materials). The mixture with olivine grains allows the de-agglomeration of the opaque grains, which can then scatter the light individually and produce the spectral bluing.
The fast disappearance of the olivine Fe\({}^{2+}\) absorption band at 1 \(\upmu\)m with increasing proportion of opaques can be explained by the reduction of the optical path lengths through the olivine grains, induced by the presence of the absorbing opaque grains. Even for the small grain sizes investigated here, the single scattering albedo of the opaque grains is low, and the encounter between a photon and an opaque grain will lead to its extinction.
Figure 12: Ratio between Mie scattering and absorption efficiency in the X-k space at a wavelength of 0.7 \(\upmu\)m. These plots indicate the region of this X-k space where light scattering is predominant during interaction of light with the grains. We placed our grains in this X-k space. It appears that anthracite and iron sulfide are absorbers whereas the olivine is mostly scattering light.
### 4.2 Polarimetry of mixtures of hyperfine grains
The properties of the polarimetric phase curves (inversion angle \(\alpha_{inv}\), \(P_{min}\), slope at inversion \(h\)) for powders of minerals and meteorites of various grain sizes and albedos have been studied historically by Dollfus, Shkuratov and colleagues. Based on a P\({}_{\rm min}\)-\(\alpha_{\rm inv}\) diagram, Dollfus et al. (1989) proposed that most asteroids are covered by a coarser regolith (\(\sim\)30-300 \(\upmu\)m) compared to the lunar surface. Based on the same diagram, the low inversion angles recently measured for F and D-type asteroids (Belskaya et al., 2005; Bagnulo et al., 2016) would suggest that their surfaces are covered with "bare rocks". However, their peculiar polarimetric properties may also be explained by their microstructure. Indeed, laboratory measurements performed by Shkuratov (1987) have shown that mixtures of powders having grains \(\leq\) 1 \(\upmu\)m and very contrasted albedos exhibit an amplified negative branch of polarization compared to powders of the single components. Other measurements reported in Belskaya et al. (2005) demonstrate that adding small amounts of a bright powder (e.g., MgO) to a dark powder (e.g., pure carbon soot) noticeably increases P\({}_{\rm min}\) and \(\alpha_{\rm inv}\), while producing a very faint albedo increase. Interestingly, the polarimetric slope h is recognised to be a good indicator of the albedo, except for very dark surfaces. Laboratory measurements showed that the empirical relationship between h and albedo began to "saturate" for very dark surfaces (albedos of about 0.06 or lower) made of mixtures of very small grains (\(\leq\) 1 \(\upmu\)m), principally dark ones mixed with a small portion of bright ones. Zellner et al. (1977) first reported that h remains approximately constant when the albedo of mixtures decreases. For darker mixtures, Shkuratov et al. (1992) reported that as the albedo decreases, h and P\({}_{\rm min}\) reach a maximum before decreasing, and \(\alpha_{\rm inv}\) also decreases. This "saturation" effect is apparent in the observations of several F-type and D-type asteroids (Belskaya et al., 2005; Cellino et al., 2015; Bagnulo et al., 2016), suggesting that they have a peculiar surface structure at the (sub)-micrometric scale compared to other asteroids.
Our experiments with mixtures using small bodies analogue constituents are in agreement with earlier findings, that the intimate mixture of materials with strongly contrasted absorption properties induces an amplified polarization in the negative branch (Fig. 10b). We also observe the saturation effect described by Shkuratov et al. (1992). There is a maximum of \(|\)P\({}_{\rm min}|\) for olivine contents in the 90-50 % range, followed by a progressive decrease of \(|\)P\({}_{\rm min}|\) for higher FeS content (Fig. 10c). Remarkably, the introduction of a small fraction of FeS in olivine changes drastically the negative branch, with a high value of \(|\)P\({}_{\rm min}|\) and a lowering of the inversion angle. The reflectance at 0.7 \(\upmu\)m is decreased from around 0.85 to 0.13 from pure
olivine to the 90:10 % mixture, showing that the FeS fraction has a strong influence on the radiative transfer, which explains the major changes in polarimetric properties. With further increase in the FeS fraction (50-50 %), the inversion angle increases and then we reach the "saturation region" where \(|\mathrm{P_{min}}|\) and \(\alpha_{\rm inv}\) decrease with increasing FeS content.
Inspection of the polarimetric parameters for small bodies of different types seems to reveal a similar saturation behaviour. As the albedo decreases from E-types toward L-types, the \(|\mathrm{P_{min}}|\) values increase progressively (Belskaya et al., 2017). The highest values of \(|\mathrm{P_{min}}|\) are encountered for the Ch spectral types, which experienced significant aqueous alteration, and are likely chondrule-bearing given their spectral connection to CM chondrites (Vernazza et al., 2016). Then \(|\mathrm{P_{min}}|\) values seem to decrease when considering C, P, D and F-type asteroids that have slightly lower albedos. Interestingly, these spectral types land nicely on the trend defined by our mixtures in the \(\mathrm{p_{v}}\) vs \(|\mathrm{P_{min}}|\) diagram (Fig. 10d), in the locations defined by FeS-rich mixtures (80 or 90 % FeS). In the case of the P and D-types, the curves we obtained for the 90 % FeS mixture also seem to match the inversion angle.
While the FeS content is high in the 90 % FeS mixture, and Fe/Si and S/Si are in excess of a solar-like composition, we propose that the combination of texture (hyperfine grains) and the cohabitation of weakly and strongly absorbing constituents explains the polarimetric properties of P and D-type small bodies. These constituents may be various types of silicates, opaque minerals (iron sulfides, Fe-Ni alloys, oxides etc.) and organic matter. We can also remark that pure iron sulfides show polarimetric properties that differ from typical values for expected-to-be-metallic M-type asteroids (Belskaya et al., 2022), but also show a quite different visible reflectance. This suggests that the surfaces of metallic asteroids are not covered only by metallic fines, but by a coarser-grained regolith, possibly silicate-bearing.
### 4.3 B-type asteroids and Rayleigh-like scattering
Objects belonging to the B-type asteroid population are by definition objects with a negative spectral slope. A now famous example of a B-type object is the near-Earth asteroid (101955) Bennu (the target of the OSIRIS-REx mission), which displays a blue slope in the Vis-NIR (Clark et al., 2011; Hamilton et al., 2019) and lacks any absorption band until 2.7 \(\mathrm{\SIUnitSymbolMicro m}\), where an absorption band related to hydrated minerals can be seen (Hamilton et al., 2019). Other notable examples of B-types are the asteroids (2) Pallas and (3200) Phaethon (Kareta et al., 2018). Rayleigh-like scattering is one of the possibilities that have been formulated to
explain the peculiar B-type spectra (Beck and Poch, 2021), following experiments designed to explain the spectral bluing of selected crater ejecta on (1) Ceres (Schroder et al., 2021).
Besides being spectrally blue, the spectra of B-type asteroids present a convex shape, reminiscent of the Rayleigh-like scattering observed for some of our samples. In Figure 13a we plot the normalized spectra of these three B-type asteroids against the most spectrally similar mixtures of olivine and iron sulfide that we prepared. Pallas and Phaethon are spectrally bluer than Bennu, which would be consistent with a higher volume fraction of opaque minerals on Bennu (but other explanations are possible: this could also be due to different constituents having different optical indexes, or to different rock porosities).
However, a proportion of about 40 to 60 vol% of opaque minerals on Bennu is significantly higher than what is found for proposed spectral analogues of Bennu. These analogues include CM and CR chondrites (Hamilton et al., 2022), whose FeS fraction is
Figure 13: Comparison of normalized Vis-NIR reflectance spectra of asteroids with some of the mixtures of sub-\(\mu\)m grains. **(a)** Vis-NIR spectra of three B-type asteroids (101955) Bennu (Hamilton et al., 2019), (2) Pallas (DeMeo et al., 2009) and (3200) Phaethon (Kareta et al., 2018) and the most similar ones in terms of spectral shape measured on Olivine-FeS sub-\(\mu\)m mixtures, all normalized at 1.4 \(\mu\)m. Pallas and Phaethon are spectrally bluer than Bennu, which would be consistent with a lower volume fraction of opaque materials on their surfaces (but other explanations, for example differences of constituents or porosity, are also possible). **(b)** Vis-NIR spectra of the D-type Jupiter Trojan (624) Hektor (Emery and Brown, 2003) and the most similar ones in terms of spectral shape measured on Olivine-FeS and Olivine-Anthracite sub-\(\mu\)m mixtures, all normalized at 2.2 \(\mu\)m. The spectral shape varies strongly depending on the nature of the opaque material, and only the mixtures containing very high proportions of opaques have spectral slopes closest to D-type asteroids. The spectral slope of D-types may be explained by the presence of other constituents, possibly having different size distributions, and/or different micro-porosity, and/or by space weathering of their surface.
typically of the order of several to 10 vol% (Howard et al., 2011). Nevertheless, it is well known that the reflectance spectra of intimate binary mixtures are extremely dependent on the relative grain size of the two constituents (see for instance Pommerol and Schmitt, 2008). Mixtures of hyperfine opaques with larger-grained silicates may produce a spectral bluing, together with a darkening, for a much lower fraction of opaques. We should also remark that while spectral analogues of Bennu may be found in databases, spectra obtained on textural analogues are scarce if not absent in the literature. Based on orbital observations, the surface of Bennu resembles a rubble pile that seems to be covered by angular rocks rather than fine-grained regolith. However, mid-infrared observations suggest that these rocks are substantially porous (Rozitis et al., 2020). As a consequence, meteorites, which need some strength to survive atmospheric entry, do not seem to provide good textural analogues. Our mixtures are unconsolidated aggregates of hyperfine grains, but they show that the presence of dispersed hyperfine-grained opaques in these porous rocks could provide an explanation for the blue slope of Bennu. We note that consolidation of particulate media can induce darkening (Schroder et al., 2017), and possibly other spectral changes, which would need to be studied further.
### 4.4 Hyperfine grains and MIR emissivity features
In the MIR spectral range, silicates present strong optical absorptions related to Si-O vibrations (stretching around 10 \(\upmu\)m and bending at longer wavelengths). As a consequence, silicate absorption bands of powders with "large" grain sizes (\(>\) 20 \(\upmu\)m) observed in this spectral range are often an interplay of surface and volume scattering, as is usually the case for strong absorption bands (Hapke, 2012). Also, grain size will not only change the intensity of the spectral features (such as reststrahlen bands, transparency bands, and the Christiansen feature), but also their position (Salisbury and Wald, 1991; Mustard and Hays, 1997).
When the grain size is decreased below the wavelength, the scattering efficiencies of grains decrease, as well as their single scattering albedo. Consequently, photons penetrate deeper into the samples and have an increased likelihood of being absorbed before exiting the sample toward the observer. As studied experimentally in Mustard and Hays (1997), and confirmed here for even smaller grains, decreasing grain size below the wavelength leads to an overall flat and very low reflectance of olivine in the mid-infrared spectral range. In the case of our hyperfine olivine powder, the reflectance is below 3 % above 6 \(\upmu\)m.
In the case of the hyperfine powders of "opaques" studied here, iron sulfide and anthracite, the behaviour should be virtually identical to that of olivine. If the grain size is small enough, we might also expect a Rayleigh-like behaviour for individual grains, leading to a very low reflectance of the sample. However, the reflectance of our pure opaques is not as low as that of the hyperfine olivine. This is possibly explained by the fact that the critical grain diameter, the diameter for which a powder shows a strong decrease of reflectance (Mustard and Hays, 1997), depends on the refractive index of the material. Another important point is that we are studying particulate samples, and that particles may behave collectively. A photon of 10-\(\upmu\)m wavelength may sense a continuous medium (i.e. effective medium theory) rather than a "cloud" of isolated hyperfine grains.
When hyperfine olivine is mixed with KBr, it was found here that the resulting reflectance spectra show pronounced spectral features of olivine, which would result in a strong 10-\(\upmu\)m emissivity feature according to Kirchhoff's law. This effect can be explained by the fact that the large KBr grains disperse and isolate the hyperfine olivine grains and, being non-absorbing, scatter light, enabling photons to escape the sample. Because some of these escaping photons will have interacted with olivine grains, the olivine signature is imprinted on the measured signal. We should then remark that using KBr powder to simulate porosity is not a valid approach, since, while not absorbing, it still strongly influences radiative transfer through scattering. In order to simulate porosity with KBr, one needs to prepare the sample as a pressed pellet in order to minimize scattering by the salt.
We propose that the effect of mixing olivine with sulfide and anthracite is somewhat similar to that of mixing with KBr powder, but of smaller magnitude. Moreover, the sub-micrometre-sized olivine grains are de-agglomerated and isolated from each other when mixed with opaque grains, as in a "cloud" of isolated hyperfine olivine grains. The opaque grains enable photons to escape the sample, and some of those will have interacted with olivine grains in the mixture. As a consequence, the obtained spectrum is reminiscent of an olivine absorption spectrum, as seen in Figure 8.
measured reflectance is the hemispherical reflectance (Hapke, 2012). Here, we have measured the biconical reflectance, so band positions should be the same, but contrasts (absolute and relative) can be different compared to emissivity spectra (Salisbury et al., 1991). We can observe a fair match of the 10-\(\upmu\)m feature between the observations and the mixtures containing between 10 and 40 vol% olivine. We can however notice that the emission feature is broader for the astronomical observations than for these mixtures. This difference is likely due to the presence of amorphous silicates (see for instance models of ejected dust from Tempel-1 by Gicquel et al., 2012) and crystalline pyroxene (present in anhydrous CP-IDPs, see Brunetto et al., 2011) on P/D-type asteroids and in cometary dust, which are absent in our samples.
The samples studied here have macro- and micro-porosities lower than 78%, more compact than "fairy castle" hyperporous (80-99%) ones, but still exhibiting a silicate signature at 10 \(\upmu\)m as observed in MIR spectra of P/D-type asteroids and cometary dust tails. We propose that, in both cases, an optical isolation of olivine grains occurred, either by vacuum in the case of cometary dust tails, or by opaque grains and vacuum in the case of P/D-type asteroids. We should note that the strength of the emissivity feature we measured in the laboratory is at the percent level. As seen in Figure 14, this is of the order of what is observed for some D-type objects like (515) Athalia (~3-4 %, Licandro et al., 2012), but not as high as the strength of the emissivity feature observed for some Trojans (of the order of 10 %, Emery et al., 2006). The relative grains sizes of the opaque and silicate materials may have an impact on the intensity of the feature. In addition, while high surface porosity does not seem to be required to produce an emissivity feature, it may have an impact on its magnitude.
### On the origin of red spectral slopes on primitive small bodies
The mixtures containing 10 to 40 vol% olivine are the ones exhibiting a 10-\(\upmu\)m feature (Figure 14) and also a red (positive) Vis-NIR spectral slope (Figure 5a, Figure 13b). However, as for the 10-\(\upmu\)m feature, the shape and magnitude of the Vis-NIR spectral slope are different from the ones of P/D-type asteroids (Figure 13b). Only mixtures with excessively high proportions of opaque grains (80-100 vol%) reproduce the magnitude of the spectral slope. These proportions are unrealistic (CP-IDPs contain \(\leq\) 40 vol% Fe-Ni sulfides, Bradley, 2014), therefore other parameters control the Vis-NIR spectral slope of these bodies. These parameters may be all or some of the following: other types of spectrally red and opaque materials, space weathering reddening grain surfaces, higher micro-porosity, and different grain size distributions than the samples studied here.
In addition, the mixtures containing more realistic proportions of opaque grains do not show the 1-\(\upmu\)m absorption band of olivine (Figure 3, Figure 4). Following these measurements, it is not surprising that the 1- and 2-\(\upmu\)m absorption bands due to silicates are absent from the spectra of P/D-type asteroids and comets.
Figure 14: Mid-infrared emission spectra of the coma of 17P/Holmes from Reach et al. (2010) (panel a), of surfaces of a comet (9P/Tempel 1, from Kelley et al. (2017)) and of C and D-type asteroids (the spectra are from Emery et al. (2006) for the D-type (624) Hektor, and Licandro et al. (2012) for the C-type (65) Cybele and (515) Athalia) (panel b), compared to "1-Reflectance" spectra of mixtures of sub-\(\upmu\)m grains of olivine with iron sulfide or anthracite (panel c). The 10-\(\upmu\)m emissivity plateau is prominent on the asteroids' spectra. A 10-\(\upmu\)m plateau is also observed at a similar position on the spectra of the mixtures containing 10 to 50 vol% olivine. The vertical red dashed lines indicate maxima of absorption of olivine. We can notice that the plateau on the asteroids and comets spectra extends to lower wavelengths (from 8 to 12 \(\upmu\)m) than on the measured spectra (from 9.5 to 12 \(\upmu\)m). This is related to the presence of other types of silicates, widening the 10-\(\upmu\)m feature on small bodies spectra. The contrast between the plateau and the continuum is at the percent level for the mixtures, and ranges from 1 to 10 % for small bodies. These differences may be due to variations of relative grain sizes of opaques and silicates, and/or of porosity.
### 4.7 Summary: combination of Vis-NIR, MIR spectral and Vis polarimetric properties
With the Vis-NIR, MIR spectral and Vis polarimetric measurements in hand, we can try to retrieve information regarding the surface compositions and textures of B-, C- and D-type objects.
By comparing the measurements from the Vis-NIR and the MIR, we observe that the mixtures exhibiting blue-sloped spectra in the Vis-NIR also lack a 10-\(\upmu\)m emissivity plateau in the MIR, whereas surfaces presenting reddish slopes in the Vis-NIR exhibit a 10-\(\upmu\)m emissivity plateau in the MIR. In Figure 15, we compile optical spectra in the Vis-NIR and in the MIR of cometary nuclei and asteroids. Similarly to our analogues, we observe an anti-correlation between the presence of the plateau in the MIR and the blue slope in the Vis-NIR when combining the two spectral ranges. The Vis-NIR spectrum of Bennu is indeed characterized by a blue slope between 0.4 and 2 \(\upmu\)m, but its MIR spectrum lacks any emissivity plateau. On the contrary, the Trojan asteroids, spectrally red in the Vis-NIR, do present a highly contrasted plateau between 8 and 12 \(\upmu\)m.
These experimental data reveal that optical separation of sub-\(\upmu\)m grains is a major parameter controlling the optical properties of low-albedo small bodies. For the samples considered here, this optical separation is obtained by mixing at extreme proportions two sub-\(\upmu\)m materials having contrasted optical indexes in the Vis-NIR and MIR. But such an optical separation may also be provided by the porosity of the medium. Surfaces made of sub-\(\upmu\)m opaque grains optically isolated (by a matrix of brighter grains, and/or by porosity) tend to have a blue spectral slope in the Vis-NIR and lack a 10-\(\upmu\)m plateau. As larger aggregates of opaque grains are formed (by increasing their proportion in mixture, and/or by decreasing the medium porosity) they do not scatter light in the Rayleigh regime anymore, but they absorb the light, and the spectra become more neutral to red. These opaque grains responsible for the absorption in the Vis-NIR are more reflective than silicates in the MIR, so that surfaces made of sub-\(\upmu\)m silicate grains optically isolated (by these more reflective grains, and/or by porosity) tend to have redder Vis-NIR spectra and display a 10-\(\upmu\)m plateau.
Polarimetric parameters can be further added to this comparison. In the case of D-type objects, the mixtures that are best spectral analogues in the Vis-NIR and MIR (Fig. 15) seem to have close polarimetric properties as well (Fig. 10). The fact that the combination of these three optical properties provides a good match to the observation of D-type asteroids builds a very strong case for the fact that heterogeneous aggregates of hyperfine grains having contrasted optical properties are essential for explaining the D-type asteroid optical properties.
In the case of B-type asteroids, the polarimetric properties of the best spectral analogues among our mixtures (olivine-FeS from 40:60 to 70:30 vol%, Fig. 15) display higher values of \(|\mathrm{P_{min}}|\) and \(\alpha_{\mathrm{inv}}\) than ground-based observations of these objects (Fig. 10). This may suggest that the bluing may not be related to Rayleigh-like scattering for B-type asteroids, or that another phase or process plays a role in their polarimetric phase dependence without significantly impacting their reflectance spectra. In the case of Phaethon, polarimetric curves up to high phase angles have been obtained (Ito et al., 2018) and the positive branch is well constrained. The polarization degree is about 20 % for a phase angle of 63\({}^{\circ}\), which is close to the value obtained (around 17-18 %, Fig. 10) for the mixtures that best match its reflectance spectrum (olivine-FeS 60:40 to 70:30 vol%, Fig. 13). The reflectances of these mixtures are 0.077 and 0.066, which convert to equivalent albedos of 0.087 and 0.073 using the law derived in Beck et al. (2021), and are slightly higher than the average of B-types in DeMeo and Carry (2013) (\(0.071\pm 0.033\) with a mode at \(0.061\pm 0.21\)).
Figure 15: Comparison of the spectra of mixtures of sub-\(\mu\)m grains of Olivine and FeS **(a, b)**, with Vis-NIR reflectance spectra and MIR emission spectra of small bodies **(c, d)**. In the mixtures with low proportions of opaques, the Vis-NIR spectral slope is blue due to Rayleigh-like scattering by isolated opaque sub-\(\mu\)m grains. In mixtures with higher proportions of opaques, the Vis-NIR slope becomes redder as the size of the opaque aggregates increases. As the proportion of opaques increases, the silicate grains are de-agglomerated and isolated, de-saturating the silicate absorption bands in the MIR and resulting in the emergence of the 10-\(\mu\)m plateau. Spectra of B-type asteroids (Phaethon, Pallas, Bennu) have a blue Vis-NIR slope and lack a 10-\(\mu\)m plateau, whereas spectra of P/D/X/C-type asteroids (Hektor, Agamemnon) and comets have a Vis-NIR red slope and a 10-\(\mu\)m plateau. The comparison with the mixtures suggests that variations of the degree of dispersion/aggregation of sub-\(\mu\)m grains in the surface material of asteroids and comets is a major parameter explaining the variations of their spectral properties. Vis-NIR spectra are from Kareta et al. (2018) for Phaethon, Rivkin and DeMeo (2019) for Pallas, Hamilton et al. (2019) for Bennu, the SMASS/MIT database for Cybele, Raponi et al. (2020) for comet 67P/C-G, Campins et al. (2006) for comet 162P, and Emery and Brown (2003) for Hektor and Agamemnon. MIR spectra are from Lim et al. (2019) for Phaethon, Lim et al. (2005) for Pallas, Hamilton et al. (2019) for Bennu, Licandro et al. (2011) for Cybele, and Emery et al. (2006) for Hektor and Agamemnon.
## 5 Conclusion
This work aimed to test the hypothesis that hyperfine grains control, or contribute significantly to, the spectral and polarimetric properties of primitive bodies of the Solar system. We prepared and measured the properties of mixtures of bright and opaque materials made of sub-micrometric grains at different concentrations. We draw the following observations and conclusions:
- Mixtures of opaques and silicates in the realm of hyperfine grains (\(<1\)\(\mu\)m) reveal a non-linear behaviour. While the strong darkening effect of very fine opaques mixed with larger translucent grains has been shown in several earlier studies, we show that this effect also occurs when mixing hyperfine grains of opaques with hyperfine grains of silicates. Notably, a small amount of opaque material (\(\geq 1\)-5 vol%) is sufficient to mask the absorption bands of silicates in the Vis-NIR, explaining their absence from the spectra of P/D-type asteroids and comets.
- Relatively low amounts of opaques mixed with the silicates can produce a strong bluing in the visible range. By performing an analysis similar to Brown (2014) we show that this bluing is likely due to a Rayleigh-like scattering in our particulate sample.
- The polarimetric phase curves of these mixtures show a highly non-linear behaviour with increasing fraction of opaques at low phase angles, i.e. in the negative branch. The \(|\)P\({}_{\rm min}|\) value is always higher for the mixtures than for the endmembers. The \(|\)P\({}_{\rm min}|\) value is highest for the mixture of olivine 90 vol% and opaque iron sulfide (FeS) 10 vol% and then progressively decreases with increasing FeS content. The inversion angle decreases from the pure olivine to the olivine 90 % and FeS 10 %, then increases for the 50:50 % mixture, then decreases
progressively toward the pure FeS value. At high phase angles, i.e. in the positive branch, the intensity of linear polarization is roughly anti-correlated with the reflectance.
- Emissivity calculated in the MIR from Kirchhoff's law for these hyperfine mixtures can show a 10-\(\upmu\)m emissivity peak that resembles the silicate signature observed in transmission (i.e. with no contribution from the real part of the optical constant). We explain this observation by an optical separation of silicate grains by opaque grains, and a diffusion of the MIR light by the opaque grains, enabling photons to escape after some absorption by silicate grains. This effect is similar to what is observed when mixing silicates with an infrared-transparent salt to simulate porosity, as in previous works (King et al., 2011; Yang et al., 2013; Izawa et al., 2021). However, while mixtures with KBr are very reflective, mixtures with FeS have a much lower reflectance, compatible with the low reflectance and high emissivity values observed on small bodies.
- The separation of silicate grains, not only by vacuum (elevated micro-rugosity and/or micro-porosity) but also by opaque grains, can explain the peculiar emissivity spectra of P/D-type asteroids and their resemblance to emission spectra from cometary dust tails. Opaque grains (opaque in the visible) in fine-grained mixtures contribute to optically isolating individual silicate grains and reducing the optical path lengths within the silicate phase. These measurements show that the shape of the 10-\(\upmu\)m emissivity plateau of P/D-type asteroid spectra can be explained with hyperfine opaques, but not its spectral contrast. The polarimetric properties of some of our MIR spectral analogues are similar (but not strictly identical) to those measured for P/D-type asteroid surfaces.
- Rayleigh-like scattering as observed in our mixtures may explain the spectra of B-type asteroids. Our mixtures exhibiting Rayleigh-like scattering have MIR spectra that do not show the silicate 10-\(\upmu\)m emission plateau, in agreement with MIR spectra of the B-type Bennu. The positive branch of polarization is similar to that of the B-type Phaethon, but the negative branch is different from those measured for B-type asteroids. Although Bennu appears to be made of rocks and not fine-grained regolith, the presence of dispersed hyperfine-grained opaques in these porous rocks could provide an explanation for its blue spectral slope. To improve the analogy with B-types, future studies should investigate how different degrees of consolidation of mixtures of hyperfine grains influence their spectra.
- Our measurements suggest that the apparent correlation of NIR slope with 10-\(\upmu\)m plateau observed on P/D/X/C- and B-types (Emery et al., 2011; Marchis et al., 2012; Beck and Poch, 2021) could be due to different degrees of dispersion/aggregation of sub-\(\upmu\)m grains of silicate and opaque materials at their surfaces.
Finally, the mixtures we measured that best resemble the spectra of comets and P/D-type asteroids contain a large proportion of opaque minerals (50-90 vol%), which is unrealistic for these objects. CP-IDPs, whose spectra match these objects very well (Vernazza et al., 2015), contain less than 40 vol% of Fe-Ni sulfides (Bradley, 2014). However and interestingly, CP-IDPs contain a proportion of crystalline silicates (\(\sim\)20-50 vol%, Alexander et al., 2007) similar to the mixtures bearing the best resemblance to the observations (10-50 vol%). In our mixtures, the optical separation of crystalline silicate grains may be quantitatively similar to CP-IDPs, but the compositional and textural parameters controlling the separation are different. Indeed, our mixtures lack important characteristics of CP-IDPs: (1) analogous carbonaceous materials and other components, (2) fluffy microstructures, (3) grain size distributions from 0.1 \(\upmu\)m to several micrometres, and (4) cementing semi-continuous matrices. Some of these characteristics and possibly others (i.e. space weathering) could also explain why the shape and magnitude of the Vis-NIR spectral slope and of the 10-\(\upmu\)m feature are different between the samples studied here and the comets and P/D-type asteroids (Figure 13b, Figure 14). Future experiments aiming at understanding the optical properties of primitive small bodies should thus be dedicated to the production and measurement of mixtures having more realistic composition and texture (different grain size distributions, higher micro-porosity, etc.).
## Acknowledgements
This work has been funded by the European Research Council (ERC) under the grant SOLARYS ERC-CoG2017-771691. We acknowledge Nathaniel Findling and Bruno Lanson from ISTerre for the XRD measurements, and Frederique Charlot from the Consortium des Moyens Techniques Communs (CMTC) of University Grenoble Alpes for the SEM images. We acknowledge Laurene Flandinet and Olivier Brissaud from IPAG for their help with the sample preparation protocol and the use of the spectroscopic facilities, respectively. The contributions of SS, CLP and AP have been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. Part of this work has been done during a transnational access visit in the frame of the Europlanet
2020 RI program. Europlanet 2020 RI has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 654208. Support from the Centre national d'études spatiales (CNES) is also acknowledged. Part of the data shown in this publication were obtained and made available by the MITHNEOS MIT-Hawaii Near-Earth Object Spectroscopic Survey. The IRTF is operated by the University of Hawaii under contract 80HQTR19D0030 with the National Aeronautics and Space Administration. The MIT component of this work is supported by NASA grant 80NSSC18K0849. We are grateful to Pierre Vernazza and an anonymous reviewer for their reviews and suggestions.
## Data availability
All measured spectra and their associated sample information are freely available through the GhOSST database of the SSHADE infrastructure for solid spectroscopy, supported by the Europlanet 2020-RI program. Direct links to the data are provided in the references (Sultana 2019a, 2019b, 2019c, 2019d). The polarimetric data are also freely available (Sultana et al., 2023).
**Supplementary Figure 1:** Reflectance spectra of the mixtures of sub-\(\upmu\)m grains of olivine and iron sulfide. From 0.5 to 4.2 \(\upmu\)m, the spectra were measured with the SHADOWS goniometer, and from 2.0 to 20 \(\upmu\)m the spectra were measured with a FTIR goniometer. To produce this plot, the spectra measured with the FTIR goniometer were scaled on the absolute reflectance level measured by the SHADOWS goniometer at wavelengths between 2.5 and 4.0 \(\upmu\)m, depending on the spectrum.
**Supplementary Figure 2:** Reflectance spectra of the mixtures of sub-\(\upmu\)m grains of olivine and anthracite. From 0.5 to 4.2 \(\upmu\)m, the spectra were measured with the SHADOWS goniometer, and from 2.0 to 20 \(\upmu\)m the spectra were measured with a FTIR goniometer. To produce this plot, the spectra measured with the FTIR goniometer were scaled on the absolute reflectance level measured by the SHADOWS goniometer at wavelengths between 2.5 and 4.0 \(\upmu\)m, depending on the spectrum.
**Supplementary Figure 3:** Size distributions of the powders produced and measured in this study. The red curves are lognormal functions fitting the data. Table 1 shows the mean, standard deviation, median and maximal sizes of the grains. |
2308.05221 | Alexa, play with robot: Introducing the First Alexa Prize SimBot
Challenge on Embodied AI | The Alexa Prize program has empowered numerous university students to
explore, experiment, and showcase their talents in building conversational
agents through challenges like the SocialBot Grand Challenge and the TaskBot
Challenge. As conversational agents increasingly appear in multimodal and
embodied contexts, it is important to explore the affordances of conversational
interaction augmented with computer vision and physical embodiment. This paper
describes the SimBot Challenge, a new challenge in which university teams
compete to build robot assistants that complete tasks in a simulated physical
environment. This paper provides an overview of the SimBot Challenge, which
included both online and offline challenge phases. We describe the
infrastructure and support provided to the teams including Alexa Arena, the
simulated environment, and the ML toolkit provided to teams to accelerate their
building of vision and language models. We summarize the approaches the
participating teams took to overcome research challenges and extract key
lessons learned. Finally, we provide analysis of the performance of the
competing SimBots during the competition. | Hangjie Shi, Leslie Ball, Govind Thattai, Desheng Zhang, Lucy Hu, Qiaozi Gao, Suhaila Shakiah, Xiaofeng Gao, Aishwarya Padmakumar, Bofei Yang, Cadence Chung, Dinakar Guthy, Gaurav Sukhatme, Karthika Arumugam, Matthew Wen, Osman Ipek, Patrick Lange, Rohan Khanna, Shreyas Pansare, Vasu Sharma, Chao Zhang, Cris Flagg, Daniel Pressel, Lavina Vaz, Luke Dai, Prasoon Goyal, Sattvik Sahai, Shaohua Liu, Yao Lu, Anna Gottardi, Shui Hu, Yang Liu, Dilek Hakkani-Tur, Kate Bland, Heather Rocker, James Jeun, Yadunandana Rao, Michael Johnston, Akshaya Iyengar, Arindam Mandal, Prem Natarajan, Reza Ghanadan | 2023-08-09T20:56:56Z | http://arxiv.org/abs/2308.05221v1 | # Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI
###### Abstract
The Alexa Prize program has empowered numerous university students to explore, experiment, and showcase their talents in building conversational agents through challenges like the SocialBot Grand Challenge and the TaskBot Challenge. As conversational agents increasingly appear in multimodal and embodied contexts, it is important to explore the affordances of conversational interaction augmented with computer vision and physical embodiment. This paper describes the SimBot Challenge, a new challenge in which university teams compete to build robot assistants that complete tasks in a simulated physical environment. This paper provides an overview of the SimBot Challenge, which included both online and offline challenge phases. We describe the infrastructure and support provided to the teams including Alexa Arena, the simulated environment, and the ML toolkit provided to teams to accelerate their building of vision and language models. We summarize the approaches the participating teams took to overcome research challenges and extract key lessons learned. Finally, we provide analysis of the performance of the competing SimBots during the competition.
## 1 Introduction
Conversational assistants such as Amazon Alexa, Apple's Siri, and Google Assistant have become an increasingly commonplace way for people to access information and content and control connected devices such as smart outlets, lighting, and home security systems. A key frontier in conversational
AI is to advance from spoken dialog and enable embodied conversational systems where the conversational agent is able to perceive the physical world, navigate within it, and move and manipulate objects. In the future, we envision a world where everyday conversational assistants will be able to navigate and actuate in the physical world. They could, for example, make you an omelette, pour you a cup of coffee, explore your house to find your slippers, or identify and address an issue such as a door left open or a leaking faucet.
The Alexa Prize1 is an Amazon Alexa sponsored program that in recent years has enabled hundreds of university students and faculty to compete in advancing the state-of-the-art in conversational AI. Since 2016, the SocialBot Grand Challenge has hosted a competition among universities from across the world to compete in creating the best _SocialBot_, i.e., an Alexa skill that can engage in extended open-domain dialogs with users around popular topics and current events [1]. Since 2021, the TaskBot Challenge has engaged teams in building conversational assistants that can assist users in completing complex tasks such as recipes or Do It Yourself (DIY) projects [2]. One of the key advantages of the program is that it enables university teams to rapidly test and iterate on their approaches through testing with real-world users at scale through Alexa.
Footnote 1: [https://www.amazon.science/alexa-prize](https://www.amazon.science/alexa-prize)
The motivation for the third Alexa Prize Challenge, the SimBot Challenge, is to push the boundaries of embodied AI and drive towards the vision above. Inspired by the significant role that games have played in showcasing and fostering the evolution of core AI capabilities, the SimBot Challenge incorporates elements of game design into the development process. As Zobrist's paper [3] on the application of AI to the game Go emphasizes, "More study of this complex game may reward us with new insight into human perceptual and problem solving abilities as well as foster the development of new techniques for artificial intelligence." To facilitate interaction with the system by Alexa users and as a precursor to physical embodiment, the SimBot Challenge presents a simulated office/lab environment named Alexa Arena [4], created using the Unity gaming engine. This environment comprises multiple rooms, each equipped with various devices and objects. Users interact with the environment by giving instructions to a robot companion, the SimBot. By adopting a gaming-inspired framework, the challenge provides users with an engaging and dynamic environment to interact with the simulation through screen-enabled devices such as Echo Show and FireTV. The SimBots respond to spoken commands from users, taking actions to move in and manipulate the environment, asking verbal questions, and providing feedback. The fusion of game mechanics and AI development allows for an immersive user experience, bridging the gap between virtual simulation and physical embodiment. In Figure 1, we can see the robot's view of the simulated environment displayed on an Echo Show device. The SimBots were launched first for testing with a cohort of Amazon employees in November 2022, followed by a general public launch in December 2022 at which time Alexa users in the United States with supported devices could interact with the participating SimBots by saying "Alexa play with robot" to a screen-enabled Alexa device.

Figure 1: The egocentric view of the robot in a simulated room.
As an initial pre-phase of the program, we conducted an offline evaluation using the TEACh embodied AI dataset [5]. We summarize this phase in Section 2. In Section 3, we detail the operation of the online challenge. In Section 3.1, we describe the infrastructure supporting the challenge and the tools and capabilities provided to teams. In Section 4, we discuss the scientific innovations and advancements in the competition. The performance of the SimBots is reviewed in Section 5 and insights gathered from the SimBot Challenge are discussed in Section 6.
## 2 Offline Challenge
As an initial phase for teams to develop modeling capabilities for embodied task completion, an offline challenge was conducted using the TEACh dataset [5]. The TEACh dataset consists of annotators role playing interactions between a Commander (User) and Follower (Robot) collaborating to complete tasks in a simulated home environment. The data was collected using the AI2-THOR simulator [6], employing a web interface that allowed both annotators to navigate and observe the simulated home from a first-person perspective. In this setup, only the Commander had access to the details of the task, while the Follower could interact with objects in the room to complete the task, necessitating communication and coordination between them through live text chat. To encourage multi-turn interactions, the Commander could additionally search for objects that were not directly visible and provide appropriate directions to the Follower. The Follower could perform \(8\) possible object interaction actions - _Pickup_, _Place_, _Open_, _Close_, _ToggleOn_, _ToggleOff_, _Slice_, and _Pour_. Successful task completion required navigating around the room, searching for objects inside receptacles such as cabinets or refrigerators, and reasoning about physical state changes (e.g. place a slice of potato on a pan located on the stove and turn on the stove to cook it). In each data collection session, the initial state, dialogue utterances and actions taken by each annotator were saved so that the gameplay session could be replayed in the AI2-THOR environment.
The Alexa Prize SimBot Offline Challenge required teams to build models for the Execution from Dialog History (EDH) task based on this dataset. Given some dialogue history and a partial sequence of actions and corresponding egocentric image observations from the Follower, an EDH model should predict subsequent actions for the Follower. The EDH instances are constructed from data collection sessions by examining the action sequence between every pair of dialogue utterances. The target action sequences are selected based on criteria such as non-empty preceding dialogue history, presence of at least one object interaction within the action sequence, and inclusion of the state changes in at least one task-relevant object.
An EDH model receives input comprising the dialogue history, history of actions by the Follower, and their corresponding egocentric image observations. At each time step, the model is responsible for predicting an action which could be an object interaction, a navigation action, or a special _Stop_ action. If the model predicts an object interaction action, it must additionally predict an \((x,y)\) coordinate in the egocentric observation of the Follower to identify the target object for the action. After the action predicted by the model is executed in the simulator, the simulator state is updated and the model receives an updated egocentric image observation. The execution process continues until the model predicts the _Stop_ action, 1,000 actions are executed or 30 actions result in API failures. Models are evaluated by comparing the state changes resulting from the models' predicted actions with those taken by the annotators.
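To make the evaluation protocol above concrete, the following is a minimal, hypothetical sketch of the EDH rollout loop. The `model` and `env` interfaces and all function names are illustrative assumptions rather than the actual TEACh wrapper API; only the action vocabulary and the termination rules (a _Stop_ prediction, 1,000 executed actions, or 30 API failures) follow the description in the text.

```python
# Hypothetical sketch of the EDH rollout loop; interfaces are illustrative, not the TEACh API.
OBJECT_INTERACTIONS = {"Pickup", "Place", "Open", "Close",
                       "ToggleOn", "ToggleOff", "Slice", "Pour"}
MAX_ACTIONS = 1000       # stop after 1,000 executed actions
MAX_API_FAILURES = 30    # ...or after 30 failed action executions

def run_edh_instance(model, env, dialog_history, action_history, image_history):
    """Roll out a model on one EDH instance and return the executed action trajectory."""
    executed, failures = [], 0
    obs = env.current_observation()                      # egocentric RGB frame
    while len(executed) < MAX_ACTIONS and failures < MAX_API_FAILURES:
        # Condition on the dialog history plus the Follower's action/image history so far.
        action, point = model.predict(dialog_history, action_history, image_history + [obs])
        if action == "Stop":
            break
        # Object interaction actions additionally carry an (x, y) pixel in the egocentric view.
        target = point if action in OBJECT_INTERACTIONS else None
        success, obs = env.step(action, target)
        if success:
            executed.append((action, target))
            action_history.append(action)
            image_history.append(obs)
        else:
            failures += 1
    return executed
```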
Teams were provided with code to replay TEACh sessions as well as wrapper code to train and perform inference for TEACh EDH models. Furthermore, a baseline model based on the Episodic Transformer [7] model was provided to the teams. To proceed to the next stage of the challenge, teams needed to develop and submit a model that outperformed the baseline ET model.
## 3 Online Challenge
The next phase of the challenge was an online challenge, where models are integrated into a runtime service to support a real-time gaming experience on Alexa multimodal devices. In the online experience, a robot operates within a simulated office/lab environment powered by the Unity gaming engine. University teams were tasked with setting up a robot action inference service supported by their vision models, which seamlessly integrated into a runtime system developed by the Alexa Prize team. The entire interaction experience is built on top of Alexa skills, leveraging the Alexa Web API for Games interface. Users can engage with the robot through voice commands and observe updates in the simulated environment through video streaming to their device. Voice requests transmitted via the device are converted to text by Alexa speech recognition services, initially processed by the SimBot skill. The user command is then forwarded to the SimBot runtime system, where it undergoes further interpretation to generate executable actions within the simulated environment. The model is responsible for predicting the robot's actions or engaging in dialog based on the input utterance text and the image of the robot's egocentric view. Upon concluding the interaction with the SimBot, users are presented with an opportunity to provide feedback in the form of a verbal rating and optional free-form comments. These ratings and feedback are valuable resources shared with the university teams, enabling them to gain insights and make improvements to their model performance.
The SimBot online phase began with a comprehensive three-day Bootcamp in August 2022. During this event, ten university teams were exclusively invited to receive Amazon Web Service (AWS) training, SimBot tooling, and hands-on development experience. Throughout the Bootcamp, all ten teams successfully developed a SimBot using a baseline model provided by Alexa Prize, utilizing the resources offered by AWS and Alexa. Following this training, teams put their efforts into refining and enhancing their SimBots until the end of October, ultimately completing the skill certification process required for integration with Alexa users. An initial feedback phase then took place to gather early feedback from beta users, followed by the general availability launch in December 2022. All ten teams progressed from the launch phase and advanced to the Semifinals from February 2, 2023 through March 22, 2023. From the Semifinals, five teams successfully qualified as Finalists and participated in the Finals phase that ended on April 28, 2023. The closed-door Finals event took place on May 3, 2023, where the teams competed for the top honors.
### Capabilities Provided to Teams
To facilitate the development of SimBot, the university teams were granted exclusive access to a range of Amazon Alexa resources, technologies, and experts. The following is an overview of the resources that were made available to them.
#### 3.1.1 Alexa Arena Simulated Environment
Alexa Arena [4] is a Unity-based 3D embodied AI simulator built by Amazon Alexa AI. In Alexa Arena, an agent acts in a 3D environment that supports a variety of indoor object interaction tasks. Alexa Arena features high-quality graphics, animations, navigation and object manipulation to enable highly interactive and user-centric multimodal embodied AI research.
There are 336 unique objects in Alexa Arena. Each object has a set of properties (i.e., affordances), which specify if a certain type of robot-object interaction is possible. For example, the agent can toggle the _3-D printer_ since it has an object property _toggleable_. In total, there are 14 object properties, including _pickupable_, _openable_, _breakable_, _receptacle_, _toggleable_, _powerable_, _dirtyable_, _heatable_, _eatable_, _chillable_, _fillable_, _cookable_, _infectable_, and _decor_. Each object property has a corresponding action and object state to go into when acted on. For example, _break_ is the corresponding action for _breakable_, and _broken_ is the corresponding state after the action has been performed.
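The property-action-state correspondence can be thought of as a simple lookup table. The sketch below is a hypothetical illustration: only the _breakable_ → _break_ → _broken_ triple is spelled out in the text above, and the remaining rows and all action/state identifiers are assumptions that merely follow the same naming pattern.

```python
# Illustrative affordance lookup; only the 'breakable' row is given in the text above,
# the other rows and the exact action/state names are assumptions.
AFFORDANCES = {
    # property        (action,     resulting state)
    "breakable":    ("break",     "broken"),
    "toggleable":   ("toggleOn",  "toggled on"),
    "openable":     ("open",      "open"),
    "pickupable":   ("pickup",    "held"),
}

def allowed_actions(object_properties):
    """Return the interaction actions an object supports, given its listed properties."""
    return [AFFORDANCES[p][0] for p in object_properties if p in AFFORDANCES]

print(allowed_actions(["toggleable", "pickupable", "decor"]))   # ['toggleOn', 'pickup']
```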
In Alexa Arena, robot actions can be categorized into two types: 1) user interaction actions for communicating with the user via starting a dialog or highlighting objects in the scene2, and 2) robot physical actions to interact with the simulation environment. Robot physical actions include both navigation and object interaction. To improve the user experience, all the navigation and interaction actions are animated in a continuous fashion and accompanied by environmental sounds.
Footnote 2: Note that highlighting is used as proxy for deictic gestures by the robot. The current generation of SimBots are not able to point using their arms.
#### 3.1.2 ML Toolkit
Along with the Alexa Arena simulator, we also provided teams with an ML toolkit to support model development. This toolkit provides a baseline robot model (Figure 2) that can handle basic visual perception, action prediction, and dialog management for completing game missions in the SimBot Challenge. Furthermore, the toolkit includes two datasets to aid in the training of robot models. The first dataset is a hybrid dataset where ground-truth robot action trajectories are paired with human annotated dialogs. The second dataset comprises a collection of over 600,000 images labeled with object segmentation, which can be used to train object detection models.
Within the baseline model, the visual perception module is a Mask-RCNN model trained on the provided image dataset. This model takes an RGB image as input and predicts masks for all object instances (across 86 object classes) present in the image. This model exhibits reasonable object detection performance on the validation set. Table 1 shows its mean Average Precision (mAP) for small, medium and large objects.
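As a rough illustration of how such a perception module is used at inference time, the snippet below runs an off-the-shelf torchvision Mask R-CNN on a single RGB frame and keeps the confident instance masks. This is only a stand-in under stated assumptions, not the provided Arena model: the toolkit baseline is trained on the Arena image dataset with 86 object classes, whereas this sketch loads COCO weights and uses an arbitrary 0.5 score threshold.

```python
import torch
import torchvision

# Stand-in for the toolkit's perception module: a COCO-pretrained Mask R-CNN.
# The actual baseline is trained on the Arena image dataset (86 object classes).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_instances(rgb_image, score_threshold=0.5):
    """Return (labels, scores, binary masks) for confident detections in one frame."""
    # rgb_image: float tensor of shape (3, H, W) with values in [0, 1]
    with torch.no_grad():
        output = model([rgb_image])[0]
    keep = output["scores"] > score_threshold
    masks = output["masks"][keep, 0] > 0.5   # soft masks -> binary masks
    return output["labels"][keep], output["scores"][keep], masks

labels, scores, masks = detect_instances(torch.rand(3, 300, 300))
print(len(labels), "instances above threshold")
```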
#### 3.1.3 Automatic Speech Recognition and Text to Speech
To improve the experience for users interacting with SimBots, we supplied Automatic Speech Recognition (ASR) technology that converts user utterances to text and Text-To-Speech (TTS) technology that generates spoken responses from SimBots. Additionally, the participating university teams were given access to tokenized N-best ASR hypotheses that included confidence scores for each token. This resource allowed the teams to fine-tune and optimize their SimBots for more accurate and effective communication with users.
To further improve the accuracy of ASR, we extended the SimBot skill interaction model and introduced hints for the SimBot skill intents. This model included over 10,000 sample utterances encompassing a diverse range of supported robot actions and objects, and was provided to the teams as a template to incorporate into their models. With a comprehensive set of hints that covered a wide range of possible interactions, the teams could create more accurate models that could better understand and respond to user requests.
| Obj Category | Area (\(px^{2}\)) | mAP |
| --- | --- | --- |
| Small | 0 - 1296 | 37.63 |
| Medium | 1296 - 9216 | 60.41 |
| Large | 9216 - 90000 | 64.72 |
| Overall | 0 - 90000 | 46.03 |

Table 1: Evaluation results for the provided Mask-RCNN model on small, medium, and large objects.
Figure 2: The provided baseline model in ML Toolkit.
#### 3.1.4 Infrastructure and SDK
As part of the Alexa Prize SimBot Challenge, we provided participating university teams with a powerful runtime system that seamlessly integrates their models into a gaming experience for end-users on various devices, including Echo Show and Fire TV. This system provides an opportunity for teams to showcase their models and offer users an engaging interactive experience. The following sequence flow illustrates the respective steps shown in Figure 3, for one interaction with the end-user:
* **1**: Alexa-user interacts with SimBot Skill using natural language instruction, such as "Get me a spanner from the robotics lab".
* **2**: SimBot Skill receives the user utterance, and invokes the Runtime System with the contextual information.
* **3**: SimBot Runtime System sends the image from the robot's current viewpoint (egocentric view), along with the user's input utterance, to the Action Inference Service (developed by the respective university team).
* **4-5**: University model predicts the next sequence of actions (e.g. move forward 2 steps), or generates a suitable text response.
* **6-7**: Each of the actions in the predicted sequence (in Step 4) is executed in the Arena Simulation Engine (built using Unity), and the visuals are live-streamed to the Alexa device. Note: Steps 4-7 are repeated to execute subsequent sequences (_look down\(\rightarrow\)find lamp\(\rightarrow\)toggle on/off_), until the university model determines that the action sequence is complete, and/or generates a text response.
* **8-9**: The language response from the university SimBot (if any) is played on the Alexa device, and the microphone is opened for subsequent interaction with the user.
At the end of each turn, the SimBot Runtime System checks the state of the simulation environment, to verify if the respective goal has been completed successfully. Steps [1-9] are repeated until the successful completion of the goal. A user-satisfaction score (1-5) is requested at the end of a game session.
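A highly simplified rendering of steps 3-7 of this loop is sketched below. The payload fields, function names, and termination logic are assumptions made purely for illustration; the actual SimBot runtime, Arena engine, and Action Inference Service APIs are not reproduced here.

```python
# Hypothetical sketch of the runtime loop (steps 3-7 above); all interfaces are assumed.
def handle_user_turn(utterance, inference_service, arena, device_stream):
    """Run one user turn: execute predicted action sequences until the model responds."""
    while True:
        view = arena.egocentric_frame()                       # step 3: robot's current view
        prediction = inference_service.act(utterance=utterance, image=view)
        for action in prediction.get("actions", []):          # steps 4-5: predicted actions
            frames = arena.execute(action)                    # steps 6-7: run in the Unity engine
            device_stream.send(frames)                        #           and live-stream the visuals
        if prediction.get("dialog"):                          # model chose to speak instead
            return prediction["dialog"]                       # steps 8-9: TTS, then reopen the mic
        if prediction.get("done", True):                      # sequence finished, nothing to say
            return None
```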
Figure 3: Flow diagram illustrating the sequence flow of a user interaction with SimBot runtime services.

The sequence flow above involved the main components listed below:

1. **SimBot Application**: A RESTful web application hosted in AWS Elastic Container Service (ECS) Fargate. It manages the lifecycle of a game session with the end user. The SimBot application orchestrates the inputs between the robot action inference service and the simulation container. User commands are interpreted into robot actions that can be executed in the simulation container. It also functions as a proxy layer which validates and proxies the WebSocket connection from the multimodal device to the simulation container.
2. **Simulation Container Service**: This service acts as a wrapper to the Alexa Arena simulation engine to execute robotic actions and to stream visuals from the engine to upstream applications.
3. **Robot Action Inference Service**: This component represents the brain of the embodied agent. Natural language instructions from end-users along with the live visuals from the Alexa Arena simulation engine are processed by the Action Inference service, to generate the respective sequence of robot actions and optional clarification dialog. To achieve this, this service hosts the ML models and performs inferencing at runtime.
4. **SimBot Skill**: This is an Alexa skill built on top of the Alexa Skills Kit (ASK). The SimBot skill receives ASR outputs from the Alexa service. An AWS Lambda function handles the skill requests and delegates the requests to the SimBot Application. The skill also uses the Alexa Web API for Games interface which supports the multimodal gaming experiences that run on Alexa-enabled devices.
The Alexa Prize SimBot Challenge provides an opportunity for university teams to prove their expertise in machine learning and conversational AI. To enable university teams to focus on scientific innovation, we developed an SDK that provides a CLI, scripts, and utilities to simplify engineering work and operational tasks. The university teams could spend minimal manual effort executing the SDK, making minor code changes to their own systems and operationally maintaining them once spawned.
The SimBot SDK leverages the AWS Cloud Development Kit (CDK) to provision and manage resources within the teams' AWS accounts. The CDK application for SimBot automates the deployment of the ASK skill Lambda, Virtual Private Cloud (VPC), Identity Access Management (IAM) role, CloudWatch logs, and other resources required for the challenge. It provides continuous integration for the Action Inference service and skill Lambda, making it easier for developers to iteratively update the service. It also enforces a separation of test and production stages for enhanced reliability. In addition, the SimBot SDK includes several utilities, such as template implementations based on the API contract of the Action Inference service, integrated with baseline bootstrapped models and DynamoDB tables. The utilities also provide pointers to third-party ML libraries such as profanity-filter, spaCy, and AllenNLP.
Figure 4: SimBot Runtime System Diagram and Workflow
### Customer Feedback Data and Evaluation Metrics
A key benefit provided to the university teams was the ability to field their SimBots with Alexa users. Throughout general availability and Semi-finals phases, users interacted with the SimBots and were subsequently prompted for satisfaction ratings and free-form feedback on their experience. In addition, the system was instrumented to capture the duration of conversations, and the status of game mission completion. Mission completion is measured by a metric called Mission Success Rate (MSR), which calculates a team's average rate of successfully completing mission goals in games:
\[MSR=\frac{N(\text{succeeded missions})}{N(\text{total missions})}\]
The average user satisfaction ratings together with mission success rate served as the primary evaluation metrics for the challenge. Each university team had access to these metrics and also received an anonymized leaderboard daily that presented the average metrics and rankings for all SimBots participating in the challenge. These provided the teams with valuable information to assess their performance and allowed them to gauge their relative performance compared to other teams. In addition, teams had access to transcriptions of the free-form qualitative feedback shared by users at the end of their interactions with the team's SimBot allowing the teams to gain qualitative insights into the users' impressions of the SimBots.
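For concreteness, a minimal sketch of how these two metrics could be computed from logged sessions is shown below. The session record format is an assumption for illustration; only the MSR definition and the 1-5 rating scale come from the text.

```python
def mission_success_rate(sessions):
    """MSR = number of succeeded missions / total number of missions played."""
    if not sessions:
        return 0.0
    return sum(1 for s in sessions if s["mission_succeeded"]) / len(sessions)

def rolling_average_rating(sessions, day, window=7):
    """Average 1-5 user rating over the `window` days ending at `day`."""
    ratings = [s["rating"] for s in sessions
               if s["rating"] is not None and day - window < s["day"] <= day]
    return sum(ratings) / len(ratings) if ratings else None

sessions = [{"day": 1, "rating": 4, "mission_succeeded": True},
            {"day": 2, "rating": 3, "mission_succeeded": False},
            {"day": 5, "rating": 5, "mission_succeeded": True}]
print(mission_success_rate(sessions), rolling_average_rating(sessions, day=7))
# -> 0.666..., 4.0 (the 7-day window ending on day 7 covers all three sessions)
```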
### Support from the Alexa Prize team
In addition to providing data and infrastructure, we engaged with university teams in several ways to provide support and feedback:
Figure 5: Developer experience for a university team
* A virtual pre-bootcamp to onboard university teams to the SDK and prepare teams for the bootcamp.
* A hands-on bootcamp with training materials, best practices, and design guidelines.
* Two virtual sessions with university teams on CX design, model training and evaluation, and competition guidelines to prepare teams for each phase of the competition.
* An internal beta phase, to provide traffic from Amazon employees to help inform and improve SimBot performance before general availability to all Alexa users.
* Detailed report on SimBot experiences prior to public launch, evaluating functionality as well as the SimBot's ability to maintain anonymity and handle inappropriate interactions.
* Weekly office hours for 1:1 consultations with a dedicated Alexa Prize Solutions Architect, Program Managers, UX Designers, and members of Alexa science and engineering teams.
* On-demand access to Alexa Prize personnel via Slack and email.
## 4 Scientific Advancements
During the challenge, the participants worked actively to improve their robots to enhance user satisfaction during game interaction and improve task completion rates. These include scientific innovations and engineering optimizations across multiple areas including data generation and annotation, efficient data storage and retrieval, user interaction tracking, visualization systems, dialog management, visual grounding, action prediction, multimodal and language understanding, and continuous improvement workflows. In this section, we present a summary of the main scientific advancements explored by the participants during the implementation of their robots. Each participating team described their specific innovations in more detail as part of their paper in these proceedings. The scientific contributions span multiple aspects that are instrumental to the seamless functionality of embodied AI agents. End-users interact with the embodied AI agents using voice commands through their Echo Show or Fire TV devices. The voice commands are then transcribed to text using the Alexa ASR system. Following this transcription, teams work with the text input to perform natural language understanding, the first party view of the robot to perform vision understanding and grounding, and combine both modalities to eventually execute the user-intended instruction on Alexa Arena.
Generalizability of models was a key scientific theme and influenced the structure of the competition phases. Throughout the challenge, participating robots were evaluated on both seen game missions and unseen game missions. In phases with the seen game missions, teams had the opportunity to play with the games, review user feedback, and update their robots accordingly. During phases with unseen games, the robots were evaluated on their ability to solve missions that had not been seen before, and no updates or modifications to the robot models were permitted. In those unseen games, the robots could encounter new objects and new action types while completing previously unseen mission goals. To tackle these challenges, the teams focused on improving the generalizability of various aspects within their robots, by building (a) robust vision modules that cover all task-related objects, (b) natural language understanding mechanisms that can reliably predict user intents and map them to robot actions, and (c) adaptive dialog management strategies that offer informative responses and valuable guidance to users, even in unseen scenarios.
### Natural Language Understanding and Action Prediction
During each game mission in the SimBot Challenge, the users are presented with a task description and a list of subgoals, while the SimBot can only get access to this information through the user's language inputs, and in this real world scenario, users can instruct the SimBot in any way they want. The user utterances are often incomplete, ambiguous or completely out-of-domain. Furthermore, user utterances can have different levels of abstraction. Some users may prefer to provide procedural step-by-step instructions (e.g., "pick up the mug"), while others may prefer to give high-level commands (e.g., "repair the broken bowl") or combinations of actions (e.g. "go to the fridge in the break room and pick up the mug"). This diversity in user instructions poses a major challenge for robustness in language understanding.
To robustly handle user utterances, most teams have adopted modular architectures, where input language is first processed by natural language processing modules (e.g., part of speech tagging,
semantic role labeling, named entity recognition, intent classification) or neural models (e.g., Transformer [8] based deep models) for identifying the user intent and related objects. The intermediate representation of the input is then mapped to a sequence of robot actions via symbolic planners, pre-defined templates/rules or trained neural models, which often take into consideration the robot state and environment state to make context-aware predictions. Moreover, some teams have also injected common sense knowledge into their action prediction process. For example, knowledge of the affordances of an object can often help the robot eliminate unlikely action-object predictions. One of the major challenges the teams faced was grounding the understanding onto Alexa Arena - teams have proposed a combination of rule-based and neural architectures to do this transformation, making their solutions more versatile to other applications as well. Team EMMA proposed a foundational transformer based end-to-end model with pre-training strategies, datasets and a curriculum of pre-training tasks to train a foundational model before eventually fine-tuning the model for the embodied AI task on Alexa Arena. This approach showed good performance both offline and online. The same team also shows preliminary results on sim-2-real transfer using the proposed pre-training strategy and the provided Alexa Arena datasets.
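As a toy illustration of the rule/template side of such modular pipelines (not any specific team's system), the sketch below expands a parsed (intent, object) pair into a canned sequence of robot actions, with a simple affordance check standing in for the common-sense filtering described above. All templates, affordances, and action strings are invented for the example.

```python
# Toy rule-based intent -> action-sequence mapper; templates and names are illustrative only.
TEMPLATES = {
    # high-level intent             expansion into low-level robot actions
    ("pickup", "<object>"):        ["goto <object>", "pickup <object>"],
    ("heat",   "<object>"):        ["goto <object>", "pickup <object>",
                                    "goto microwave", "place <object>", "toggleOn microwave"],
}

AFFORDANCES = {"mug": {"pickupable", "heatable"}, "fridge": {"openable", "receptacle"}}

def plan(intent, obj):
    """Expand an (intent, object) pair into robot actions, rejecting implausible requests."""
    if intent == "pickup" and "pickupable" not in AFFORDANCES.get(obj, set()):
        return None  # common-sense affordance check: this object cannot be picked up
    steps = TEMPLATES.get((intent, "<object>"))
    return [s.replace("<object>", obj) for s in steps] if steps else None

print(plan("pickup", "mug"))     # ['goto mug', 'pickup mug']
print(plan("pickup", "fridge"))  # None (fails the affordance check)
```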
### Visual Grounding
In the game, the robot can observe the 3D scene via a first-party RGBD camera. Any object-related action (such as navigating to an object or manipulating it) requires the robot to provide the correct object mask based on the image from its current first-party view. Therefore, it is essential for a robot to efficiently recognize objects in the scene and correctly ground user utterances to the corresponding objects.
To ground user instructions to objects in the scene, teams use neural methods to perform object detection (or semantic segmentation). Their contributions involve fine-tuning the baseline mask RCNN model for mask prediction, and building additional models to detect object categories, states and relations. For example, team GauchoAI fine-tuned a MaskFormer model [9] using the provided image dataset, showing better visual understanding capability (absolute improvement of 12% - 22% mAP for medium and large objects compared to the provided baseline system). Team Seagull built a hierarchical visual perception system, including a Mask2Former model to detect coarse object types, a ResNet model to detect fine-grained object types and states, and a heuristic method to verify object spatial relations with high accuracy. Team EMMA fine-tuned a pre-trained VinVL model [10] with the Alexa Arena dataset to improve detection accuracy. The numbers are not directly comparable to the baseline model metrics because the team has also modified the number of object detection classes. Additionally, team EMMA also showed preliminary results for sim2real transfer for object detection by benchmarking on a synthetic dataset curated from the public GQA dataset [11] showing similar performance among Alexa Arena objects as well as other objects not present in the Alexa Arena dataset. The same team has also trained a visual ambiguity detector module to efficiently ground instructions in cases where there are multiple occurrences of the referred object. The output is modeled as a sequence that first predicts the presence of ambiguity in grounding, which is then used by a downstream grounding module. Team KnowledgeBot used the baseline model to produce object masks but determines which masks to retrieve based on objects generated from their planner. Team SlugJARVIS trained a MaskFormer and a ResNet based classifier model to do both coarse and fine-grained object detection, showing a high accuracy of 93% on fine-grained object classification. They also coupled an object state detector with an object relation detector to identify object states and spatial relationships between them. Across the teams, visual grounding is performed using heuristics, or through efficient integration of vision language models. Since visual grounding relies on language input, teams have proposed highly interlinked language and visual grounding modules.
A common challenge in object grounding arises when there are multiple instances of the same object type. Sometimes there are some details in user utterances that can help to disambiguate, for example, location information or object attributes (e.g. color). Most teams have built object attribute detectors based on simple rules or statistical models (e.g. K-means clustering).
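A bare-bones example of attribute-based disambiguation is sketched below: when several detections share the referred object class, pick the instance whose mean colour is closest to a small palette of named colours. Real systems use learned attribute detectors (including clustering-based ones); the palette and thresholds here are assumed purely for illustration.

```python
import numpy as np

# Tiny named-colour palette (RGB); real attribute detectors are learned, this is illustrative.
PALETTE = {"red": (200, 40, 40), "green": (40, 160, 60), "blue": (50, 80, 200)}

def pick_instance(image, masks, colour_word):
    """Among candidate instance masks, return the index whose mean colour best matches the word."""
    target = np.array(PALETTE[colour_word], dtype=float)
    best, best_dist = None, np.inf
    for i, mask in enumerate(masks):                  # mask: boolean array of shape (H, W)
        mean_rgb = image[mask].mean(axis=0)           # average colour of the masked pixels
        dist = np.linalg.norm(mean_rgb - target)
        if dist < best_dist:
            best, best_dist = i, dist
    return best

image = np.zeros((4, 4, 3)); image[:2] = (210, 35, 45); image[2:] = (45, 75, 190)
masks = [np.zeros((4, 4), bool), np.zeros((4, 4), bool)]
masks[0][:2] = True; masks[1][2:] = True
print(pick_instance(image, masks, "red"))   # -> 0 (the reddish instance)
```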
To facilitate efficient navigation and object localization, several teams maintain a symbolic scene representation, including semantic maps and scene graphs, from visual observations at each time step. The representation enables the robot to efficiently explore the virtual environment and navigate to the requested objects. Some teams also introduce a memory bank to incorporate visual memory, which is populated with beliefs about various objects in different rooms that are updated periodically during
missions. This approach provides a probability distribution of seen objects for a given location which can be used by the robot for easy navigation when user instructions allude to previously seen objects.
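The sketch below gives one possible realization of such a memory bank: per-room counts of observed object classes, normalized into a distribution over rooms when the user refers to a previously seen object. The class and structure are assumptions; only the idea of a location belief over seen objects comes from the text.

```python
from collections import defaultdict

class ObjectMemoryBank:
    """Track where object classes have been seen and answer 'where is X likely to be?'."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))   # object -> room -> count

    def observe(self, room, detected_labels):
        for label in detected_labels:
            self.counts[label][room] += 1

    def location_belief(self, label):
        """Probability distribution over rooms for a given object class."""
        rooms = self.counts.get(label, {})
        total = sum(rooms.values())
        return {room: n / total for room, n in rooms.items()} if total else {}

memory = ObjectMemoryBank()
memory.observe("robotics lab", ["spanner", "3d_printer"])
memory.observe("break room", ["mug", "fridge"])
memory.observe("robotics lab", ["spanner"])
print(memory.location_belief("spanner"))   # {'robotics lab': 1.0}
```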
### Knowledge
To efficiently assist users in completing game missions, it is important for the robot to possess enough background knowledge on the mechanism of environment state transition, for example, regarding objects and their properties, actions and their effects. Most teams maintain a collection of offline knowledge sources, including knowledge based on game missions like common action sequences, as well as more general common knowledge like object affordances and object aliases. The offline knowledge provides guidance for action prediction, visual grounding, object disambiguation, dialog management and response generation.
In addition, team SlugJARVIS also maintains and actively updates a set of online knowledge, which contains multimodal information from vision, text, and executed actions. They propose a progressive and evolving task experience retrieval algorithm that can identify unseen tasks and adapt to various environments and tasks by leveraging past successful interactions.
### Dialog Management
For regular users, the experience of instructing a robot via language to complete game missions is quite different from playing the game mission by themselves, especially when they are not familiar with the robot's capabilities or the limitations of the game environment. Therefore, actively providing appropriate feedback becomes critical for building user trust and delivering an engaging user experience. Most teams propose a multitude of template-based dialog generation modules that are easily extendable and facilitate continuous development. These modules include data structures that store dialog acts, template based dialog generation and tracking, as well as question answering based architectures for understanding user responses. To make the generated responses more natural and human-like, teams also use a variety of techniques including using large language models (LLM) to generate diverse response templates, and adding emotional prosody to the speech.
To further simplify users' efforts in completing game missions, several teams propose strategies for proactively suggesting next actions based on the current game state. Note that the robot cannot directly access the game mission description; it has to infer the next proper actions based on the dialog history and previous executed actions. For example, team GauchoAI makes suggestions based on objects recently interacted with and their affordance, e.g., when the robot approaches the microwave with a heatable object in hand, it is likely the user wants to heat the object.
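A minimal sketch of template-based response generation driven by dialog acts is given below. The dialog acts and surface templates are invented for illustration; as noted above, the actual systems keep larger curated template sets and additionally use LLM-generated paraphrases and emotional prosody.

```python
import random

# Invented dialog acts and surface templates, purely for illustration.
TEMPLATES = {
    "confirm_action": ["Sure, I will {action} the {object}.",
                       "Okay, let me {action} the {object}."],
    "clarify_object": ["I can see more than one {object}. Which one do you mean?",
                       "There are several {object}s here. Do you mean the one near the {landmark}?"],
    "suggest_next":   ["I am holding the {object}. Should I heat it in the microwave?"],
}

def respond(dialog_act, **slots):
    """Pick a surface template for the dialog act and fill in its slots."""
    template = random.choice(TEMPLATES[dialog_act])
    return template.format(**slots)   # extra, unused slots are simply ignored

print(respond("confirm_action", action="toggle", object="3-D printer"))
print(respond("clarify_object", object="mug", landmark="sink"))
```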
### Training and Data Generation
Utilizing the provided Alexa Arena simulator, baseline model, and trajectory and vision datasets, several teams have managed to generate more synthetic data to further enhance their model training. These include multimodal vision and language datasets as well as language-based task decomposition and coreference resolution datasets. For example, team ScottyBot uses template-based synthetic language-actions data to train a BART model [12] for action sequence prediction from user utterances. To handle ASR errors, team SlugJARVIS employs LLMs to generate user utterances with simulated ASR errors for action prediction. Other examples include generating multimodal vision and language data, as well as language based coreference resolution data. In addition to generating these datasets, teams build dedicated annotation systems to create and refine these datasets either using offline approaches or by leveraging online user interaction data.
## 5 SimBot Performance: Results and Analysis
Building on the success of the SocialBot and TaskBot challenges, the users' explicit ratings and feedback were used to evaluate the SimBots. Additionally, a task-oriented measure known as the mission success rate was introduced, allowing for a direct way to evaluate the SimBots' effectiveness in accomplishing the tasks within a game mission. Furthermore, new unseen game missions were introduced during the competition to evaluate the SimBots' generalizability. In this section, we
provide various metrics to evaluate the performance of the SimBots in the first year of the competition, including comparisons between the Finalists, all SimBots, and our baseline system.
### Satisfaction Ratings
The primary mechanism of evaluating the SimBots was capture of explicit user satisfaction ratings. After each interaction, Alexa users were asked to rate their interaction with the SimBot on a scale of 1-5, according to the prompt, "How would you rate your interaction with the robot?". It's important to note that the SimBot rating prompt differed from the prompt used in the SocialBot competition ("How do you feel about speaking with this SocialBot again?") and the Taskbot competition ("How helpful was this TaskBot in assisting you?"), and thus, the ratings should not be directly compared between the different competitions. As shown in Figure 6, the Finalists improved their rolling seven-day average ratings by \(30\%\) (from 3.0 to 3.9) over the span of 22 weeks in the competition. The cumulative average rating across all teams also experienced an increase of \(3\%\), progressing from 3.6 to 3.7 throughout the competition.
### Task Completion Metrics
In addition to the satisfaction ratings, the Mission Success Rate (MSR) was introduced as a task oriented metric in Week 13. The mission success rate for each team was calculated by dividing the number of successful missions by the total number of missions played by that team. As shown in Figure 7, the Finalists improved their rolling seven-day average MSR by \(4\%\) (from \(49\%\) to \(52\%\)) over 8 weeks of the competition. The cumulative average MSR across all teams also increased by \(8\%\) during the course of the competition, from \(41\%\) to \(49\%\).
In Week 17 of the competition, we introduced five new game missions that the SimBots had not previously seen. To successfully complete the unseen missions, the SimBots had to complete new actions and interact with new objects. Table 2 presents the results comparing the seen and unseen missions. On unseen missions, the MSR for Finalist teams improved by \(2\%\) from \(53\%\) to \(55\%\) and all 10 university teams improved by \(2\%\) from \(45\%\) to \(47\%\). The baseline system exhibited an improvement of \(10\%\) from \(45\%\) to \(55\%\) on the unseen missions.
Notably, a high correlation between customer satisfaction (CSAT) and MSR was observed across all teams. During the Semifinals, there was a correlation of 0.92 (Pearson's correlation) between CSAT and MSR across all 10 university teams, highlighting the strong relationship between user satisfaction and task completion.

Figure 6: Rolling 7-Day Average Rating of User Satisfaction over the period of the competition for all SimBots (Blue), Finalists (Green), the progression of the cumulative ratings for all SimBots excluding Baseline (Orange), and the Baseline (Gray). The dashed green and blue lines indicate weeks with missing data.
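The correlation itself is a standard Pearson computation over the ten per-team (CSAT, MSR) pairs. The snippet below shows the calculation; the numbers are made up solely for illustration and are not the real leaderboard values.

```python
import numpy as np

# Made-up per-team averages, purely to illustrate the computation (not the real leaderboard).
csat = np.array([3.2, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3])
msr  = np.array([0.35, 0.40, 0.42, 0.45, 0.47, 0.50, 0.52, 0.55, 0.58, 0.60])

pearson_r = np.corrcoef(csat, msr)[0, 1]
print(f"Pearson correlation between CSAT and MSR: {pearson_r:.2f}")
```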
## 6 Discussion and Conclusions
While substantial advances have been made in the application of AI to create compelling and useful conversational assistants, significant challenges remain in advancing from the digital domain to create embodied conversational agents that can navigate the real world, manipulate objects, and complete tasks. The SimBot Challenge enabled university teams from around the world to compete to create effective and usable embodied conversational AI agents that were able to operate in a simulated environment that was fielded to Alexa users. This was the first edition of the SimBot Challenge, and developing a competition in embodied AI that could be used by Alexa users on multimodal devices was a very challenging mission. In addition to providing the infrastructure for creating robust interactive spoken dialog systems, we also had to design and build the Alexa Arena simulation environment, develop compelling game missions for everyday users, and ensure support for capturing the robot's first-person view and applying computer vision models. Teams' ratings and mission completion rates improved steadily across the course of the competition, and teams were able to create approaches that generalized to unseen objects and tasks. The collaborative efforts of the Alexa Prize team and the participating university teams have laid a solid foundation for expanding the Alexa Prize SimBot Challenge, driving advancements in embodied conversational AI, and extending the possibilities for Alexa users in the future.
#### Acknowledgments
We would like to thank all the university students and their advisors (Alexa Prize SimBot Teams) who participated in the competition. We thank Amazon leadership and Alexa principals within the Alexa Natural Understanding (NU) organization for their vision and support through this entire program;
| MSR | Seen Missions | Unseen Missions | Variance |
| --- | --- | --- | --- |
| All 10 teams | 45% | 47% | 2% |
| Finalist teams | 53% | 55% | 2% |
| Baseline System | 45% | 55% | 10% |

Table 2: MSR comparison for seen and unseen missions for all 10 teams, finalist teams, and our baseline system.
Figure 7: Rolling 7-Day Average MSR over the period of the competition for all SimBots (Blue), Finalists (Green), the progression of MSR for all SimBots excluding Baseline (Orange), and the Baseline (Gray).
Marketing for helping drive the right messaging and traffic to the Alexa Prize SimBot Challenge, ensuring that the participating teams received real-world feedback for their research; and Alexa Engineering for the work on enabling the Alexa Prize SimBot skills. We are grateful to the Alexa Developer Experience and Customer Trust (ADECT) Gadgets team for the many certification requests they worked on quickly to certify the university bots. We'd also like to thank the NU-Customer Experience team for exemplifying customer obsession by providing teams with critical inputs on building the best user experiences. We thank our leaders who took the time to virtually visit the university teams, learning from the teams and probing them to help them improve their designs. The competition would not have been possible without the support of all Alexa organizations including Speech, NLU, Data Services, and Conversation Modeling leadership. And finally, we would like to thank Alexa users who engaged in many interactions with the new Alexa Prize SimBot Challenge and provided feedback that helped teams improve over the course of the year.
|
2301.09110 | Responses of quark-antiquark interaction and heavy quark dynamics to
magnetic field | We investigate the impact of the magnetic field generated by colliding nuclei
on heavy quark-antiquark interactions and heavy quark dynamics in the
quark-gluon plasma (QGP). By means of hard-thermal-loop resummation technique
combined with dimension-two gluon condensates, the static heavy quark potential
and heavy quark momentum diffusion coefficient, which incorporate both
perturbative and non-perturbative interactions between heavy quarks and the QGP
medium, are computed beyond the lowest Landau level approximation. We find that
the imaginary part of the heavy quark potential in the magnetic field exhibits
significant anisotropy. Specifically, the absolute value of the imaginary part
is larger when the quark-antiquark separation is aligned perpendicular to the
magnetic field direction, compared to when it is aligned parallel to the
magnetic field direction. The heavy quark momentum diffusion coefficient in the
magnetized QGP medium also becomes anisotropic. As the temperature rises, the
influence of higher Landau levels becomes increasingly significant, resulting
in a decrease in the anisotropy ratio of the heavy quark momentum diffusion
coefficient to values even below 1. At sufficiently high temperatures, this
ratio ultimately approaches 1. The non-perturbative interactions are
indispensable for understanding heavy quark dynamics in the low-temperature
region. We also study the response of viscous quark matter to the magnetic
field and explore its implications for heavy quark potential, thermal decay
widths of quarkonium states, as well as heavy quark momentum diffusion
coefficient. | He-Xia Zhang, Enke Wang | 2023-01-22T12:15:11Z | http://arxiv.org/abs/2301.09110v2 | # Responses of quark-antiquark interaction and heavy quark dynamics to magnetic field
###### Abstract
We study how the magnetic field induced by colliding nuclei influences both the heavy quark (HQ) potential and the HQ momentum diffusion coefficients beyond the lowest Landau level approximation. By means of the real-time hard-thermal-loop resummation technique combined with a dimension-two gluon condensate, we obtain an effective gluon propagator that captures both the perturbative and nonperturbative QCD behavior at finite temperature and magnetic field. We find that the HQ momentum diffusion coefficients in the magnetized QCD medium become anisotropic; with increasing temperature, the higher Landau levels become significant, which reduces the anisotropy ratio (\(>\) 1) and eventually reverses the behavior (\(<\) 1) at high temperature. The nonperturbative (perturbative) contribution to the HQ momentum diffusion coefficient dominates at low (high) temperature. The anisotropy of the real part of the potential is essentially encoded in the angle-dependent Coulomb coupling and string tension, whereas the imaginary part of the potential from the quark loop displays a significant anisotropy even when the angle-independent Coulomb coupling and string tension are used. Furthermore, we study the magnetic response of viscous quark matter, which is manifested in the non-equilibrium distribution functions of (anti-)quarks obtained by solving the Boltzmann equation within the relaxation time approximation. We find that the anisotropy ratio is almost insensitive to the magnetized bulk viscosity, although the HQ momentum diffusion coefficient and the HQ potential themselves change visibly.
## I Introduction
In the past two decades, the heavy-ion collision (HIC) experiments at the Relativistic Heavy Ion Collider (RHIC) and at the Large Hadron Collider (LHC) have provided convincing evidence that a deconfined state of quarks and gluons, the quark-gluon plasma (QGP) predicted by quantum chromodynamics (QCD), can exist[1; 2; 3]. A strong transient magnetic field can also be generated in the direction perpendicular to the reaction plane due to the relativistic motion of the colliding heavy ions. The estimated value of the field strength in the early stage (\(<0.5\) fm) can reach \(eB\sim m_{\pi}^{2}\) in Au+Au collisions at RHIC energies and \(eB\sim 15m_{\pi}^{2}\) in Pb+Pb collisions at LHC energies[4; 5; 6; 7; 8; 9], where \(m_{\pi}\) is the pion mass. The existence of such an intense magnetic field in HICs opens a new line of investigation and induces novel quantum transport phenomena such as the chiral magnetic effect (CME)[9; 10; 11]. Since the unique feature of the CME is charge separation along the magnetic field direction, it can be measured with charge-dependent particle correlators in experiments[12; 13]. However, the contribution of the electromagnetic field to the measured charge-dependent correlations is hard to extract due to large background sources such as flow fluctuations and local charge conservation[14]. Simpler and cleaner observables with direct sensitivity to the magnetic field are needed to calibrate its strength and lifetime.
Heavy quarks (charm and bottom), which are produced early in the collision via hard scattering processes and are difficult to thermalize, are excellent candidates to characterize the properties of the medium they traverse[15; 16]. Since the formation times of heavy quarks (HQs) are comparable to the time scale of the maximum electromagnetic field, the propagation of heavy quarks in the QCD medium can be significantly affected by the magnetic field. The difference between the directed flow \(v_{1}\) of open charm mesons \(D^{0}\) and \(\bar{D}^{0}\), which arises from the competition between the Faraday effect and the Hall effect induced by the decreasing magnetic field,
can be a direct probe of the initial electromagnetic (EM) field created in HICs. Theoretical predictions based on the Langevin transport equation combined with relativistic hydrodynamics [17; 18] have indicated that the \(v_{1}\) of open charm mesons is larger than the \(v_{1}\) of charged light hadrons and that \(\Delta v_{1}=v_{1}(D^{0})-v_{1}(\bar{D}^{0})\) is nonzero, which has also been confirmed by the STAR experimental result at RHIC[19]. However, at LHC energies, Langevin transport model calculations coupled to a hydrodynamic model [17] with a constant electrical conductivity extracted from lattice QCD calculations[20; 21] give a qualitative behavior of \(\Delta v_{1}\) opposite to the experimental result[22]. Recently, the authors of Ref.[23] adopted the EM field evolution model of Ref.[24], instead of directly solving the Maxwell equations with a constant electric conductivity, to enhance the effect of the Lorentz force relative to the Coulomb force, and subsequently obtained qualitative results consistent with the experimental result.
2307.04962 | Intrinsically motivated graph exploration using network theories of
human curiosity | Intrinsically motivated exploration has proven useful for reinforcement
learning, even without additional extrinsic rewards. When the environment is
naturally represented as a graph, how to guide exploration best remains an open
question. In this work, we propose a novel approach for exploring
graph-structured data motivated by two theories of human curiosity: the
information gap theory and the compression progress theory. The theories view
curiosity as an intrinsic motivation to optimize for topological features of
subgraphs induced by nodes visited in the environment. We use these proposed
features as rewards for graph neural-network-based reinforcement learning. On
multiple classes of synthetically generated graphs, we find that trained agents
generalize to longer exploratory walks and larger environments than are seen
during training. Our method computes more efficiently than the greedy
evaluation of the relevant topological properties. The proposed intrinsic
motivations bear particular relevance for recommender systems. We demonstrate
that next-node recommendations considering curiosity are more predictive of
human choices than PageRank centrality in several real-world graph
environments. | Shubhankar P. Patankar, Mathieu Ouellet, Juan Cervino, Alejandro Ribeiro, Kieran A. Murphy, Dani S. Bassett | 2023-07-11T01:52:08Z | http://arxiv.org/abs/2307.04962v4 | # Intrinsically Motivated Graph Exploration Using Network Theories of Human Curiosity
###### Abstract
Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how to guide exploration best remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment. We use these proposed features as rewards for graph neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than are seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia.
## 1 Introduction
Providing a task-agnostic incentive for exploration as an intrinsic reward has proven useful in a variety of reinforcement learning settings, even in the absence of any task-specific (extrinsic) rewards [1; 2]. Termed _curiosity_ in reference to the analogous drive in humans, prior formulations are based on different means of quantifying the novelty or surprisal of states encountered by an agent [3]. If states are represented as graphs, the task-agnostic motivation to explore can additionally be content-agnostic, depending only on the topological properties of the visited state subgraph. Leading theories of curiosity in humans are similarly content-agnostic, based on structural properties of a relational graph that connects atoms of knowledge without regard to their actual content [4].
Theories of curiosity attempt to describe the intrinsic motivations that underlie human decision-making when acquiring information through exploration. The _information gap theory_ (IGT) argues that curiosity collects knowledge that regulates gaps in our understanding of the world [5]. Exposure to a small amount of novel information pushes an individual's uncertainty about the environment past an acceptable threshold, creating an information gap. Curious agents are driven to resolve the discrepancy by acquiring information to close the gap [6; 7]. An alternative account, the _compression progress theory_ (CPT), posits that information-seeking behavior is motivated to build increasingly compressible state representations [8; 9]. Compression enables abstraction and improved
generalization by emphasizing the essential latent structures of knowledge [10; 11; 12]. Information gap theory and compression progress theory provide optimization objectives for the human exploration of graph-structured environments.
In this work, we demonstrate that network theoretic measurements of information gaps and compression progress can be meaningful exploration incentives for reinforcement learning (RL). We train agents that use graph neural networks (GNN) to explore graph-structured data while optimizing for gap creation and improved compression (Figure 1). Once trained, the agents navigate network structures to optimize certain topological features without regard to the content of the network. The agents can be used to modify statistics that are based on random walk processes on graphs. As an example, we use data of humans traversing spaces with natural graph structure--books and movies to review or Wikipedia pages to visit--to compute node centrality measures that best align with human navigation data. Our primary contributions are the following:
* We adapt intrinsic motivations for human curiosity as reward functions for reinforcement learning.
* We replace expensive reward computations with graph neural networks. Our method is computationally efficient and generalizes to shorter and longer exploratory walks and to smaller and larger environments than are seen during training.
* We demonstrate that modifying measures of node centrality with curiosity-trained agents increases alignment with human behavior in real-world graph datasets without using any domain-specific feature information.
## 2 Related work
**Human curiosity as graph exploration.** Curiosity in humans is commonly conceptualized as the intrinsic motivation to gather information from the environment [5; 13; 14]. Humans acquire information even when it is expensive to obtain [15; 16] and may have no immediate tangible utility [17; 18], suggesting that exploration is inherently valuable. Recent work has expanded the acquisitional framing of curiosity with a more general connectional account. This perspective defines curiosity as an exploratory walk on a graph. Here, curiosity entails building a growing knowledge network by acquiring informational units as nodes and their relationships as edges [4; 19]. The state of an individual's knowledge is viewed as the subgraph of the environment induced by the visited nodes [20; 21]. Under this formulation, humans explore Wikipedia via trajectories with fewer information gaps and greater network compressibility than relevant null models [21].
**Intrinsic motivations in reinforcement learning.** The need for improved exploration has led reinforcement learning to incorporate curiosity-like intrinsic motivations into its algorithmic framework [22; 23]. Exploration rewards in RL take several forms. At the core of all approaches is an inducement for the learning agent to seek novelty. Count-based approaches encourage visits to unfamiliar or infrequently visited states [24; 25; 26; 27; 28]. When the state space is large, enumerating the frequencies of visits to all possible states is prohibitively expensive. To overcome this challenge, neural density models derive uncertainty-based pseudo-counts [24; 25]. A complementary perspective emphasizes
Figure 1: **Neural network for graph exploration. The subgraph induced by the set of currently visited nodes is denoted in orange. Candidate nodes to visit at the next time step are denoted in green. We build candidate subgraphs by adding each neighbor to the already visited subgraph. The candidates are processed with a GNN to obtain Q-values, denoting their long-term potential to create or close gaps or to improve compressibility. Two example trajectories are shown: one with a high number of gaps and one with greater compressibility.**
model building and formulates curiosity rewards in terms of learning progress and surprisal [1; 9; 29; 30; 31; 32]. For instance, in the prediction error approach--alongside an extrinsic task--the agent attempts to learn a model of the environment's dynamics. Curiosity rewards are proportional to the model error when predicting transitions between states. Memory-based methods assign rewards based on how different a newly visited state is from those stored in memory [33; 34]. Instead of a prescriptive approach, parametric methods attempt to explicitly learn an intrinsic reward function [35; 36; 37; 38]. In general, improved exploration is a means to an end, with intrinsic rewards supplementing extrinsic task-specific rewards.
**Graph combinatorial optimization and reinforcement learning.** Combinatorial optimization entails selecting elements from a finite set of options such that the chosen subset satisfies an objective function [39]. Graph analyses often involve combinatorial optimization, with graph structure imposing constraints on the solution space. Recent work combines graph neural networks and reinforcement learning to construct solutions by incrementally adding nodes to a partial set [40; 41; 42]. First, a GNN constructs an embedding for the candidate solution; second, an agent, for instance, a deep Q-network (DQN), trained via RL, selects an action to expand the solution [43]. The two networks can be trained end-to-end with an optimization objective driving gradients for learning. This approach solves various graph combinatorial tasks, such as the traveling salesperson problem [43; 44; 45], finding the maximum independent set [46], or the minimum vertex cover [43; 47], and identifying isomorphic subgraphs [48]. Instead of uncovering nodes, GNNs can also sequentially collapse nodes into each other with implications for matrix multiplication [49]. GNNs, in combination with RL, have also been used to build and rewire graphs such that they possess high values of specific features of interest [50; 51].
## 3 Methods
Our goal is to train an agent to explore the environment while optimizing for a structural property of the visited subgraph. Consider a graph-structured environment \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with node set \(\mathcal{V}\) and edge set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\). Let \(\mathcal{V}_{T}=\{v_{1},v_{2},\cdots,v_{T}\}\subseteq\mathcal{V}\) be an ordered set of explored nodes at time \(T\). The corresponding subgraph trajectory is the sequence \(\mathcal{S}_{1}\subset\mathcal{S}_{2}\subset\cdots\subset\mathcal{S}_{T}\), wherein the \(t\)-th subgraph \(\mathcal{S}_{t}\) is induced by the first \(t\) visited nodes. Specifically, given the graph \(\mathcal{G}\), the number of nodes to visit \(T\), a graph feature function \(\mathcal{F}:2^{\mathcal{G}}\rightarrow\mathbb{R}\), and a discount factor \(\gamma\in[0,1]\), we seek an ordered set \(\mathcal{V}_{T}^{*}\) such that \(\sum_{t=1}^{T}\gamma^{t-1}\mathcal{F}(\mathcal{S}_{t})\) is maximal. The feature function acts as an intrinsic reward to encourage exploration. The discounting parameter determines the extent to which the future values of \(\mathcal{F}\) factor into the decision-making at every step. Drawing inspiration from human curiosity, we adopt information gap theory and compression progress theory to design two functions, \(\mathcal{F}_{IGT}\) and \(\mathcal{F}_{CPT}\).
### Network theories of curiosity
**Information gap theory** views curiosity as an intrinsic motivation to regulate gaps in knowledge. For humans, new information pushes the level of uncertainty about the environment past an acceptable threshold, creating an uncertainty gap. Curiosity seeks to find information units to close this gap. By modeling the state of knowledge as a graph, we can characterize information gaps as topological cavities. In a graph, cavities can take several forms: dimension \(0\) cavities represent disconnected network components, whereas those of dimension \(1\) represent non-triangular loops of edges (Figure 2A). In order to identify and count topological cavities, a graph is first converted into a higher-order relational object known as a _simplicial complex_[52]. A simplicial complex is comprised of simplices. Geometrically, a \(d\)-simplex is a shape with flat sides formed by connecting \(d+1\) points. For \(0\leq d\leq 2\), by definition a node is a \(0\)-simplex, an edge is a \(1\)-simplex, and a filled triangle is a \(2\)-simplex. We can construct a simplicial complex by assigning a \(d\)-simplex to each \((d+1)\)-clique in a binary graph. In a simplicial complex, a \(d\)-dimensional topological cavity is identified as an enclosure formed by \(d\)-simplices that a higher-dimensional simplex cannot fill. We refer the reader to Refs. [53; 54; 55; 56; 57] for a more comprehensive treatment of algebraic topology.
Given a simplicial complex, the \(d\)-th _Betti number_\(\beta_{d}\) counts the number of topological gaps of dimension \(d\). Prior work examining human knowledge-network-building finds compelling evidence in support of information gap theory when gaps are conceptualized as \(1\)-dimensional cavities [21]. In this work, at each time step \(t\) with a visited subgraph \(\mathcal{S}_{t}\), we assign rewards equal to \(\beta_{1}\),
\[\mathcal{F}_{IGT}=\beta_{1}(\mathcal{S}_{t}). \tag{1}\]
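As an illustration, the reward in (1) can be evaluated by building the clique (flag) complex of the visited subgraph and reading off its first Betti number. The sketch below uses networkx and the gudhi library; the library choice, the helper name, and the assumption of integer node labels are ours and are not prescribed by the method itself.

```python
import networkx as nx
import gudhi


def igt_reward(subgraph: nx.Graph) -> int:
    """Return beta_1 of the clique complex induced by the visited subgraph."""
    st = gudhi.SimplexTree()
    for node in subgraph.nodes:          # assumes integer node labels
        st.insert([node])                # 0-simplices
    for u, v in subgraph.edges:
        st.insert([u, v])                # 1-simplices
    st.expansion(2)                      # fill every 3-clique with a 2-simplex
    st.compute_persistence()             # required before reading Betti numbers
    betti = st.betti_numbers()           # [beta_0, beta_1, ...]
    return betti[1] if len(betti) > 1 else 0
```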
**Compression progress theory** posits that curiosity is a drive to compress the state of knowledge [8]. During graph exploration, at each step \(t\) in the trajectory, the compression reward can be assigned as network compressibility [58]. Consider a subgraph \(\mathcal{S}_{t}\) with \(t\) nodes and \(q\) edges, represented by a symmetric adjacency matrix \(M\in\mathbb{R}^{t\times t}\). Information about the subgraph's structure can be encoded in the form of a random walk \(\mathbf{x}=(x_{1},\ x_{2},\ \dots\,)\). The walk sequence is generated by randomly transitioning from a node to one of its neighbors. Thus, for a random walk on \(\mathcal{S}_{t}\), the probability of transitioning from node \(i\) to node \(j\) is \(P_{ij}=M_{ij}/\sum_{j}M_{ij}\). Since the walk is Markovian, its information content (or its _entropy_) is given by \(H=-\sum_{i}\pi_{i}\sum_{j}P_{ij}\log P_{ij}\). Here, \(\pi_{i}\) is the stationary distribution representing the long-term probability that the walk arrives at node \(i\), given by \(\pi_{i}=\sum_{j}M_{ij}/2q\).
Assigning nodes to clusters leads to a coarse-grained sequence \(\mathbf{y}=(y_{1},\ y_{2},\ \dots\,)\). The number of clusters \(n\) can be used to define a scale of the network's description \(s=1-\frac{n-1}{t}\). For example, when \(n=t\), the network is described at a fine-grained scale \(s=1/t\); at the other extreme, when \(n=1\) the network is described at the coarsest scale \(s=1\). At every description scale in between, it is possible to identify a clustering of nodes that minimizes the information rate (Figure 2B). After computing these optimal clusterings across all scales, we arrive at a rate-distortion curve \(R(s)\), representing a bound on the information rate as a function of the scale \(s\). The compressibility \(C\) of the network is then given as the average reduction in the information rate across all scales [58], \(C=H-\frac{1}{t}\sum_{s}R(s)\). Therefore, the compression reward is
\[\mathcal{F}_{CPT}=C(\mathcal{S}_{t}), \tag{2}\]
where \(C(\mathcal{S}_{t})\) denotes the compressibility of subgraph \(\mathcal{S}_{t}\).
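The entropy term \(H\) entering the compressibility is easy to compute directly from the adjacency matrix; the rate-distortion curve over all clusterings requires the per-scale optimization of Ref. [58] and is only stubbed out below. A minimal sketch, assuming an undirected subgraph with no isolated nodes and using natural logarithms (the choice of base only rescales the result):

```python
import numpy as np


def random_walk_entropy(M: np.ndarray) -> float:
    """Entropy H of the stationary random walk on a subgraph with adjacency M."""
    degrees = M.sum(axis=1)
    P = M / degrees[:, None]                   # transition matrix P_ij
    pi = degrees / degrees.sum()               # stationary distribution
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())


def compressibility(M: np.ndarray, rate_distortion_curve) -> float:
    """C = H - (1/t) * sum_s R(s); rate_distortion_curve must implement the
    per-scale clustering optimization of Ref. [58] (not reproduced here)."""
    H = random_walk_entropy(M)
    R = rate_distortion_curve(M)               # array of R(s) over all scales
    return H - float(np.mean(R))
```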
### Reinforcement learning for graph exploration
We formulate the graph exploration problem as a Markov decision process (MDP) [59]:
* **States**: The state is defined as the subgraph induced by the visited nodes at time \(t\), \(\mathcal{S}_{t}=\mathcal{G}[\mathcal{V}_{t}]\). We specify the initial state \(\mathcal{S}_{1}\) by randomly selecting a starting node \(v_{1}\in\mathcal{V}\). Each state represents a partial solution to the broader sequential exploration task.
* **Actions**: The agent can transition to any neighbor of the most recently visited node. We denote the neighborhood of a node \(v\) as \(\mathcal{N}(v)=\{u\in\mathcal{V}\ |\ (v,u)\in\mathcal{E}\}\). Therefore, given the state at time \(t\), the set of available next nodes is \(\mathcal{A}(\mathcal{S}_{t})=\mathcal{N}(v_{t})\setminus\mathcal{V}_{t}\). If no nodes are available in the immediate neighborhood, we expand the action set to include all neighbors of the explored subgraph.
* **Transitions**: Given the pair \(\mathcal{S}_{t}\) and \(v\in\mathcal{A}(\mathcal{S}_{t})\), the transition to state \(\mathcal{S}_{t+1}\) is deterministic with \(P\left(S_{t+1}\ |\ S_{t},v\right)=1\).
* **Rewards**: The reward at time \(t\) is defined as \(R_{t}=\mathcal{F}(\mathcal{S}_{t})\). We train RL agents using either \(\mathcal{F}_{IGT}\) or \(\mathcal{F}_{CPT}\) as the reward function.
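A minimal environment realizing this MDP might look as follows; the class and method names are illustrative, and `reward_fn` stands for either \(\mathcal{F}_{IGT}\) or \(\mathcal{F}_{CPT}\) from above.

```python
import random
import networkx as nx


class GraphExplorationEnv:
    """Sequential node-visiting environment on a fixed graph G."""

    def __init__(self, G: nx.Graph, reward_fn):
        self.G, self.reward_fn = G, reward_fn

    def reset(self):
        self.visited = [random.choice(list(self.G.nodes))]
        return self.G.subgraph(self.visited)

    def actions(self):
        last = self.visited[-1]
        frontier = set(self.G.neighbors(last)) - set(self.visited)
        if not frontier:  # fall back to all neighbors of the explored subgraph
            frontier = {u for v in self.visited
                        for u in self.G.neighbors(v)} - set(self.visited)
        return list(frontier)

    def step(self, node):
        self.visited.append(node)               # deterministic transition
        state = self.G.subgraph(self.visited)
        return state, self.reward_fn(state)
```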
The policy \(\pi(v\ |\ \mathcal{S}_{t})\) maps states to actions, fully describing the agent's behavior in the environment. At each step, the agent makes decisions using a value function \(Q(\mathcal{S}_{t},v)\), which evaluates candidate
Figure 2: **Quantifying network theories of human curiosity.**_(A)_ Gaps or cavities in a graph can be formalized using algebraic topology. A \(1\)-dimensional cavity is a non-triangular loop of edges. _(B)_ The information rate of a random walk \(\mathbf{x}\) on a graph is given by its entropy. If we cluster the nodes, the walk sequence \(\mathbf{x}\) is compressed into a new sequence \(\mathbf{y}\), where \(y\) is the cluster that contains node \(x\). The new sequence has a lower information rate than the original sequence. The number of clusters defines the scale at which the network is described. We can find an optimal clustering at every scale of description that maximally lowers the information rate. These values can be recorded in a rate-distortion curve. Network compressibility is the maximal reduction in the information rate, averaged across all scales. Graphically, this value represents the area above the rate-distortion curve bounded by the entropy of the unclustered random walk.
nodes \(v\in\mathcal{A}(\mathcal{S}_{t})\) in the context of the currently explored subgraph. The function measures the total (discounted) reward that is expected to accumulate if the agent selects action \(v\) in state \(\mathcal{S}_{t}\) and thereafter follows policy \(\pi\). In turn, the policy can be viewed as behaving greedily with respect to the value function, \(\pi=\arg\max_{v\in\mathcal{A}(\mathcal{S}_{t})}Q\left(\mathcal{S}_{t},v\right)\). Solving an MDP entails finding an optimal policy that maximizes the expected discounted sum of rewards.
We parameterize the value function \(Q\) using a GNN \(\Phi(\cdot):\mathcal{G}\rightarrow\mathbb{R}\). GNNs build vector embeddings for nodes by iteratively aggregating their features with those from their local neighborhoods [60]. Each aggregation step is typically followed by a fully connected layer and a non-linear activation function. Depending on the number of rounds of aggregation, features from more distant locations in the graph can inform the embedding for each node. Specifically, we use the _GraphSAGE_ architecture [61], where at the \(l\)-th round of feature aggregation, the embedding for node \(u\) is given as,
\[h_{u}^{(l)}=f^{(l)}\left(h_{u}^{(l-1)},h_{\mathcal{N}(u)}^{(l-1)}\right)=g \left[\theta_{C}^{(l)}h_{u}^{(l-1)}+\theta_{A}^{(l)}\tilde{A}\left(h_{ \mathcal{N}(u)}^{(l-1)}\right)\right], \tag{3}\]
where \(\tilde{A}\) represents the aggregation operator, \(g\left[.\right]\) is the activation function, and \(\theta_{C}\) and \(\theta_{A}\) are parameters for combination and aggregation, respectively [42, 61]. We use the local degree profile (LDP) of each node as the initial set of features [62]. LDP comprises various features of a node's neighborhood, including its own degree, the minimum and maximum degrees of its neighbors, and the average and standard deviation of the degrees of its neighbors.
We train GNNs for exploration using the DQN algorithm, with a replay buffer for experience sampling, a target network, and a decaying \(\epsilon\)-greedy exploration rate [63]. Details of the full neural network architecture and the training process are included in the Supplementary Materials.
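A compact version of the value network can be written with PyTorch Geometric's SAGEConv layers; the depth, hidden width, and mean-pooling readout below are our assumptions rather than the exact architecture, which is specified in the Supplementary Materials.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_mean_pool


class QNetwork(torch.nn.Module):
    """Scores a candidate subgraph (visited nodes plus one candidate node)."""

    def __init__(self, in_dim: int = 5, hidden: int = 64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.readout = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        # x holds the 5-dimensional local degree profile of each node
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.readout(global_mean_pool(h, batch))  # one Q-value per graph
```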
### Curiosity-biased node centrality
Several graph theoretic quantities can be defined in terms of random walk processes on a graph. We can use agents trained to explore graphs to bias random walk processes and, by extension, the corresponding quantities. PageRank is a widely recognized algorithm that assigns node centrality scores to graph data [64, 65, 66, 67]. The per-node score \(\eta\) can be interpreted as the stationary distribution of a random walk process on a network. With probability \(\alpha\), a random walker moves along an edge from node \(v_{i}\) to one of its neighbors. The probability of reaching a connected node \(v_{j}\) is \(P_{ij}\). Alternatively, with probability \(1-\alpha\), the walker jumps, or teleports, to a random node in the network. The probability of jumping to node \(v_{k}\) is \(q_{k}\). Under conditions of irreducibility and aperiodicity [68], the stationary distribution is given as
\[\sum_{i}\left(\delta_{ij}-\alpha P_{ij}\right)\eta_{i}=(1-\alpha)q_{j}. \tag{4}\]
The PageRank algorithm follows a random walk that is entirely Markovian. Typically, the probability \(P_{ij}\) depends solely on the out-degree of \(v_{i}\) and, in the case of node-weighting, on the personalization vector \(q\). Personalized PageRank biases the random walk process using \(q_{k}\) by taking into account nodes that are already visited in the network [69].
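For reference, both the standard and the personalized variants of Eq. (4) are available directly in networkx; a brief sketch (the graph, the visited set, and the damping factor are placeholders):

```python
import networkx as nx

G = nx.karate_club_graph()

# Standard PageRank with damping factor alpha
eta = nx.pagerank(G, alpha=0.85)

# Personalized PageRank: teleportation restricted to recently visited nodes
visited = {0, 1, 2}
q = {v: (1.0 / len(visited) if v in visited else 0.0) for v in G.nodes}
eta_personalized = nx.pagerank(G, alpha=0.85, personalization=q)
```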
We can integrate agents trained to optimize for the exploration objectives described earlier into the PageRank algorithm. Specifically, given an already visited subgraph, we propose to modify transition probabilities using Q-values assigned to candidate nodes. Consider a non-Markovian random walker sitting at node \(v_{l}\) with a path history \(V_{l}=\{v_{1},\cdots,v_{l-1},v_{l}\}\). The visited nodes in the path induce a corresponding subgraph \(\mathcal{S}_{l}\). Paths are built starting from the most recent initialization or teleportation event. We use a Q-value function trained to optimize for an objective \(\mathcal{F}\) to bias the walker. The transition probability from node \(v_{l}\) to node \(v_{m}\) can be re-defined as,
\[P_{lm}^{\mathcal{F}}(\mathcal{S}_{l})\equiv\begin{cases}\frac{(1-p_{g})\,p_{g}^{rank(Q(\mathcal{S}_{l},v_{m}))-1}}{1-p_{g}^{|\mathcal{A}(\mathcal{S}_{l})|}},&v_{m}\in\mathcal{A}(\mathcal{S}_{l}),\\ 0,&\text{otherwise},\end{cases} \tag{5}\]
where \(rank(Q(\mathcal{S}_{l},v_{m}))\) is the rank of the Q-value for action \(v_{m}\) and \(p_{g}\in[0,1]\) is a parameter that controls how likely the walker is to select actions greedily. To compute biased per-node PageRank values, we simulate a walker using \(P_{ij}^{\mathcal{F}}(\mathcal{S}_{i})\) until probabilities converge.
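In code, Eq. (5) amounts to ranking the Q-values of the available candidates and assigning geometrically decaying, properly normalized probabilities; a sketch, with `q_values` standing in for the trained network's outputs on the current candidate set:

```python
import numpy as np


def biased_transition_probs(q_values: np.ndarray, p_g: float) -> np.ndarray:
    """Eq. (5): geometric weights over candidates ordered by Q-value rank."""
    n = len(q_values)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(-q_values)] = np.arange(1, n + 1)  # rank 1 = largest Q-value
    return (1.0 - p_g) * p_g ** (ranks - 1) / (1.0 - p_g ** n)


probs = biased_transition_probs(np.array([0.3, 1.2, -0.4]), p_g=0.5)
next_index = np.random.choice(len(probs), p=probs)
```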
## 4 Experiments
### Exploration in synthetically generated networks
We train a curiosity-based GNN agent to explore synthetically generated graph environments. Each environment is constructed to have \(N=50\) nodes. Each episode lasts for \(10\) steps and, therefore, consists of visits to \(10\) distinct nodes. We examine four synthetic graph models that exhibit a broad range of degree profiles and topologies [70; 71]:
* Erdos-Renyi (ER): The ER model produces random graphs by adding edges between nodes with probability \(p\). We set \(p=0.2\).
* Barabasi-Albert (BA): Starting with a randomly connected skeleton of \(m\) nodes, the BA model, also known as the preferential attachment model, adds nodes sequentially. Each new node is connected to \(m\) existing nodes with a probability proportional to node degree. This "rich-gets-richer" growth scheme results in graphs with heavy-tailed degree distributions. We set \(m=4\).
* Random geometric: Graph-structured environments, such as transportation networks or power grids, are embedded in physical space. Random geometric graphs model such environments by placing nodes within a unit cube of specified dimensionality. The model places nodes uniformly at random inside the cube. An edge connects a pair of nodes if the distance between the nodes is less than or equal to a radius value. For a \(2\)-dimensional space, we set the radius value to \(0.25\).
* Watts-Strogatz (WS): Many real-world networks possess a "small-world" topology, whereby distant nodes can be reached by a small number of hops from any node in the graph. The WS model creates graphs with a small-world topology by creating a ring graph and adding edges from each node to its \(k\) nearest neighbors. Each edge is then rewired at random with probability \(p\). We set \(k=4\) and \(p=0.1\).
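All four graph families can be instantiated directly with networkx using the parameters listed above; a sketch of the environment construction (the seeds and the number of sampled environments are illustrative):

```python
import networkx as nx

N = 50  # nodes per environment

graph_generators = {
    "ER": lambda seed: nx.erdos_renyi_graph(N, p=0.2, seed=seed),
    "BA": lambda seed: nx.barabasi_albert_graph(N, m=4, seed=seed),
    "RG": lambda seed: nx.random_geometric_graph(N, radius=0.25, dim=2, seed=seed),
    "WS": lambda seed: nx.watts_strogatz_graph(N, k=4, p=0.1, seed=seed),
}

environments = {name: [gen(seed) for seed in range(10)]
                for name, gen in graph_generators.items()}
```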
For each of the four graph models, we build \(100\) training, \(10\) validation, and \(10\) testing environments. After training, we evaluate the GNN agent in the testing environments against four baseline approaches:
* Random: Select a candidate node at random.
* Greedy: For each candidate node, build a candidate state subgraph. Evaluate the reward function for each subgraph and select the node that results in the biggest one-step improvement.
* Max Degree: Select the candidate node with the largest degree.
* Min Degree: Select the candidate node with the smallest degree.
The total average reward gathered by the different agents is presented in Table 1. For the IGT reward, in all graph models except for ER, the GNN outperforms the greedy agent. By contrast, the one-step-ahead greedy agent consistently performs best for CPT, with the GNN a close second.
Baseline approaches broadly perform better relative to the GNN for CPT than they do for IGT. When exploring a graph with the IGT objective, adding a single node can close several topological gaps simultaneously, requiring careful consideration of options. By contrast, compressibility is less sensitive to the choice of node at each step due to its strong correlation with the clustering coefficient
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \(\mathcal{F}\) & \(\mathcal{G}\) & Random & Max Degree & Min Degree & Greedy & GNN \\ \hline IGT & RG & \(0.312_{\pm 0.034}\) & \(0.010_{\pm 0.007}\) & \(0.144_{\pm 0.027}\) & \(1.495_{\pm 0.079}\) & \(\mathbf{2.308_{\pm 0.092}}\) \\ & WS & \(1.141_{\pm 0.068}\) & \(1.048_{\pm 0.068}\) & \(1.586_{\pm 0.082}\) & \(2.707_{\pm 0.103}\) & \(\mathbf{3.303_{\pm 0.106}}\) \\ & BA & \(7.593_{\pm 0.145}\) & \(2.565_{\pm 0.083}\) & \(3.932_{\pm 0.115}\) & \(19.332_{\pm 0.206}\) & \(\mathbf{21.970_{\pm 0.169}}\) \\ & ER & \(9.197_{\pm 0.144}\) & \(9.638_{\pm 0.162}\) & \(4.953_{\pm 0.127}\) & \(\mathbf{25.20_{\pm 0.164}}\) & \(24.058_{\pm 0.183}\) \\ \hline CPT & RG & \(8.607_{\pm 0.027}\) & \(8.928_{\pm 0.027}\) & \(7.864_{\pm 0.033}\) & \(\mathbf{9.615_{\pm 0.014}}\) & \(9.271_{\pm 0.017}\) \\ & WS & \(7.117_{\pm 0.021}\) & \(6.788_{\pm 0.025}\) & \(6.937_{\pm 0.021}\) & \(\mathbf{7.668_{\pm 0.012}}\) & \(7.174_{\pm 0.014}\) \\ & BA & \(6.926_{\pm 0.023}\) & \(8.526_{\pm 0.015}\) & \(5.899_{\pm 0.016}\) & \(\mathbf{8.669_{\pm 0.016}}\) & \(8.556_{\pm 0.010}\) \\ & ER & \(6.767_{\pm 0.020}\) & \(6.931_{\pm 0.019}\) & \(6.022_{\pm 0.017}\) & \(\mathbf{8.262_{\pm 0.016}}\) & \(7.880_{\pm 0.015}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of GNN-based agent using information gap theory (IGT) and compression progress theory (CPT) compared to four baseline methods (random, max degree, min degree, greedy). We compare results using the total average return gathered by agents in four types of synthetic graph environments (random geometric - RG, Watts-Strogatz - WS, Barabási-Albert - BA, Erdős-Rényi - ER).
[58]. If exploring inside a cluster, neighbors of a node are likely to be neighbors of each other, lowering the likelihood that a single choice will significantly alter compressibility. For instance, the max degree baseline performs well for the CPT objective in random geometric graphs because high-degree nodes are centrally placed and surrounded by dense, highly clustered neighborhoods [71]. Barabasi-Albert graphs, similarly, have highly clustered cores due to preferential attachment in their generative process [72]. Watts-Strogatz networks have high clustering when the edge rewiring probability is low. As a result, even random exploration in such topologies tends to occur inside clusters leading to greater compressibility. In support of this view, the minimum degree baseline, which is likely to select a node outside of a cluster, is typically further apart from the performance of the GNN compared to the other baselines.
#### 4.1.1 Trajectory length and environment size generalization
After training the GNN agent to explore \(10\) nodes in random geometric graph environments of \(50\) nodes, we evaluate generalization performance for shorter and longer trajectories and smaller and larger environments. We test trajectory length generalization while holding environment size fixed at \(50\) nodes. For walks shorter and longer than \(10\) steps, the GNN performs comparably to the greedy agent for both IGT and CPT (Figure 3). We test environment size generalization by taking \(10\) steps on graphs that are smaller or larger than \(50\) nodes. The GNN agent outperforms the greedy agent in smaller environments. In larger environments, the GNN is superior to the greedy agent for IGT and exhibits comparable performance for CPT. In summary, the performance of trained GNNs does not degrade for settings outside the training regime. These results indicate that we can train GNNs for graph exploration in regimes where reward computations are relatively inexpensive due to the smaller size of subgraphs and expect them to scale to longer walks and larger networks. We also report generalization results for the other graph models in the Supplementary Materials.
#### 4.1.2 Time complexity
Using graphs of different sizes, we evaluate the computational efficiency of our approach by comparing the wall time for a forward pass through the GNN with that for a greedy evaluation of the rewards. Figure 4 displays results for random geometric synthetic graphs. The time for greedy evaluation of the topological features for both IGT and CPT grows quickly with subgraph size, whereas the GNN offers a faster alternative. Comparing the rewards for the two theories of curiosity, the information gap reward is significantly cheaper to evaluate compared to network compressibility. Therefore, in addition to approximating human intrinsic motivations for exploration, we find that the GNN offers a route to efficient computation of meaningful topological features of graphs.
Figure 4: **Wall time. Wall time for a forward pass through the GNN compared to the greedy evaluation of rewards. Bands denote standard error over computations for \(50\) networks.**
Figure 3: **Trajectory length and environment size generalization. GNNs trained for graph exploration generalize to shorter and longer trajectories and to smaller and larger environments than are seen during training. We train GNNs to explore \(10\) steps for IGT and CPT in random geometric environments with \(50\) nodes. Performance does not degrade for exploratory walks of a different length in \(50\)-node environments. Similarly, when taking \(10\) steps, GNN-based agents outperform or match the greedy agent in smaller and larger environments than of size \(50\). Bands denote standard error.**
### Alignment with human navigation of graph data
Next, we evaluate the utility of curiosity-trained agents in predicting human behavior in graph-structured environments. To gather path-based information for our analyses, we use two types of real-world graph datasets. Reviews enable us to approximate consumer paths on a similarity graph of available content. We create two separate graphs, one comprising movies from the MovieLens dataset [73] and the other comprising books from the Amazon Product Reviews dataset [74; 75]. We also examine a second type of dataset consisting of user paths on Wikipedia in the Wikispeedia game environment [76; 77]. The three datasets are:
1. MovieLens: The MovieLens dataset consists of movie reviews [73]. We use IMDB user summaries and Word2Vec to construct vector embeddings for each movie. We build a graph environment by treating each movie as a node. For each movie, we use cosine similarity to add edges to the \(20\) most similar movies.
2. Amazon Books: The Amazon Product Reviews dataset encompasses reviews for diverse products [74; 75]. To narrow our focus, we specifically extract and retain reviews associated with books. We filter out books with fewer than \(150\) reviews and limit our analysis to reviewers with at least \(5\) reviews. To represent each book as a distinct entity, we use Word2Vec-based vector embeddings. For each book, we add edges by identifying the top \(20\) most similar books based on their embeddings.
3. Wikispeedia: The Wikispeedia dataset consists of paths collected for a navigation game on Wikipedia [76; 77]. In the game, users are presented with a starting article and a destination article and are tasked with reaching the destination article using hyperlinks within Wikipedia. Here, the underlying hyperlink structure of Wikipedia acts as the graph environment.
We train GNNs for graph exploration in each of the three real-world environments for both information gap theory and compression progress theory.
To incorporate person-specific data, the PageRank jump vector \(q\) is modified to be zero for all nodes except a user's \(n_{\text{burn-in}}\) most recently visited nodes [69]. We assign a uniform jump probability to the \(n_{\text{burn-in}}\) nodes, with \(q_{k}=1/n_{\text{burn-in}}\). Each graph feature function \(\mathcal{F}\) yields a PageRank vector \(\eta_{i}^{\mathcal{F}}\). We combine these vectors linearly to obtain a final PageRank vector, denoted as \(\eta^{\prime}\), such that \(\eta^{\prime}(\alpha,\tilde{\beta},\tilde{\gamma},\tilde{\delta})\equiv\tilde{\beta}\eta_{\text{PR}}(\alpha)+\tilde{\gamma}\eta_{\text{IGT}}(\alpha)+\tilde{\delta}\eta_{\text{CPT}}(\alpha)\), where \(\tilde{\beta}^{2}+\tilde{\gamma}^{2}+\tilde{\delta}^{2}=1\) and \(\eta_{\text{PR}}\) is the score vector obtained using standard PageRank. To evaluate this approach, we optimize the set of variables \(\alpha,\tilde{\beta},\tilde{\gamma},\tilde{\delta}\) using a training set of transitions. We then compare performance against unbiased PageRank, where only \(\alpha\) is optimized. Formally, we generate two sets of human transitions denoted as \(\mathbf{S}_{\text{test}}\) and \(\mathbf{S}_{\text{train}}\). These sets consist of portions of human trajectories with a length of \(n_{\text{burn-in}}+1\). Next, we perform Bayesian optimization to compute parameters \(\hat{a}\) and \(\hat{a}_{\text{bias}}\) for the two sets,
\[\hat{a} \equiv\arg\max_{\alpha}\sum_{S\in\mathbf{S}_{\text{train}}} \text{rank}_{v_{\text{burn-in}}}(\eta_{\text{PR}}(\alpha)) \tag{6}\] \[\hat{a}_{\text{bias}} \equiv\arg\max_{\alpha,\tilde{\beta},\tilde{\gamma},\tilde{ \delta}}\sum_{S\in\mathbf{S}_{\text{train}}}\text{rank}_{v_{\text{burn-in}}}( \eta^{\prime}(\alpha,\tilde{\beta},\tilde{\gamma},\tilde{\delta})). \tag{7}\]
To evaluate our method, we calculate the ratio of improvement on the test set, given as
\[r_{\mathbf{S}_{\text{test}}}\equiv\sum_{S\in\mathbf{S}_{\text{test}}}\text{rank}_{v_{\text{burn-in}}}(\eta^{\prime}(\hat{a}_{\text{bias}}))/\sum_{S\in\mathbf{S}_{\text{test}}}\text{rank}_{v_{\text{burn-in}}}(\eta_{\text{PR}}(\hat{a})). \tag{8}\]
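Given per-node score vectors from standard PageRank and from the two curiosity-biased walkers, the combination \(\eta^{\prime}\) and the evaluation in Eq. (8) reduce to a few lines; in this sketch the weights are assumed to already satisfy the unit-norm constraint, and `percentile_rank` is a helper we introduce for illustration.

```python
import numpy as np


def combined_scores(eta_pr, eta_igt, eta_cpt, beta, gamma, delta):
    """eta' = beta*eta_PR + gamma*eta_IGT + delta*eta_CPT (beta^2+gamma^2+delta^2 = 1)."""
    return beta * eta_pr + gamma * eta_igt + delta * eta_cpt


def percentile_rank(scores: np.ndarray, chosen: int) -> float:
    """Percentile rank assigned to the node actually chosen by the human."""
    return float((scores <= scores[chosen]).mean())


def improvement_ratio(ranks_biased, ranks_unbiased):
    """Eq. (8): ratio of summed percentile ranks of the true transitions."""
    return np.sum(ranks_biased) / np.sum(ranks_unbiased)
```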
Table 2 displays \(r_{\mathbf{S}_{\text{test}}}\) in percentage terms for the three datasets when considering curiosity theories alone or in combination. Across all combinations, improvement ranges from 2.9% to 32.2%,
\begin{table}
\begin{tabular}{l c c c} \hline \hline Graph dataset \(\mathcal{G}\) & IGT & CPT & IGT + CPT \\ \hline MovieLens & \(+4.2\%_{\pm 2.1\%}\) & \(+5.1\%_{\pm 1.5\%}\) & \(+\mathbf{7.9\%_{\pm 1.7\%}}\) \\ \hline Amazon Books & \(+5.4\%_{\pm 1.9\%}\) & \(+4.6\%_{\pm 1.6\%}\) & \(+\mathbf{9.6\%_{\pm 1.9\%}}\) \\ \hline Wikispeedia & \(+2.9\%_{\pm 2.9\%}\) & \(+12.2\%_{\pm 3.2\%}\) & \(+\mathbf{32.2\%_{\pm 7.7\%}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Percentage improvement (\(r_{\mathbf{S}_{\text{test}}}\)) with curiosity-biased centrality across the three datasets (MovieLens, Amazon Books, Wikispeedia).
indicating that incorporating curiosity to bias the walks is useful. Depending on the dataset, either the IGT- or the CPT-trained agent performs slightly better, with broadly similar improvement values. In the Wikispeedia data, however, CPT leads to improvement that is nearly four times higher than IGT. The books and movie datasets exhibit similarities since the selection mechanism in both is not directed towards a goal. By contrast, the Wikispeedia dataset involves goal-directed navigation.
Figure 5B shows the improvement in predicting the transitions made by humans in the Wikispeedia dataset. We compare percentile ranks for each transition made by the human when making predictions with and without biasing the random walk process. We find that biased curiosity assigns higher percentile ranks to actual transitions than standard PageRank. We also analyze the distance from the initial node with respect to time for individual random walk trajectories (Figure 5C). In general, observed differences between the biased walkers are small and fall within the standard deviation of the walk process. These observations suggest that the differences observed in the biased PageRank algorithm are not solely attributable to changes in the diffusion properties of the random walks.
## 5 Limitations
In our implementation, when computing an embedding for the state subgraph, the GNN does not distinguish candidate nodes from those already visited. Appending a one-hot vector to differentiate candidates could potentially lead to improved performance. This approach would allow the network to recognize and, therefore, prioritize candidate nodes during the decision-making process. The PageRank algorithm includes various hyperparameters that can be further fine-tuned; for instance, \(p_{g}\) or refining the distribution \(P_{ij}^{\mathcal{F}}(\mathcal{S}_{i})\) that is used to select nodes for the walker.
## 6 Discussion
We can use intrinsic motivations that underpin human curiosity to train neural networks to explore graph-structured environments with diverse topological structures. Our approach generalizes to longer exploratory walks and larger environments than are seen during training. Importantly, relying only on the structure of the visited subgraph and without any domain-specific node features, we find that our method is more predictive of human behavior than PageRank centrality for several real-world graph datasets.
Figure 5: **Re-defining centrality using agents trained for curiosity.**_(A)_ We measure curiosity-biased PageRank centrality using a set of biased walkers that explore the graph starting from a subset of already visited nodes. Biases are incorporated using GNNs trained for IGT and CPT rewards. _(B)_ Example demonstrating the improvement in predicting human transitions when using curiosity-biased versus standard PageRank. Biased curiosity assigns higher percentile ranks to actual transitions than standard PageRank. _(C)_ Random walker diffusion, measured as the distance from the initial node for each graph. A comparison is made between the unbiased (blue), IGT-biased (orange), and CPT-biased walkers (green). |
2304.03585 | ArmanTTS single-speaker Persian dataset | TTS, or text-to-speech, is a complicated process that can be accomplished
through appropriate modeling using deep learning methods. In order to implement
deep learning models, a suitable dataset is required. Since there is a scarce
amount of work done in this field for the Persian language, this paper will
introduce the single speaker dataset: ArmanTTS. We compared the characteristics
of this dataset with those of various prevalent datasets to prove that ArmanTTS
meets the necessary standards for teaching a Persian text-to-speech conversion
model. We also combined the Tacotron 2 and HiFi GAN to design a model that can
receive phonemes as input, with the output being the corresponding speech. 4.0
value of MOS was obtained from real speech, 3.87 value was obtained by the
vocoder prediction and 2.98 value was reached with the synthetic speech
generated by the TTS model. | Mohammd Hasan Shamgholi, Vahid Saeedi, Javad Peymanfard, Leila Alhabib, Hossein Zeinali | 2023-04-07T10:52:55Z | http://arxiv.org/abs/2304.03585v1 | # ArmanTTS single-speaker Persian dataset
###### Abstract
TTS, or text-to-speech, is a complicated process that can be accomplished through appropriate modeling using deep learning methods. In order to implement deep learning models, a suitable dataset is required. Since there is a scarce amount of work done in this field for the Persian language, this paper introduces the single-speaker dataset ArmanTTS. We compared the characteristics of this dataset with those of various prevalent datasets to prove that ArmanTTS meets the necessary standards for training a Persian text-to-speech conversion model. We also combined Tacotron 2 and HiFi-GAN to design a model that receives phonemes as input and outputs the corresponding speech. A MOS value of 4.0 was obtained for real speech, 3.87 for the vocoder prediction, and 2.98 for the synthetic speech generated by the TTS model.
Text-to-speech dataset; Vocoders; Acoustic models
## I Introduction
With the development of deep learning models, a complicated process like converting text to speech can be performed with great accuracy, even generating human-like speech [1, 2, 3, 4]. In order for deep learning models to perform efficiently, they require ample amounts of suitable data covering all aspects of the process. There is an abundance of datasets for English text-to-speech conversion [5, 6, 7]. However, Persian-text-to-speech[8] is the only available dataset for Persian, showing the lack of research and datasets in this field. In order to improve Persian TTS, we introduce ArmanTTS, a Farsi text-to-speech dataset. To evaluate it, we used two models: one for generating spectrograms and the other for converting the spectrograms to audio. Table I compares ArmanTTS's sample rate, total speaking time, and number of speakers to prominent available datasets in English, German, and Farsi.
The characteristics of the ArmanTTS dataset are listed as follows:
* Sample rate of 22.05 kHz
* Single speaker
* Studio recorded speech
* Model input given is a combination of phonemes and output provided as waveforms
* Audio duration of 9 hours, 12 minutes and 14 seconds
* Average signal to noise ratio of 25dB
In order to evaluate this dataset, we designed a TTS model that receives the phonemes of a sentence as input and creates its corresponding speech. To convert phonemes to acoustic features, the Tacotron 2[9] model is used, and to convert acoustic features to waveforms, HiFi-GAN[3] is used. In this paper, we review previous datasets and research in the field of TTS in Related Work. Next, in Dataset Structure, we describe the features and characteristics of the ArmanTTS dataset. Then, in Experiment, we evaluate the models on the ArmanTTS dataset. The final section, Conclusion, summarizes our findings.
## II Related Work

There have been a number of works in the field of TTS, each with their own collection of datasets [10, 11, 12, 13, 14, 15, 2]. In this section, we examine previous research done in the field of TTS. Text-to-speech models are composed of acoustic models and vocoders. Acoustic models convert text to spectrograms and vocoders convert spectrograms to waveforms. We will inspect both techniques in more detail.
### _Acoustic Models_
Acoustic models can be categorized as either SPSS or end-to-end. HMM[13] models and RNN-based BiLSTM[14] are examples of SPSS. These models take linguistic features as input and produce acoustic features as output. End-to-end models, on the other hand, directly take characters or phonemes as input and produce acoustic features as output. Tacotron[1], TransformerTTS[2], and FastSpeech[15] are examples of end-to-end models. The results of our experiments with the Tacotron 2 [9] model are reported in Experiment.
### _Vocoders_
Vocoders are classified into four categories: autoregressive, flow-based, GAN-based and diffusion-based [16]. WaveNet [17] is a vocoder that uses dilated convolutions [18] to generate waveform samples autoregressively. Flow-based models are a type of generative model in which invertible mappings are used for transforming the probability density. Based on their type of transformation, flow-based models are either autoregressive transformers or bipartite transformers. Autoregressive transformations are more accurate while bipartite transformations have a simpler training process [16]. GAN-based models, however, consist of a generator that produces synthetic data and a discriminator that checks the similarity of synthetic data to real data during the training phase [19]. The Mel-GAN[12], HiFi-GAN[3], GAN-TTS[4], and Wave-GAN[12] models are examples of GAN-based vocoders [16]. We experimented with the HiFi-GAN vocoder, the results of which are reported in Experiment. Finally, the diffusion-based model uses Denoising Diffusion Probabilistic Models (DDPM or Diffusion) [20] and converts acoustic features into waveforms [16].
### _Datasets_
TTS datasets require samples that are noise-free and of good quality. This can make the data collection process difficult and costly. Because of this, TTS datasets are relatively scarce and do not exist for most languages. There has, however, been research on reducing the cost of preparing TTS datasets through solutions such as unsupervised learning, cross-lingual transfer, and denoising and enhancing recordings made in noisy environments [16].
Persian-text-to-speech [8] is currently the only available dataset in Persian. Therefore, in order to expand TTS in the Persian language, we have collected a new dataset and presented it in this paper. There are various datasets for TTS in other languages, particularly in English which we will review below;
#### Ii-A1 LibriTTS Dataset
LibriTTS[5] is a multi-speaker English dataset derived from the LibriSpeech corpus [21]. This dataset contains 582 hours of speech and 2456 speakers, with a sample rate of 24kHz. Each sample in the dataset contains one sentence, each including the original text and the normalized text of the samples.
#### Ii-A2 LJSpeech Dataset
The LJSpeech dataset [6] is a single-speaker English dataset derived from the LibriVox books [22]. It contains about 24 hours of speech with a sample rate of 22.05 kHz.
#### Ii-A3 HiFi-TTS Dataset
The HiFi-TTS dataset [7], is a high quality English dataset with 292 hours of speech and 10 speakers. The sample rate seen in this dataset is above 44.1 kHz.
#### Ii-A4 HUI-Audio-Corpus-German Dataset
HUI-Audio-Corpus-German[23] is a high quality German dataset. It contains speech from 122 speakers for a sum of 326 hours. The sample rate of this dataset is 44.1 kHz.
#### Ii-A5 Blizzard-2013 Dataset
The Blizzard-2013 dataset [10] resulted from a challenge presented in 2013 to generate artificial speech. It contains about 319 hours of single-speaker speech in Indian, with a sample rate of 44.1 kHz.
#### Ii-A6 M-AILABS Dataset

M-AILABS[24] is a dataset for converting from speech to text, also involving automatic speech recognition. It contains 75 hours of speech with a sample rate of 16kHz. The samples in the dataset are mono-wave, containing female voice only, male voice only, and a combination of female and male voice. The samples consist of various Latin languages such as English, German, and Russian.
#### Ii-A7 Persian-Text-To-Speech Dataset
The Persian-text-to-speech[8] dataset, is composed of samples collected from audiobook recordings, chosen based on the availability of the corresponding text. The sampling rate of the original sounds is 44.1kHz, which is reduced to 22.05kHz for the dataset. The number of audio channels of the samples was reduced to single channel files. This dataset contains 30 hours of single-speaker speech. Because of the long duration of the original audio file, shorter samples were extracted using a silence detector.
## III Dataset Structure
To produce the ArmanTTS Farsi text-to-speech dataset, we took the following steps:
### _Persian Text_
To obtain the Persian text of the audio files, we used OpenSubtitles[25]. Each sentence was recorded as a separate audio file, resulting in 9 hours, 12 minutes, and 14 seconds of audio. Figure 2 shows the distribution of sentence length throughout the dataset; the number of words in most samples is below 10. Figure 1 shows the duration histogram of the audio files; the duration of most audio files is close to 2.5 seconds.
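Statistics such as those in Figures 1 and 2 can be reproduced from the released files with a short script; the file paths and the `path|text` metadata layout below are assumptions, since they depend on how the corpus is packaged.

```python
import glob
import soundfile as sf
import matplotlib.pyplot as plt

wav_files = glob.glob("ArmanTTS/wavs/*.wav")                 # assumed layout
durations = [sf.info(path).duration for path in wav_files]

with open("ArmanTTS/metadata.csv", encoding="utf-8") as f:   # assumed "path|text" rows
    sentence_lengths = [len(line.split("|")[1].split()) for line in f]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(durations, bins=50)
ax1.set_xlabel("duration (s)")
ax2.hist(sentence_lengths, bins=range(0, 30))
ax2.set_xlabel("words per sentence")
plt.show()
```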
### _Mapping Phonemes and Letters_
In this dataset, the model input consists of phonemes. Table III shows the mappings between different phonemes and certain Persian words. Table III contains 29 phonemes; e, o, i, A, a and u are vowels and the rest are consonants. In the following, we see the mapping of some example sentences to phonemes.
Each example is written in three forms. For each example above, the first line is written in the common form of Persian text. In the second line, the sentence is written so that it can be mapped with phonemes, and in the third line, the sentence is written using phonemes. The third line is the input of the model in the ArmanTTS dataset.
### _Normalizing the text_
In the ArmanTTS dataset, the input to the model is phonemes. Therefore, there is no process to normalize the text. Figure 4 shows the occurrence frequency of each phoneme in the dataset. According to Figure 4, vowel phonemes occurred more than consonant phonemes, which is natural.
### _Audio and Speech_
The speech production of this dataset was done in a studio environment with a sampling rate of 22.05 kHz. The average signal to noise ratio is 25 dB. Also, this dataset is single-speaker. Figure 3 shows the signal-to-noise histogram of the dataset files. It can be seen that the signal-to-noise ratio of the majority of the audio files is above zero, which indicates that the dataset is acceptable in terms of the amount of noise.
Figure 1: The histogram of duration time of the audios in the dataset
Figure 2: histogram of the sentence length of the samples in the data
### _Categorization_
The dataset consists of two separate sections, training and testing. The training set contains 8419 samples and the test set contains 30 samples.
## IV Experiment
To evaluate the ArmanTTS dataset, we used a text-to-speech model consisting of the HiFi-GAN[3] vocoder and the Tacotron 2[9] acoustic model. We used the MOS (mean opinion score) metric for the evaluation. MOS is a metric used for evaluating ground-truth audio, TTS models and vocoders. We evaluated 30 samples; the results can be seen in Table II.
The results show that we can achieve acceptable performance with the ArmanTTS dataset. However, a larger amount of data is needed to cover the problem domain appropriately and reach better performance.
## V Conclusion
Due to the lack of datasets in the field of Persian text-to-speech conversion, in this paper we introduced the ArmanTTS dataset. It includes a single male speaker and contains 8449 Persian audio samples, resulting in 9 hours and 12 minutes of speech. Additionally, we evaluated the dataset with a baseline method, which allowed us to model and test the dataset. The MOS values obtained for the ground-truth voice, the vocoder prediction with ground-truth acoustic features, and the TTS model voice are 4.0, 3.86 and 2.98, respectively. Of course, there is still a lot of potential for expansion in Persian text-to-speech conversion. Measures that can be taken to expand this area include the collection of multilingual datasets, as well as datasets with less noise, higher sampling rates and longer durations for the Persian language.
## Acknowledgement
This dataset was collected with the aid of the Arman Rayan Sharif Company located in Iran. We thank this company for their full support in producing and publishing this dataset.
|
2303.08477 | Classification and calibration of affine models driven by independent
Lévy processes | The paper is devoted to the study of the short rate equation of the form $$
dR(t)=F(R(t))dt+\sum_{i=1}^{d}G_i(R(t-))dZ_i(t), \quad R(0)=x\geq 0, \quad t>0,
$$ with deterministic functions $F,G_1,...,G_d$ and independent L\'evy
processes of infinite variation $Z_1,...,Z_d$ with regularly varying Laplace
exponents. The equation is supposed to have a nonnegative solution which
generates an affine term structure model. A precise form of the generator of
$R$ is characterized and a related classification of equations which generate
affine models introduced in the spirit of Dai and Singleton
\cite{DaiSingleton}. Each class is shown to have its own canonical
representation which is an equation with the same drift and the jump diffusion
part based on a L\'evy process taking values in $\mathbb{R}^{g}, 1\leq g\leq
d$, with independent coordinates being stable processes with stability indices
in the range $(1,2]$. Numerical calibration results of canonical
representations to the market term structure of interest rates are presented
and compared with the classical CIR model. The paper generalizes the classical
results on the CIR model from \cite{CIR}, as well as on its extended version
from \cite{BarskiZabczykCIR} and \cite{BarskiZabczyk} where $Z$ was a
one-dimensional L\'evy process. | Michał Barski, Rafał Łochowski | 2023-03-15T09:32:59Z | http://arxiv.org/abs/2303.08477v1 | # Classification and calibration of affine models driven by independent Levy processes
###### Abstract
The paper is devoted to the study of the short rate equation of the form
\[\mathrm{d}R(t)=F(R(t))\mathrm{d}t+\sum_{i=1}^{d}G_{i}(R(t-))\mathrm{d}Z_{i}(t ),\quad R(0)=x\geq 0,\quad t>0, \tag{1}\]
with deterministic functions \(F,G_{1},...,G_{d}\) and independent Levy processes of infinite variation \(Z_{1},...,Z_{d}\) with regularly varying Laplace exponents. The equation is supposed to have a nonnegative solution which generates an affine term structure model. A precise form of the generator of \(R\) is characterized and a related classification of equations which generate affine models introduced in the spirit of Dai and Singleton [9]. Each class is shown to have its own canonical representation which is an equation with the same drift and the jump diffusion part based on a Levy process taking values in \(\mathbb{R}^{g},1\leq g\leq d\), with independent coordinates being stable processes with stability indices in the range \((1,2]\). Numerical calibration results of canonical representations to the market term structure of interest rates are presented and compared with the classical CIR model. The paper generalizes the classical results on the CIR model from [12], as well as on its extended version from [3] and [4] where \(Z\) was a one-dimensional Levy process.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 2.1 Laplace exponents of Levy processes
* 2.1.1 Projections of the noise
* 2.2 Preliminary characterization of generating equations
* 2.2.1 Problem formulation
* 2.2.2 One-dimensional generating equations
* 2.2.3 Non-uniqueness in the multidimensional case
* 3 Classification of generating equations
* 3.1 Main results
* 3.1.1 Proofs
* 3.2 Characterization of regularly varying Laplace exponents
* 3.3 Generating equations on a plane
* 3.4 An example in 3D
* 4 Applications
* 4.1 Calibration of canonical models to market data
* 4.1.1 Remarks on computational methodology
* 5 Appendix
## 1 Introduction
The study of continuous state branching processes with immigration (CBI) by Kawazu and Watanabe [16] revealed attractive analytical properties of affine processes, which motivated Filipovic to bring them, in the pioneering paper [14], into the field of finance. Affine processes are widely used in various areas of mathematical finance. They appear in term structure models and in credit risk modelling, and are applied within the stochastic volatility framework. Solid fundamentals of affine processes in finance were laid down by Filipovic [14] and by Duffie, Filipovic and Schachermeyer [10]. The results obtained in these papers set a reference point for further research and proved the usefulness and strength of the Markovian approach. Missing questions on regularity and existence of cadlag versions were answered by Cuchiero, Filipovic and Teichmann [7] and Cuchiero and Teichmann [8].
The systematic study of affine processes in finance was motivated by classical stochastic short rate models, like the CIR (Cox, Ingersoll, Ross) model [12], the Vasicek model [19] and the model with diffusion factors of Dai and Singleton [9], and resulted in discovering new stochastic equations, also with jumps; see, among others, [14], Duffie and Garlean [11], Barndorff-Nielsen and Shephard [2], Jiao, Ma and Scotti [15]. Nevertheless, the full description of affine processes representable in terms of stochastic equations is far from being clear. This is because the Markovian description of affine processes based on generators does not, in general, allow encoding the form of a possible underlying stochastic equation. The framework based on stochastic dynamics offers, however, unquestionable advantages like discretization schemes enabling Monte Carlo simulations, which are essential, for example, for pricing exotic, i.e. path-dependent, derivatives. A comprehensive treatment of simulation schemes for affine processes and pricing methods can be found in [1]. Stochastic equations also allow identifying the number of random sources in the model, which is of use for calibration and hedging. In this paper we focus on recovering from the Markovian setting those affine processes which are given by stochastic equations driven by a multidimensional Levy process with independent coordinates. Specifically, we focus on the equation
\[\mathrm{d}R(t)=F(R(t))\mathrm{d}t+\sum_{i=1}^{d}G_{i}(R(t-)) \mathrm{d}Z_{i}(t),\quad R(0)=x,\quad t>0, \tag{1.1}\]
where \(x\) is a nonnegative constant, \(F\), \(\{G_{i}\}_{i=1,2,\ldots,d}\) are deterministic functions and \(\{Z_{i}\}_{i=1,2,\ldots,d}\) are independent Levy processes and martingales. A solution \(R(t),t\geq 0\), if nonnegative, will be identified here with the short rate process which defines the bank account process by
\[B(t):=e^{\int_{0}^{t}R(s)ds},\quad t\geq 0.\]
Related to the savings account are zero coupon bonds. Their prices form a family of stochastic processes \(P(t,T),t\in[0,T]\), parametrized by their maturity times \(T\geq 0\). The price of a bond with maturity \(T\) at time \(T\) is equal to its nominal value, typically assumed, also here, to be \(1\), that is \(P(T,T)=1\). The family of bond prices is supposed to have the _affine structure_, which means that
\[P(t,T)=e^{-A(T-t)-B(T-t)R(t)},\quad 0\leq t\leq T, \tag{1.2}\]
for some smooth deterministic functions \(A\), \(B:[0,+\infty)\to\mathbb{R}\). Hence, the only source of randomness in the affine model (1.2) is the short rate process \(R\) given by (1.1). As the resulting market constituted by \((B(t),\{P(t,T)\}_{T\geq 0})\) should exclude arbitrage, the discounted bond prices
\[\hat{P}(t,T):=B^{-1}(t)P(t,T)=e^{-\int_{0}^{t}R(s)ds-A(T-t)-B(T-t)R(t)},\quad 0 \leq t\leq T,\]
are supposed to be local martingales for each \(T\geq 0\). This requirement affects in fact our starting equation (1.1). Thus the functions \(F\), \(\{G_{i}\}_{i=1,...,d}\) and the noise \(Z=(Z_{1},...,Z_{d})\) should be chosen such that (1.1) has a nonnegative solution for any \(x\geq 0\) and such that, for some functions \(A\), \(B:[0,+\infty)\to\mathbb{R}\) and each \(T\geq 0,\ \hat{P}(t,T)\) is a local martingale on \([0,T]\). If this is the case, (1.1) will be called to _generate an affine model_ or to be a _generating equation_, for short.
The description of all generating equations with one-dimensional noise is well known, see Section 2.2.2 for a brief summary. This paper deals with (1.1) in the case \(d>1\). The multidimensional setting makes the description of generating equations more involved due to the fact that two apparently different generating equations may have solutions which are Markov processes with identical generators. For brevity, we will call such solutions 'identical' or 'the same solutions'. The resulting bond markets are then the same, so such equations can be viewed as equivalent. This phenomenon does not appear in the one-dimensional case, but was a central point in the study of a multi-factor affine models by Dai and Singleton [9]. Recall, in the class of affine models considered in [9] the short rate is an affine function of \(N\) factors \((Y_{1},...,Y_{N}):=Y\), which are given by a diffusion equation of the form
\[\mathrm{d}Y(t)=H(Y(t))\mathrm{d}t+\Sigma\sqrt{\mathrm{diag}(A+BY(t))}\mathrm{ d}W(t), \tag{1.3}\]
where \(H\) is a specific affine function, \(\Sigma,B\) are \(N\times N\) matrices, \(A\) is a vector in \(\mathbb{R}^{N}\) and the value of \(\mathrm{diag}(v)\) is the diagonal \(N\times N\) matrix with the coordinates of \(v\in\mathbb{R}^{N}\) on the diagonal. Above \(W\) stands for the Wiener process in \(\mathbb{R}^{N}\). By particular choices of parameters, one may recognize in (1.3) many specific models used in practice, for details see [9]. The question of characterization of equations (1.3) which generate affine models was handled in [9], see also [6], by classifying the structure of factors. The classification is based on the parameter \(m:=\mathrm{rank}(B)\) interpreted as a degree of dependence of the conditional variances on the number of factors. Each equation (1.3) which generates an affine model is classified as a member of one of \(N+1\) disjoint subfamilies
\[\mathbb{A}_{m}(N),\quad m=0,1,...,N,\]
of equations. All equations within a chosen subfamily provide the same short rate and the short rates differ across subfamilies. Moreover, each subfamily is shown to have its own _canonical representation_ for which (1.3) simplifies, i.e. the diffusion matrix in (1.3) is diagonal. Although our setting based on equation (1.1) differs, our approach of characterizing generating equations has much in common with that of Dai and Singleton. The main results of the paper, i.e. Theorem 3.1, Corollary 3.2 and Proposition 3.3 imply that under mild assumptions any generating
equation (1.1) has the same solution as that of the following equation
\[\mathrm{d}R(t)=(aR(t)+b)\mathrm{d}t+\sum_{k=1}^{g}d_{k}^{1/\alpha_{k}}R(t-)^{1/ \alpha_{k}}\mathrm{d}Z_{k}^{\alpha_{k}}(t), \tag{1.4}\]
with some \(1\leq g\leq d\) and parameters \(a\in\mathbb{R}\), \(b\geq 0\), \(d_{k}>0\), \(k=1,2,...,g\), driven by independent stable processes \(\{Z_{k}^{\alpha_{k}}\}\) with indices \(\{\alpha_{k}\}\) such that \(2\geq\alpha_{1}>\alpha_{2}>...>\alpha_{g}>1\). All generating equations having the same solutions as (1.4) form a set which we denote by
\[\mathbb{A}_{g}(a,b;\alpha_{1},\alpha_{2},...,\alpha_{g};\eta_{1},...,\eta_{g}), \tag{1.5}\]
where \(\eta_{i}:=\frac{\Gamma(2-\alpha_{i})}{\alpha_{i}(\alpha_{i}-1)}d_{i},i=1,...,g\), \(\Gamma(\cdot)\) is the Gamma function. We call (1.4) a _canonical representation_ of (1.5). By changing values of the parameters in (1.5) one can thus split all generating equations into disjoint subfamilies with a tractable canonical representation for each of them.
The number and structure of generating equations which form (1.5) depend on the noise dimension in (1.1). As one may expect, the set (1.5) is getting larger as \(d\) increases. In Section 3.3 we determine all generating equations on a plane by formulating concrete conditions for \(F,G\) and \(Z_{1},Z_{2}\) in (1.1). For \(d=2\) the class \(\mathbb{A}_{1}(a,b;\alpha_{1};\eta_{1})\) consists of a wide variety of generating equations while \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\) turns out to be a singleton. The passage to the case \(d=3\) makes, however, \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\) a non-singleton. This phenomenon is discussed in Section 3.4.
A tractable form of the canonical representations is expected to be an advantage for applications. One finds in (1.4) with \(g=1,\alpha_{1}=2\) the classical CIR equation and may expect that additional stable noise components improve the bond market model. For \(g=2,\alpha_{1}=2\) and \(1<\alpha_{2}<2\) equation (1.4) becomes the alpha-CIR equation studied in [15]. It was shown in [15] that the empirical behaviour of the European sovereign bond market is closer to that implied by the alpha-CIR equation than by the CIR equation, due to the permanent overestimation of the short rates by the latter. The alpha-CIR equation also allows reconciling low interest rates with large fluctuations related to the presence of the jump part, whose tail fatness is controlled by the parameter \(\alpha_{2}\). In the last part of the paper we focus on the calibration of canonical representations to market data. We take into account the spot rates of the European Central Bank implied by the \(AAA\)-rated bonds, Libor rates and six-month swap rates. We compute numerically, in the Python programming language, the fitting error for (1.1) with \(g\) ranging from \(1\) to \(5\). This illustrates, in particular, the influence of \(g\) on the reduction of the fitting error, which is always smaller than in the CIR model. The freedom in the choice of stability indices makes the canonical model curves more flexible, hence better adjusted in shape to the market curves. The effect is especially visible for market data after March 2022, when the curves started to change their shapes.
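To make the link with Monte Carlo simulation concrete, a rough Euler-type discretization of the canonical representation (1.4) can be sketched as follows. The sketch is only illustrative and is not the calibration procedure discussed later: the time step, the use of `scipy.stats.levy_stable` for the totally skewed stable increments, the unit scale of these increments (a careful implementation would match it to the normalization (2.11)), and the truncation at zero keeping the discretized path nonnegative are all simplifying assumptions made here.

```python
# Illustrative Euler-type scheme (a sketch, not the paper's calibration code) for
#   dR = (a R + b) dt + sum_k d_k^{1/alpha_k} R(t-)^{1/alpha_k} dZ_k^{alpha_k},  alpha_k in (1,2).
# Assumptions: totally skewed (beta=1) stable increments from scipy with unit scale
# (the constant matching (2.11) is omitted) and a crude truncation at 0 for positivity.
import numpy as np
from scipy.stats import levy_stable

def simulate_canonical(a, b, alphas, ds, r0, T=1.0, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for n in range(n_steps):
        incr = (a * r[n] + b) * dt
        for alpha, d in zip(alphas, ds):
            # increment over dt of a zero-mean, spectrally positive alpha-stable process
            dz = levy_stable.rvs(alpha, 1.0, loc=0.0,
                                 scale=dt ** (1.0 / alpha), random_state=rng)
            incr += d ** (1.0 / alpha) * r[n] ** (1.0 / alpha) * dz
        r[n + 1] = max(r[n] + incr, 0.0)  # keep the discretized path nonnegative
    return r

# one path of a two-factor example with stability indices 1.8 and 1.3 (arbitrary values)
path = simulate_canonical(a=-0.5, b=0.04, alphas=(1.8, 1.3), ds=(0.02, 0.01), r0=0.03)
print(path[:5])
```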
The structure of the paper is as follows. In Section 2 we discuss the Laplace exponents of Levy processes, in particular, the Laplace exponents of the projections of \(Z\) along \(G\), defined as the processes
\[Z^{G(x)}(t):=\sum_{i=1}^{d}G_{i}(x)Z_{i}(t),\quad t\geq 0,\quad x\geq 0, \tag{1.6}\]
which play a central role in the sequel. The second part of Section 2 is based on the preliminary characterization of generating equations, i.e. Proposition 2.2, which is a version of the result from [14] characterizing the generator of a Markovian short rate. This leads to a precise formulation of
the problem studied in the paper. Further we describe one dimensional generating equations and discuss the non-uniqueness of generating equations in the multidimensional case. In Example 2.4 we show two different equations with the same solutions. Section 3 is concerned with the classification of generating equations. Section 3.1 contains the main results of the paper which provide a precise description of the generator of (1.1). This makes more specific the, rather abstract, result from [14] and motivates introducing the classification of generating equations. The required assumption on the Laplace exponent of the noise to vary regularly at zero is reformulated in terms of Levy measure in Section 3.2. Section 3.3 and Section 3.4 are devoted to generating equations on a plane and an example in the three-dimensional case, respectively. In Section 4 we discuss the calibration of canonical representations.
## 2 Preliminaries
In this section we recall some facts on Levy processes needed in the sequel and present a version of the result on generators of Markovian affine processes [14], see Proposition 2.2, which is used for a precise formulation of the problem considered in the paper. We explain the meaning of the projections of the noise (1.6) and show in Example 2.4 two different generating equations having the same projections, hence identical solutions. For illustrative purposes we keep referring to the one-dimensional case where the forms of generating equations are well known, see Section 2.2.2 below. For the sake of notational convenience we often use a scalar product notation \(\langle\cdot,\cdot\rangle\) in \(\mathbb{R}^{d}\) and write (1.1) in the form
\[\mathrm{d}R(t)=F(R(t))\mathrm{d}t+\langle G(R(t-)),\mathrm{d}Z(t)\rangle, \quad R(0)=x\geq 0,\qquad t>0, \tag{2.1}\]
where \(G:=(G_{1},G_{2},...,G_{d}):[0,+\infty)\longrightarrow\mathbb{R}^{d}\) and \(Z:=(Z_{1},Z_{2},...,Z_{d})\) is a Levy process in \(\mathbb{R}^{d}\).
### Laplace exponents of Levy processes
Let \(Z\) be an \(\mathbb{R}^{d}\)-valued Levy process with characteristic triplet \((a,Q,\nu(\mathrm{d}y))\). Recall, \(a\in\mathbb{R}^{d}\) describes the drift part of \(Z\), \(Q\) is a non-negative, symmetric, \(d\times d\) covariance matrix, characterizing the coordinates' covariance of the Wiener part \(W\) of \(Z\), and \(\nu(\mathrm{d}y)\) is a measure on \(\mathbb{R}^{d}\setminus\{0\}\) describing the jumps of \(Z\). It is called the Levy measure of \(Z\) and satisfies the condition
\[\int_{\mathbb{R}^{d}}(\mid y\mid^{2}\wedge\,1)\ \nu(\mathrm{d}y)<+\infty. \tag{2.2}\]
Recall, \(Z\) admits a representation as a sum of four independent processes of the form
\[Z(t)=at+W(t)+\int_{0}^{t}\int_{\{\mid y\mid\leq 1\}}y\tilde{\pi}(\mathrm{d}s, \mathrm{d}y)+\int_{0}^{t}\int_{\{\mid y\mid>1\}}y\pi(\mathrm{d}s,\mathrm{d}y), \tag{2.3}\]
called the Levy-Ito decomposition of \(Z\). Above \(\pi(\mathrm{d}s,\mathrm{d}y)\) and \(\tilde{\pi}(\mathrm{d}s,\mathrm{d}y):=\pi(\mathrm{d}s,\mathrm{d}y)-\mathrm{d}s \nu(\mathrm{d}y)\) stand for the jump measure and the compensated jump measure of \(Z\), respectively. If
\[\int_{\{\mid y\mid<1\}}\mid y\mid\nu(\mathrm{d}y)=+\infty, \tag{2.4}\]
then \(Z\) is of infinite variation. If (2.4) does not hold and \(Z\) has no Wiener part, the variation of \(Z\) is finite. The coordinates of \(Z\) are independent if and only if \(Q\) is diagonal and \(\nu(\mathrm{d}y)\) is concentrated on axes.
We consider the case when \(Z\) is a martingale and call it a Levy martingale for short. Its drift and the Levy measure are such that
\[\int_{\{|y|>1\}}\mid y\mid\ \nu(\mathrm{d}y)<+\infty,\quad a+\int_{\{|y|>1\}}y\ \nu(\mathrm{d}y)=0. \tag{2.5}\]
Consequently, the characteristic triplet of \(Z\) is
\[\left(-\int_{\{|y|>1\}}y\ \nu(\mathrm{d}y),\ Q,\ \nu(\mathrm{d}y)\right), \tag{2.6}\]
and (2.3) takes the form
\[Z(t)=W(t)+X(t),\qquad X(t):=\int_{0}^{t}\int_{\mathbb{R}^{d}}y\ \tilde{\pi}( \mathrm{d}s,\mathrm{d}y),\quad t\geq 0,\]
where \(W\) and \(X\) are independent. The martingale \(X\) will be called the jump part of \(Z\). Its Laplace exponent \(J_{X}\), defined by the equality
\[\mathbb{E}\left[e^{-\langle\lambda,X(t)\rangle}\right]=e^{tJ_{X}(\lambda)}, \tag{2.7}\]
has the following representation
\[J_{X}(\lambda)=\int_{\mathbb{R}^{d}}(e^{-\langle\lambda,y\rangle}-1+\langle \lambda,y\rangle)\nu(\mathrm{d}y), \tag{2.8}\]
and is finite for \(\lambda\in\mathbb{R}^{d}\) satisfying
\[\int_{|y|>1}e^{-\langle\lambda,y\rangle}\nu(\mathrm{d}y)<+\infty.\]
By the independence of \(X\) and \(W\) we see that
\[\mathbb{E}\left[e^{-\langle\lambda,Z(t)\rangle}\right]=\mathbb{E}\left[e^{- \langle\lambda,W(t)\rangle}\right]\cdot\mathbb{E}\left[e^{-\langle\lambda,X(t )\rangle}\right],\]
so the Laplace exponent \(J_{Z}\) of \(Z\) equals
\[J_{Z}(\lambda)=\frac{1}{2}\langle Q\lambda,\lambda\rangle+J_{X}(\lambda). \tag{2.9}\]
**Example 2.1** (\(\alpha\)-stable martingales with \(\mathbf{1}<\alpha<\mathbf{2}\)): _A real valued stable martingale \(Z_{t}^{\alpha},t\geq 0\) with index \(\alpha\in(1,2)\) and positive jumps only is a Levy process without Wiener part with Levy measure of the form_
\[\nu(\mathrm{d}v):=\frac{1}{v^{\alpha+1}}\mathbf{1}_{\{v>0\}}\mathrm{d}v.\]
_Its Laplace exponent is given by_
\[J_{Z^{\alpha}}(\lambda) =\int_{0}^{+\infty}\left(e^{-\lambda v}-1+\lambda v\right)\frac{ 1}{v^{\alpha+1}}\mathrm{d}v\] \[=c_{\alpha}\lambda^{\alpha},\quad\lambda\geq 0, \tag{2.10}\]
_with_
\[c_{\alpha}:=\frac{\Gamma(2-\alpha)}{\alpha(\alpha-1)}, \tag{2.11}\]
_where \(\Gamma\) stands for the Gamma function. Analogously one defines an \(\alpha\)-stable process with negative jumps only._
Note that the case of Levy martingale with the stability index \(\alpha=2\) corresponds to the case when \(Z^{\alpha}\) is a Wiener process without drift and with vanishing Levy measure.
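The closed form (2.10)-(2.11) is easy to verify numerically. The short snippet below (an illustration only; the values of \(\alpha\) and \(\lambda\) are arbitrary) compares the defining integral with \(c_{\alpha}\lambda^{\alpha}\) by quadrature.

```python
# Numerical check of (2.10)-(2.11): the Laplace exponent of a spectrally positive
# alpha-stable martingale equals Gamma(2-alpha)/(alpha(alpha-1)) * lambda^alpha.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def laplace_exponent_numeric(lam, alpha):
    integrand = lambda v: (np.exp(-lam * v) - 1.0 + lam * v) * v ** (-alpha - 1.0)
    head, _ = quad(integrand, 0.0, 1.0)   # integrable near 0 since the bracket is O(v^2)
    tail, _ = quad(integrand, 1.0, np.inf)
    return head + tail

def laplace_exponent_closed(lam, alpha):
    return gamma(2.0 - alpha) / (alpha * (alpha - 1.0)) * lam ** alpha

for alpha in (1.2, 1.5, 1.9):
    for lam in (0.5, 1.0, 3.0):
        print(alpha, lam,
              laplace_exponent_numeric(lam, alpha),
              laplace_exponent_closed(lam, alpha))
```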
#### 2.1.1 Projections of the noise
For equation (2.1) we consider the _projections_ of \(Z\) along \(G\) given by
\[Z^{G(x)}(t):=\langle G(x),Z(t)\rangle,\qquad x,t\geq 0. \tag{2.12}\]
As linear transformations of \(Z\), the projections form a family of Levy processes parametrized by \(x\geq 0\). If \(Z\) is a martingale, then \(Z^{G(x)}\) is a real-valued Levy martingale for any \(x\geq 0\). It follows from the identity
\[\mathbb{E}\left[e^{-\gamma\cdot Z^{G(x)}(t)}\right]=\mathbb{E}\left[e^{- \langle\gamma G(x),Z(t)\rangle}\right],\quad\gamma\in\mathbb{R},\]
and (2.9) that the Laplace exponent of \(Z^{G(x)}\) equals
\[J_{Z^{G(x)}}(\gamma)=J_{Z}(\gamma G(x))=\frac{1}{2}\gamma^{2} \langle QG(x),G(x)\rangle+\int_{|y|>0}\left(e^{-\gamma\langle G(x),y\rangle}-1 +\gamma\langle G(x),y\rangle\right)\nu(\mathrm{d}y). \tag{2.13}\]
Formula (2.13) can be written in a simpler form by using the Levy measure \(\nu_{G(x)}(\mathrm{d}v)\) of \(Z^{G(x)}\), which is the _image_ of the Levy measure \(\nu(dy)\) under the linear transformation \(y\mapsto\langle G(x),y\rangle\). This measure is given by
\[\nu_{G(x)}(A):=\nu\{y\in\mathbb{R}^{d}:\langle G(x),y\rangle\in A \},\quad A\in\mathcal{B}(\mathbb{R}). \tag{2.14}\]
From (2.13) we obtain that
\[J_{Z^{G(x)}}(\gamma)=\frac{1}{2}\gamma^{2}\langle QG(x),G(x) \rangle+\int_{|v|>0}\left(e^{-\gamma v}-1+\gamma v\right)\nu_{G(x)}(\mathrm{d} v). \tag{2.15}\]
Thus the characteristic triplet of the projection \(Z^{G(x)}\) has the form
\[\left(-\int_{|v|>1}v\ \nu_{G(x)}(\mathrm{d}v),\ \langle QG(x),G(x)\rangle,\ \nu_{G(x)}( \mathrm{d}v)\mid_{v\neq 0}\right). \tag{2.16}\]
Above we used the restriction \(\nu_{G(x)}(\mathrm{d}v)\mid_{v\neq 0}\) by cutting off zero which may be an atom of \(\nu_{G(x)}(\mathrm{d}v)\).
### Preliminary characterization of generating equations
In Proposition 2.2 below we provide a preliminary characterization for (2.1) to be a generating equation. Note that the independence of coordinates of \(Z\) is not assumed here. The central role here is played by the noise projections (2.12). The result is deduced from Theorem 5.3 in [14], where the generator of a general non-negative Markovian short rate process for affine models was characterized.
**Proposition 2.2**: _Let \(Z\) be a Levy martingale with characteristic triplet (2.6) and \(Z^{G(x)}\) be its projection (2.12) with the Levy measure \(\nu_{G(x)}(\mathrm{d}v)\) given by (2.14)._
1. _Equation (_1.1_) generates an affine model if and only if the following conditions are satisfied:_ 1. _For each_ \(x\geq 0\) _the support of_ \(\nu_{G(x)}\) _is contained in_ \([0,+\infty)\) _which means that_ \(Z^{G(x)}\) _has positive jumps only, i.e. for each_ \(t\geq 0\)_, with probability one,_ \[\triangle Z^{G(x)}(t):=Z^{G(x)}(t)-Z^{G(x)}(t-)=\langle G(x),\triangle Z(t) \rangle\geq 0.\] (2.17)
2. _The jump part of_ \(Z^{G(0)}\) _has finite variation, i.e._ \[\int_{(0,+\infty)}v\ \nu_{G(0)}({\rm d}v)<+\infty.\] (2.18)
3. _The characteristic triplet (_2.16_) of_ \(Z^{G(x)}\) _is linear in_ \(x\)_, i.e._ \[\frac{1}{2}\langle QG(x),G(x)\rangle =cx,\quad x\geq 0,\] (2.19) \[\nu_{G(x)}({\rm d}v)\mid_{(0,+\infty)} =\nu_{G(0)}({\rm d}v)\mid_{(0,+\infty)}+x\mu({\rm d}v),\quad x\geq 0,\] (2.20) _for some_ \(c\geq 0\) _and a measure_ \(\mu({\rm d}v)\) _on_ \((0,+\infty)\) _satisfying_ \[\int_{(0,+\infty)}(v\wedge v^{2})\mu({\rm d}v)<+\infty.\] (2.21)
4. _The function_ \(F\) _is affine, i.e._ \[F(x)=ax+b,\ \mbox{where}\ a\in\mathbb{R},\ b\geq\int_{(1,+\infty)}(v-1)\nu_{G(0)}({ \rm d}v).\] (2.22)
5. _Equation (_1.1_) generates an affine model if and only if the generator of_ \(R\) _is given by_ \[\mathcal{A}f(x)=cxf^{\prime\prime}(x)+\Big{[}ax+b+\int_{(1,+\infty)}(1-v)\{ \nu_{G(0)}({\rm d}v)+x\mu({\rm d}v)\}\Big{]}f^{\prime}(x)\] \[\qquad\qquad\qquad\qquad+\int_{(0,+\infty)}[f(x+v)-f(x)-f^{\prime} (x)(1\wedge v)]\{\nu_{G(0)}({\rm d}v)+x\mu({\rm d}v)\}.\] (2.23) _for_ \(f\in\mathcal{L}(\Lambda)\cup C_{c}^{2}(\mathbb{R}_{+})\)_, where_ \(\mathcal{L}(\Lambda)\) _is the linear hull of_ \(\Lambda:=\{f_{\lambda}:=e^{-\lambda x},\lambda\in(0,+\infty)\}\) _and_ \(C_{c}^{2}(\mathbb{R}_{+})\) _stands for the set of twice continuously differentiable functions with compact support in_ \([0,+\infty)\)_. The constants_ \(a,b,c\) _and the measures_ \(\nu_{G(0)}({\rm d}v),\mu({\rm d}v)\) _are those from part (A)._
The proof of Proposition 2.2 is postponed to the Appendix.
Note that conditions (2.19)-(2.20) describe the distributions of the noise projections. In the sequel we use an equivalent formulation of (2.19)-(2.20) involving the Laplace exponents of (2.12). Taking into account (2.15) we obtain the following.
**Remark 2.3**: _The conditions (2.19) and (2.20) are equivalent to the following decomposition of the Laplace exponent of \(Z^{G}\):_
\[J_{Z^{G(x)}}(b)=cb^{2}x+J_{\nu_{G(0)}}(b)+xJ_{\mu}(b),\quad b,x\geq 0, \tag{2.24}\]
_where_
\[J_{\mu}(b):=\int_{0}^{+\infty}(e^{-bv}-1+bv)\mu({\rm d}v),\quad J_{\nu_{G(0)} }(b):=\int_{0}^{+\infty}(e^{-bv}-1+bv)\nu_{G(0)}({\rm d}v). \tag{2.25}\]
#### 2.2.1 Problem formulation
By virtue of part \((A)\) of Proposition 2.2 we see that the drift \(F\) of a generating equation is an affine function, while the function \(G\) and the noise \(Z\) must provide projections \(Z^{G(x)},x\geq 0\), with particular distributions. Their characteristic triplets are characterized by a constant \(c\geq 0\) carrying information on the variance of the Wiener part and two measures \(\nu_{G(0)}(\mathrm{d}v)\), \(\mu(\mathrm{d}v)\) describing jumps. A pair \((G,Z)\) for which the projections \(Z^{G(x)}\) satisfy (2.18)-(2.21) will be called _a generating pair_. Note that the concrete forms of the measures \(\nu_{G(0)}(\mathrm{d}v)\), \(\mu(\mathrm{d}v)\) are, however, not specified. Since for \(Z\) with independent coordinates of infinite variation necessarily \(G(0)=0\) (see Proposition 3.5) and, consequently, \(\nu_{G(0)}(\mathrm{d}v)\) vanishes, our goal is to determine the measure \(\mu(\mathrm{d}v)\) in this case.
Having the required form of \(\mu(\mathrm{d}v)\) at hand one knows the distributions of the noise projections \(Z^{G(x)}\) and, by part \((B)\) of Proposition 2.2, also the generator of the solution of (2.1). The generating pairs \((G,Z)\) cannot, however, be uniquely determined, except in the one-dimensional case. This issue is discussed in Section 2.2.2 and Section 2.2.3 below. For this reason we construct canonical representations - generating equations with noise projections corresponding to a given form of the measure \(\mu(\mathrm{d}v)\).
#### 2.2.2 One-dimensional generating equations
Let us summarize known facts on generating equations in the case \(d=1\). If \(Z=W\) is a Wiener process, the only generating equation is the classical CIR equation
\[\mathrm{d}R(t)=(aR(t)+b)\mathrm{d}t+C\sqrt{R(t)}\mathrm{d}W(t), \tag{2.26}\]
with \(a\in\mathbb{R}\), \(b,C\geq 0\), see [12]. The case with a general one-dimensional Levy process \(Z\) was studied in [3], [4] and [5] with the following conclusion. If the variation of \(Z\) is infinite and \(G\not\equiv 0\), then \(Z\) must be an \(\alpha\)-stable process with index \(\alpha\in(1,2]\), with either positive or negative jumps only, and (1.1) has the form
\[\mathrm{d}R(t)=(aR(t)+b)\mathrm{d}t+C\cdot R(t-)^{1/\alpha}\mathrm{d}Z^{ \alpha}(t), \tag{2.27}\]
with \(a\in\mathbb{R},b\geq 0\) and \(C\) such that it has the same sign as the jumps of \(Z^{\alpha}\). Clearly, for \(\alpha=2\) equation (2.27) becomes (2.26). If \(Z\) is of finite variation then the noise enters (1.1) in the additive way, that is
\[\mathrm{d}R(t)=(aR(t)+b)\mathrm{d}t+C\ \mathrm{d}Z(t). \tag{2.28}\]
Here \(Z\) can be chosen as an arbitrary process with positive jumps, \(a\in\mathbb{R},C\geq 0\) and
\[b\geq C\int_{0}^{+\infty}y\ \nu(\mathrm{d}y),\]
where \(\nu(\mathrm{d}y)\) stands for the Levy measure of \(Z\). The variation of \(Z\) is finite, hence so is the right side above. Recall, (2.28) with \(Z\) being a Wiener process is the well-known Vasicek equation, see [19]. Then the short rate is a Gaussian process, hence it takes negative values with positive probability. This drawback is eliminated by the jump version of the Vasicek equation (2.28), where the solution never falls below zero.
It follows that the triplet \((c,\nu_{G(0)}(\mathrm{d}v),\mu(\mathrm{d}v))\) from Proposition 2.2 takes for the equations above the following forms
* (a) \(c\geq 0,\ \nu_{G(0)}(\mathrm{d}v)\equiv 0,\ \mu(\mathrm{d}v)\equiv 0\);
This case corresponds to the classical CIR equation (2.26) where \(c=\frac{1}{2}C^{2}\).
* (b) \(c=0,\ \nu_{G(0)}(\mathrm{d}v)\equiv 0,\ \mu(\mathrm{d}v)\) is \(\alpha\)-stable, \(\alpha\in(1,2)\);
In this case (2.1) becomes the generalized CIR equation with \(\alpha\)-stable noise (2.27).
* (c) \(c=0,\ \nu_{G(0)}(\mathrm{d}v)\) is any measure on \((0,+\infty)\) satisfying (2.18), \(\mu(\mathrm{d}v)\equiv 0\);
Here (2.1) becomes the generalized Vasicek equation (2.28).
Note the one-to-one correspondence between the triplets \((c,\nu_{G(0)}(\mathrm{d}v),\mu(\mathrm{d}v))\) and the generating pairs \((G,Z)\), which holds up to multiplicative constants.
#### 2.2.3 Non-uniqueness in the multidimensional case
In the case \(d>1\) one should not expect a one-to-one correspondence between the triplets \((c,\nu_{G(0)}(\mathrm{d}v),\mu(\mathrm{d}v))\) and the generating equations (2.1). The reason is that the distribution of the noise projections \(Z^{G(x)}\) does not determine the pair \((G,Z)\) in a unique way. Our illustrative example below shows two different equations driven by Levy processes with independent coordinates which provide the same short rate \(R\).
**Example 2.4**: _Let us consider the following two equations_
\[{\rm d}R(t)=\langle G(R(t-)),{\rm d}Z(t)\rangle,\quad R(0)=R_{0}, \quad t\geq 0, \tag{2.29}\] \[{\rm d}\bar{R}(t)=\langle\bar{G}(\bar{R}(t-)),{\rm d}\bar{Z}(t)\rangle,\quad\bar{R}(0)=R_{0},\quad t\geq 0, \tag{2.30}\]
_where_
\[G(x):=2^{-1/\alpha}\cdot(x^{1/\alpha},x^{1/\alpha}),\quad Z:=(Z_{1}^{\alpha},Z _{2}^{\alpha}),\]
_and_
\[\bar{G}(x):=(x^{1/\alpha},x^{1/\alpha}),\quad\bar{Z}:=(\bar{Z}_{1},\bar{Z}_{2 }),\]
_with a fixed index \(\alpha\in(1,2)\). We assume that the coordinates of \(Z\) and \(\bar{Z}\) are independent. Above \(Z_{1}^{\alpha},Z_{2}^{\alpha}\) stand for \(\alpha\)-stable martingales like in Example 2.1 and \(\bar{Z}_{1},\bar{Z}_{2}\) are martingales with Levy measures_
\[\nu_{1}({\rm d}v)=\frac{{\rm d}v}{v^{\alpha+1}}{\bf 1}_{E}(v),\quad\nu_{2}({ \rm d}v)=\frac{{\rm d}v}{v^{\alpha+1}}{\bf 1}_{[0,+\infty)\setminus E}(v),\]
_respectively, where \(E\) is a Borel subset of \([0,+\infty)\) such that_
\[|E|=\int_{E}{\rm d}v>0,\quad\mbox{and}\quad|[0,+\infty)\setminus E|=\int_{[0, +\infty)\setminus E}{\rm d}v>0.\]
_The projections related to (2.29) and (2.30) take the forms_
\[Z^{G(x)}(t)=\langle G(x),Z(t)\rangle=x^{1/\alpha}2^{-1/\alpha}(Z _{1}^{\alpha}(t)+Z_{2}^{\alpha}(t)),\quad x,t\geq 0,\] \[\bar{Z}^{\bar{G}(x)}(t)=\langle\bar{G}(x),\bar{Z}(t)\rangle=x^{1/ \alpha}(\bar{Z}_{1}(t)+\bar{Z}_{2}(t)),\quad x,t\geq 0.\]
_Since both processes \(2^{-1/\alpha}(Z_{1}^{\alpha}+Z_{2}^{\alpha})\) and \(\bar{Z}_{1}+\bar{Z}_{2}\) are \(\alpha\)-stable and have the same finite dimensional distributions, we obtain that_
\[Z^{G(x)}=\bar{Z}^{\bar{G}(x)},\]
_in the sense of distribution. Moreover, the Levy measure of \(Z^{G(x)}\) has the form_
\[x\cdot\frac{{\rm d}v}{v^{\alpha+1}}{\bf 1}_{\{v>0\}},\quad x\geq 0,\]
so it follows from (2.20) that \((G,Z)\) is a generating pair and that the solutions of (2.29) and (2.30) are identical._
_Note that the triplet \((c,\nu_{G(0)},\mu({\rm d}v))\) from Proposition 2.2 is, for both pairs, of the form_
\[c=0,\ \nu_{G(0)}({\rm d}v)\equiv 0,\ \mu({\rm d}v)-\alpha\mbox{-stable},\]
_so it coincides with the triplet \((b)\) in Section 2.2.2. Consequently, the solutions of (2.29) and (2.30) are the same as the solution of the equation_
\[dR(t)=(R(t-))^{1/\alpha}{\rm d}Z^{\alpha}(t),\quad R(0)=R_{0},\quad t\geq 0,\]
_with a one-dimensional \(\alpha\)-stable process \(Z^{\alpha}\)._
_It follows, in particular, that the noise coordinates of a generating equation do not need to be stable processes._
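The mechanism behind Example 2.4 can also be checked numerically: splitting the stable Levy measure between two independent coordinates does not change the Laplace exponent of their sum. The snippet below (an illustration; the choice \(E=(0,1)\), the index \(\alpha\) and the test values of \(b\) are arbitrary) compares \(J_{\nu_{1}}(b)+J_{\nu_{2}}(b)\) with \(c_{\alpha}b^{\alpha}\); replacing \(b\) by \(x^{1/\alpha}b\) then reproduces the common Laplace exponent \(xc_{\alpha}b^{\alpha}\) of both projections.

```python
# Illustration for Example 2.4: with E = (0,1), the Levy measures nu_1 (on E) and nu_2
# (on its complement) of the split stable measure satisfy J_{nu1}(b) + J_{nu2}(b) = c_alpha b^alpha.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha = 1.5
c_alpha = gamma(2.0 - alpha) / (alpha * (alpha - 1.0))

def J(b, lower, upper):
    integrand = lambda v: (np.exp(-b * v) - 1.0 + b * v) * v ** (-alpha - 1.0)
    value, _ = quad(integrand, lower, upper)
    return value

for b in (0.5, 1.0, 2.0):
    print(b, J(b, 0.0, 1.0) + J(b, 1.0, np.inf), c_alpha * b ** alpha)
```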
## 3 Classification of generating equations
### Main results
This section deals with equation (2.1) in the case when the coordinates of the martingale \(Z\) are independent. In view of Proposition 2.2 we are interested in characterizing possible distributions of projections \(Z^{G}\) over all generating pairs \((G,Z)\). By (2.17) the jumps of the projections are necessarily positive. As the coordinates of \(Z\) are independent, they do not jump together. Consequently, we see that, for each \(x\geq 0\) and \(t\geq 0\)
\[\triangle Z^{G(x)}(t)=\langle G(x),\triangle Z(t)\rangle>0\]
holds if and only if, for some \(i=1,2,...,d\),
\[G_{i}(x)\triangle Z_{i}(t)>0,\quad\triangle Z_{j}(t)=0,j\neq i. \tag{3.1}\]
Condition (3.1) means that \(G_{i}(x)\) and \(\triangle Z_{i}(t)\) are of the same sign. We can consider only the case when both are positive, i.e.
\[G_{i}(x)\geq 0,\quad i=1,2,...,d,\ x\geq 0,\qquad\triangle Z_{i}(t)\geq 0, \quad t>0,\]
because the opposite case can be turned into this one by replacing \((G_{i},Z_{i})\) with \((-G_{i},-Z_{i})\), \(i=1,...,d\). The Levy measure \(\nu_{i}({\rm d}y)\) of \(Z_{i}\) is thus concentrated on \((0,+\infty)\) and, in view of (2.9), the Laplace exponent of \(Z_{i}\) takes the form
\[J_{i}(b):=\frac{1}{2}q_{ii}b^{2}+\int_{0}^{+\infty}(e^{-bv}-1+bv)\nu_{i}({\rm d }v),\quad b\geq 0,\ i=1,2,...,d, \tag{3.2}\]
with \(q_{ii}\geq 0\). Recall, \(q_{ii}\) stands on the diagonal of \(Q\) - the covariance matrix of the Wiener part of \(Z\). We will assume that \(J_{i},i=1,2,...,d\) are _regularly varying at zero_. Recall, this means that
\[\lim_{x\to 0^{+}}\frac{J_{i}(bx)}{J_{i}(x)}=\psi_{i}(b),\quad b>0,\qquad i=1,2,...,d,\]
for some function \(\psi_{i}\). In fact \(\psi_{i}\) is a power function, i.e.
\[\psi_{i}(b)=b^{\alpha_{i}},\quad b>0,\]
with some \(-\infty<\alpha_{i}<+\infty\) and \(J_{i}\) is said to vary regularly with index \(\alpha_{i}\). A characterization of a regularly varying Laplace exponent in terms of the corresponding Levy measure is presented in Section 3.2.
The distribution of noise projections are described by the following result.
**Theorem 3.1**: _Let \(Z_{1},...,Z_{d}\) be independent coordinates of the Levy martingale \(Z\) in \(\mathbb{R}^{d}\). Assume that \(Z_{1},...,Z_{d}\) satisfy_
\[\triangle Z_{i}(t)\geq 0\mbox{ a.s. for }t>0\mbox{ and }Z_{i}\mbox{ is of infinite variation} \tag{3.3}\]
_or_
\[\triangle Z_{i}(t)\geq 0\mbox{ a.s. for }t>0\mbox{ and }G(0)=0. \tag{3.4}\]
_Further, let us assume that for all \(i=1,\ldots,d\) the Laplace exponent (3.2) of \(Z_{i}\) varies regularly at zero and the components of the function \(G\) satisfy_
\[G_{i}(x)\geq 0,\ x\in[0,+\infty),\quad G_{i}\mbox{ is continuous on }[0,+\infty).\]
_Then (2.1) generates an affine model if and only if \(F(x)=ax+b\), \(a\in\mathbb{R},b\geq 0\), and the Laplace exponent \(J_{Z^{G(x)}}\) of \(Z^{G(x)}=\langle G(x),Z\rangle\) is of the form_
\[J_{Z^{G(x)}}(b)=x\sum_{k=1}^{g}\eta_{k}b^{\alpha_{k}},\quad\eta_{k}>0,\quad \alpha_{k}\in(1,2],\quad k=1,2,\ldots,g, \tag{3.5}\]
_with some \(1\leq g\leq d\) and \(\alpha_{k}\neq\alpha_{j}\) for \(k\neq j\)._
Theorem 3.1 allows determining the form of the measure \(\mu({\rm d}v)\) in Proposition 2.2.
**Corollary 3.2**: _Let the assumptions of Theorem 3.1 be satisfied. If equation (2.1) generates an affine model then the function \(J_{\mu}\) defined in (2.25) takes the form_
\[J_{\mu}(b)=\sum_{k=l}^{g}\eta_{k}b^{\alpha_{k}},\quad l\in\{1,2\},\quad\eta_{k }>0,\quad\alpha_{k}\in(1,2),\quad k=l,l+1,\ldots,g, \tag{3.6}\]
_with \(1\leq g\leq d\), \(2>\alpha_{l}>...>\alpha_{g}>1\) (for the case \(l=2,g=1\) we set \(J_{\mu}\equiv 0\), which means that \(\mu({\rm d}v)\) disappears). Above \(l=2\) if \(\alpha_{1}=2\) and \(l=1\) otherwise. This means that \(\mu({\rm d}v)\) is a weighted sum of \(g+1-l\) stable measures with indices \(\alpha_{l},...,\alpha_{g}\in(1,2)\), i.e._
\[\mu({\rm d}v)=\tilde{\mu}({\rm d}v):=\frac{d_{l}}{v^{1+\alpha_{l}}}{\bf 1}_{ \{v>0\}}{\rm d}v+...+\frac{d_{g}}{v^{1+\alpha_{g}}}{\bf 1}_{\{v>0\}}{\rm d}v, \tag{3.7}\]
_with \(d_{i}=\eta_{i}/c_{\alpha_{i}},i=l,...,g\), where \(c_{\alpha_{i}}\) is given by (2.11)._
Note that each generating equation can be identified by the numbers \(a,b\) appearing in the formula for the function \(F\) and \(\alpha_{1},...,\alpha_{g};\eta_{1},...,\eta_{g}\) from (3.5). Since \(\nu_{G(0)}({\rm d}v)=0\), see Proposition 3.5 in the sequel, the related generator of \(R\) takes, by (2.23), the form
\[{\cal A}f(x)=cxf^{\prime\prime}(x) +\Big{[}x\Big{(}a+\int_{(1,+\infty)}(1-v)\tilde{\mu}({\rm d}v)\Big{)}+b\Big{]}f^{\prime}(x)\] \[+\int_{(0,+\infty)}[f(x+v)-f(x)-f^{\prime}(x)(1\wedge v)]x\tilde{\mu}({\rm d}v), \tag{3.8}\]
with \(\tilde{\mu}\) in (3.7). Recall, the constant \(c\) above comes from the condition
\[\frac{1}{2}\langle QG(x),G(x)\rangle=cx,\qquad x\geq 0, \tag{3.9}\]
and, in view of Remark 2.3, \(c=\eta_{1}\) if \(\alpha_{1}=2\) and \(c=0\) otherwise. The class of processes with generator of the form (3.8) will be denoted by
\[\mathbb{A}_{g}(a,b;\alpha_{1},\alpha_{2},...,\alpha_{g};\eta_{1},...,\eta_{g}). \tag{3.10}\]
All generating equations with \(d\)-dimensional noise \(Z\) satisfying the assumptions of Theorem 3.1 are thus split into \(d\) disjoint subfamilies providing different short rates. Any two equations from (3.10) with fixed parameters provide the same short rate, hence the same bond prices. For any class (3.10) we construct below a _canonical representation_, which is an equation with the generator required in (3.10) but with the noise dimension reduced from \(d\) to \(g\) and with stable noise coordinates. This construction allows interpreting the parameter \(g\) in (3.10) as the minimal number of random factors necessary to obtain the short rate corresponding to (3.10), while \(\alpha_{1},\alpha_{2},...,\alpha_{g}\) are the stability indices of the noise coordinates. This idea of classification is similar to that of Dai and Singleton applied to multi-factor affine short rates in [9].
**Proposition 3.3** (Canonical representation of \(\mathbb{A}_{g}(a,b;\alpha_{1},\alpha_{2},...,\alpha_{g};\eta_{1},...,\eta_{g})\)): _Let \(R\) be the solution of (2.1) with \(F,G,Z\) satisfying the assumptions of Theorem 3.1. Let \(\tilde{Z}=(\tilde{Z}_{1}^{\alpha_{1}},\tilde{Z}_{2}^{\alpha_{2}},...,\tilde{Z }_{g}^{\alpha_{g}})\) be a Levy martingale with independent stable coordinates with indices \(\alpha_{k},k=1,2,...,g\), respectively, and \(\tilde{G}(x)=(d_{1}^{1/\alpha_{1}}x^{1/\alpha_{1}},...,d_{g}^{1/\alpha_{g}}x^ {1/\alpha_{g}})\), \(x\geq 0\), where \(d_{k}:=\eta_{k}/c_{\alpha_{k}}\) and \(c_{\alpha_{k}}\) are given by (2.11), \(k=1,2,...,g\). Then_
\[J_{Z^{G(x)}}(b)=J_{\tilde{Z}^{\tilde{G}(x)}}(b),\quad b,x\geq 0.\]
_Consequently, if \(\tilde{R}\) is the solution of the equation_
\[\mathrm{d}\tilde{R}(t)=(a\tilde{R}(t)+b)\mathrm{d}t+\sum_{k=1}^{g}d_{k}^{1/ \alpha_{k}}\tilde{R}(t-)^{1/\alpha_{k}}\mathrm{d}\tilde{Z}_{k}(t), \tag{3.11}\]
_then the generators of \(R\) and \(\tilde{R}\) are equal._
Equation (3.11) will be called the _canonical representation_ of the class \(\mathbb{A}_{g}(a,b;\alpha_{1},\alpha_{2},...,\alpha_{g};\eta_{1},...,\eta_{g})\).
**Proof:** By (3.5) we need to show that
\[J_{\tilde{Z}^{\tilde{G}(x)}}(b)=x\sum_{k=1}^{g}\eta_{k}b^{\alpha_{k}},\quad b,x\geq 0.\]
Recall, the Laplace exponent of \(\tilde{Z}_{k}^{\alpha_{k}}\) equals \(J_{k}(b)=c_{\alpha_{k}}b^{\alpha_{k}},k=1,2,...,g\). By independence and the form of \(\tilde{G}\) we have
\[J_{\tilde{Z}^{\tilde{G}(x)}}(b)=\sum_{k=1}^{g}J_{k}(b\tilde{G}_{k}(x))=\sum_{ k=1}^{g}c_{\alpha_{k}}b^{\alpha_{k}}d_{k}x=x\sum_{k=1}^{g}\eta_{k}b^{\alpha_{k}}, \quad b,x\geq 0,\]
as required. The second part of the claim follows from Proposition 2.2(B). \(\Box\)
Clearly, in the case \(d=1\) the noise dimension can not be reduced, so \(g=d=1\) and \(\mathbb{A}_{1}(a,b;2;\eta_{1})\) corresponds to the classical CIR equation (2.26) while \(\mathbb{A}_{1}(a,b;\alpha;\eta_{1}),\alpha\in(1,2)\) to its generalized version (2.27). Both classes are singletons and (2.26), (2.27) are their canonical representations. The alpha-CIR equation from [15] is a canonical representation of the class \(\mathbb{A}_{2}(a,b;2,\alpha;\eta_{1},\eta_{2})\) with \(\alpha\in(1,2)\).
#### 3.1.1 Proofs
The proofs of Theorem 3.1 and Corollary 3.2 are preceded by two auxiliary results, i.e. Proposition 3.4 and Proposition 3.5. The first one provides a useful estimate for the function
\[J_{\rho}(b):=\int_{0}^{+\infty}(e^{-bv}-1+bv)\rho(\mathrm{d}v),\quad b\geq 0, \tag{3.12}\]
where the measure \(\rho(\mathrm{d}v)\) on \((0,+\infty)\) satisfies
\[0<\int_{0}^{+\infty}\left(v^{2}\wedge v\right)\rho\left(\mathrm{d}v\right)<+\infty. \tag{3.13}\]
The second result shows that if all components of \(Z\) are of infinite variation then \(G(0)=0\).
**Proposition 3.4**: _Let \(J_{\rho}\) be a function given by (3.12) where the measure \(\rho\) satisfies (3.13). Then the function \((0,+\infty)\ni b\mapsto J_{\rho}(b)/b\) is strictly increasing and \(\lim_{b\to 0+}J_{\rho}(b)/b=0\), while the function \((0,+\infty)\ni b\mapsto J_{\rho}(b)/b^{2}\) is strictly decreasing and \(\lim_{b\to+\infty}J_{\rho}(b)/b^{2}=0\). This yields, in particular, that, for any \(b_{0}>0\),_
\[\frac{J_{\rho}\left(b_{0}\right)}{b_{0}^{2}}b^{2}<J_{\rho}(b)<\frac{J_{\rho} \left(b_{0}\right)}{b_{0}}b,\quad b\in(0,b_{0})\,. \tag{3.14}\]
**Proof:** Let us start from the observation that the function
\[t\mapsto\frac{(1-e^{-t})t}{e^{-t}-1+t},\quad t\geq 0,\]
is strictly decreasing, with limit \(2\) at zero and \(1\) at infinity. This implies
\[(e^{-t}-1+t)<(1-e^{-t})t<2(e^{-t}-1+t),\quad t\in(0,+\infty), \tag{3.15}\]
and, consequently,
\[\int_{0}^{+\infty}(e^{-bv}-1+bv)\rho(\mathrm{d}v)<\int_{0}^{+\infty}(1-e^{-bv })bv\ \rho(\mathrm{d}v)<2\int_{0}^{+\infty}(e^{-bv}-1+bv)\rho(\mathrm{d}v),\quad b>0.\]
This means, however, that
\[J_{\rho}(b)<bJ_{\rho}^{\prime}(b)<2J_{\rho}(b),\quad b>0.\]
So, we have
\[\frac{1}{b}<\frac{J_{\rho}^{\prime}(b)}{J_{\rho}(b)}=\frac{d}{db}\ln J_{\rho} (b)<\frac{2}{b},\quad b>0,\]
and integration over some interval \([b_{1},b_{2}]\), where \(b_{2}>b_{1}>0\), yields
\[\ln b_{2}-\ln b_{1}<\ln J_{\rho}\left(b_{2}\right)-\ln J_{\rho}\left(b_{1} \right)<2\ln b_{2}-2\ln b_{1}\]
which gives that
\[\frac{J_{\rho}\left(b_{2}\right)}{b_{2}}>\frac{J_{\rho}\left(b_{1}\right)}{b_ {1}},\quad\frac{J_{\rho}\left(b_{2}\right)}{b_{2}^{2}}<\frac{J_{\rho}\left(b_{ 1}\right)}{b_{1}^{2}}.\]
To see that \(\lim_{b\to 0+}J_{\rho}\left(b\right)/b=0\) it is sufficient to use de l'Hopital's rule, (3.13) and dominated convergence
\[\lim_{b\to 0+}\frac{J_{\rho}\left(b\right)}{b}=\lim_{b\to 0+}J_{\rho}^{ \prime}\left(b\right)=\lim_{b\to 0+}\int_{0}^{+\infty}(1-e^{-bv})v\ \rho(\mathrm{d}v)=0.\]
To see that \(\lim_{b\rightarrow+\infty}J_{\rho}\left(b\right)/b^{2}=0\) we also use de l'Hopital's rule, (3.13) and dominated convergence. If \(\int_{0}^{+\infty}v\ \rho\left(\mathrm{d}v\right)<+\infty\), then we have
\[\lim_{b\rightarrow+\infty}\frac{J_{\rho}\left(b\right)}{b^{2}}=\lim_{b \rightarrow+\infty}\frac{J_{\rho}^{\prime}\left(b\right)}{2b}=\frac{\int_{0}^ {+\infty}v\rho\left(\mathrm{d}v\right)}{+\infty}=0.\]
If \(\int_{0}^{+\infty}v\ \rho\left(\mathrm{d}v\right)=+\infty\) then we apply de l'Hopital's rule twice and obtain
\[\lim_{b\rightarrow+\infty}\frac{J_{\rho}\left(b\right)}{b^{2}}=\lim_{b \rightarrow+\infty}\frac{J_{\rho}^{\prime}\left(b\right)}{2b}=\lim_{b \rightarrow+\infty}\frac{J_{\rho}^{\prime\prime}\left(b\right)}{2}=\frac{1} {2}\lim_{b\rightarrow+\infty}\int_{0}^{+\infty}e^{-bv}v^{2}\ \rho(\mathrm{d}v)=0.\]
\(\Box\)
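For a concrete illustration of Proposition 3.4 (the measure is chosen here only as an example), take \(\rho(\mathrm{d}v)=e^{-v}\mathrm{d}v\), which satisfies (3.13). A direct computation gives
\[J_{\rho}(b)=\int_{0}^{+\infty}(e^{-bv}-1+bv)e^{-v}\,\mathrm{d}v=\frac{1}{b+1}-1+b=\frac{b^{2}}{b+1},\quad b\geq 0,\]
so \(J_{\rho}(b)/b=b/(b+1)\) is strictly increasing with limit \(0\) at zero, while \(J_{\rho}(b)/b^{2}=1/(b+1)\) is strictly decreasing with limit \(0\) at infinity, in agreement with the proposition.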
**Proposition 3.5**: _If \(\left(G,Z\right)\) is a generating pair and all components of \(Z\) are of infinite variation then \(G(0)=0\)._
**Proof:** Let \(\left(G,Z\right)\) be a generating pair. Since the components of \(Z\) are independent, its characteristic triplet (2.6) is such that \(Q=\left\{q_{i,j}\right\}\) is a diagonal matrix, i.e.
\[q_{ii}\geq 0,\quad q_{i,j}=0,\qquad i\neq j,\quad i,j=1,2,...,d,\]
and the support of \(\nu(\mathrm{d}y)\) is contained in the positive half-axes of \(\mathbb{R}^{d}\), see [18] p.67. On the \(i^{th}\) positive half-axis
\[\nu(\mathrm{d}y)=\nu_{i}(dy_{i}),\qquad y=(y_{1},y_{2},...,y_{d}), \tag{3.16}\]
for \(i=1,2,...,d\). The \(i^{th}\) coordinate of \(Z\) is of infinite variation if and only if its Laplace exponent (3.2) is such that \(q_{ii}>0\) or
\[\int_{0}^{1}y_{i}\nu_{i}(\mathrm{d}y_{i})=+\infty, \tag{3.17}\]
see [7, Lemma 2.12]. It follows from (2.19) that
\[\frac{1}{2}\langle QG(x),G(x)\rangle=\frac{1}{2}\sum_{j=1}^{d}q_{jj}G_{j}^{2}( x)=cx,\]
so if \(q_{ii}>0\) then \(G_{i}(0)=0\). If it is not the case, using (3.16) and (2.18) we see that the integral
\[\int_{(0,+\infty)}v\nu_{G(0)}(\mathrm{d}v) =\int_{\mathbb{R}_{+}^{d}}\langle G(0),y\rangle\nu(\mathrm{d}y)\] \[=\sum_{j=1}^{d}\int_{(0,+\infty)}G_{j}(0)y_{j}\ \nu_{j}(\mathrm{d}y_{j})=\sum_{j=1}^{d}G_{j}(0)\int_{(0,+\infty)}y_{j}\ \nu_{j}(\mathrm{d}y_{j}),\]
is finite, so if (3.17) holds then \(G_{i}(0)=0\). \(\Box\)
**Proof of Theorem 3.1:** By assumption (3.3) and Proposition 3.5 or by assumption (3.4) we have \(G(0)=0\), so it follows from Remark 2.3 that
\[J_{Z^{G(x)}}(b)=J_{1}(bG_{1}(x))+J_{2}(bG_{2}(x))+...+J_{d}(bG_{d}(x))=x\tilde{ J}_{\mu}(b),\quad b,x\geq 0, \tag{3.18}\]
where \(\tilde{J}_{\mu}(b)=cb^{2}+J_{\mu}(b)\), \(c\geq 0\) and \(J_{\mu}(b)\) is given by (2.25). This yields
\[\frac{J_{1}\left(b\cdot G_{1}(x)\right)}{J_{1}\left(G_{1}(x)\right)}\cdot\frac{J _{1}\left(G_{1}(x)\right)}{x}+\ldots+\frac{J_{d}\left(b\cdot G_{d}(x)\right)}{ J_{d}\left(G_{d}(x)\right)}\cdot\frac{J_{d}\left(G_{d}(x)\right)}{x}=\tilde{J}_{\mu}(b), \tag{3.19}\]
where in the case \(G_{i}(x)=0\) we set \(\frac{J_{i}\left(b\cdot G_{i}(x)\right)}{J_{i}\left(G_{i}(x)\right)}\cdot \frac{J_{i}\left(G_{i}(x)\right)}{x}=0\). Without loss of generality we may assume that \(J_{1}\), \(J_{2}\),\(\ldots\),\(J_{d}\) are non-zero (thus positive for positive arguments). By assumption, \(J_{i}\), \(i=1,2,\ldots,d\) vary regularly at \(0\) with some indices \(\alpha_{i}\), \(i=1,2,\ldots,d\), so for \(b>0\)
\[\lim_{y\to 0+}\frac{J_{i}\left(b\cdot y\right)}{J_{i}(y)}=b^{\alpha_{i}}. \tag{3.20}\]
Assume that
\[\alpha_{1}=\ldots=\alpha_{i(1)}>\alpha_{i(1)+1}=\ldots=\alpha_{i(2)}>\ldots \ldots>\alpha_{i(g-1)+1}=\ldots=\alpha_{i(g)}=\alpha_{d},\]
where \(i(g)=d\). Let us denote \(i_{0}=0\) and
\[\eta_{k}(x):=\frac{J_{i(k-1)+1}\left(G_{i(k-1)+1}(x)\right)+\ldots+J_{i(k)} \left(G_{i(k)}(x)\right)}{x},\quad k=1,2,\ldots,g. \tag{3.21}\]
We can rewrite equation (3.19) in the form
\[\sum_{k=1}^{g}\left(\sum_{i=i(k-1)+1}^{i(k)}\frac{J_{i}\left(b\cdot G_{i}(x) \right)}{J_{i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i}(x)\right)}{x} \right)=\tilde{J}_{\mu}(b). \tag{3.22}\]
By passing to the limit as \(x\to 0+\), from (3.20) and (3.22) we get
\[b^{\alpha_{i(1)}}\left(\lim_{x\to 0+}\eta_{1}(x)\right)+\ldots+b^{\alpha_{i(g)}} \left(\lim_{x\to 0+}\eta_{g}(x)\right)=\tilde{J}_{\mu}(b), \tag{3.23}\]
thus
\[\tilde{J}_{\mu}(b)=\sum_{k=1}^{g}\eta_{k}b^{\alpha_{i(k)}}, \tag{3.24}\]
provided that the limits \(\eta_{k}:=\lim_{x\to 0+}\eta_{k}(x)\), \(k=1,2,\ldots,g\), exist. Thus it remains to prove that for \(k=1,2,\ldots,g\) the limits \(\lim_{x\to 0+}\eta_{k}(x)\) indeed exist and that \(\alpha_{i(k)}\in(1,2]\).
First we will prove that \(\lim_{x\to 0+}\eta_{g}(x)\) exists. Assume, to the contrary, that this is not true, so
\[\limsup_{x\to 0+}\eta_{g}(x)-\liminf_{x\to 0+}\eta_{g}(x)\geq\delta>0. \tag{3.25}\]
It follows from (3.18) that
\[\frac{J_{1}(G_{1}(x))+J_{2}(G_{2}(x))+...+J_{d}(G_{d}(x))}{x}=\sum_{k=1}^{g} \eta_{k}(x)=\tilde{J}_{\mu}(1). \tag{3.26}\]
Let now \(b_{0}\in(0,1)\) be small enough so that
\[\tilde{J}_{\mu}(1)b_{0}^{\alpha_{i(g-1)}-\alpha_{i(g)}}<\frac{\delta}{6}. \tag{3.27}\]
Let us set in (3.22) \(b=b_{0}\) and then divide both sides of (3.22) by \(b_{0}^{\alpha_{i(g)}}\). It follows from (3.26) that each term \(\frac{J_{i}\left(G_{i}(x)\right)}{x}\), \(i=1,2,\ldots,d\), is bounded by \(\tilde{J}_{\mu}(1)\). From this and (3.20) for \(x>0\) sufficiently close to \(0\) we have
\[\eta_{g}(x)-\frac{\delta}{6}\leq\frac{1}{b_{0}^{\alpha_{i(g)}}}\left(\sum_{i=i \left(g-1\right)+1}^{i\left(g\right)}\frac{J_{i}\left(b_{0}\cdot G_{i}(x) \right)}{J_{i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i}(x)\right)}{x} \right)\leq\eta_{g}(x)+\frac{\delta}{6}\]
and
\[\frac{1}{b_{0}^{\alpha_{i(g)}}}\sum_{k=1}^{g-1}\left(\sum_{i=i\left( k-1\right)+1}^{i\left(k\right)}\frac{J_{i}\left(b_{0}\cdot G_{i}(x)\right)}{J_ {i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i}(x)\right)}{x}\right) \leq\sum_{k=1}^{g-1}2b_{0}^{\alpha_{i(k)}-\alpha_{i(g)}}\eta_{k}(x)\] \[\leq 2b_{0}^{\alpha_{i(g-1)}-\alpha_{i(g)}}\tilde{J}_{\mu}(1)\]
thus from (3.22), two last estimates and (3.27)
\[\eta_{g}(x)-\frac{\delta}{6}\leq\frac{\tilde{J}_{\mu}(b_{0})}{b_{0}^{\alpha_{ i(g)}}}\leq\eta_{g}(x)+\frac{\delta}{6}+2\tilde{J}_{\mu}(1)b_{0}^{\alpha_{i(g-1)}- \alpha_{i(g)}}<\eta_{g}(x)+\frac{\delta}{2}.\]
But this contradicts (3.25) since we must have
\[\limsup_{x\to 0+}\eta_{g}(x)\leq\frac{\tilde{J}_{\mu}(b_{0})}{b_{0}^{\alpha_{ i(g)}}}+\frac{\delta}{6},\quad\liminf_{x\to 0+}\eta_{g}(x)\geq\frac{\tilde{J}_{ \mu}(b_{0})}{b_{0}^{\alpha_{i(g)}}}-\frac{\delta}{2}.\]
Having proved the existence of the limits \(\lim_{x\to 0+}\eta_{g}(x)\),..., \(\lim_{x\to 0+}\eta_{g-m+1}(x)\) we can proceed similarly to prove the existence of the limit \(\lim_{x\to 0+}\eta_{g-m}(x)\). Assume that \(\lim_{x\to 0+}\eta_{g-m}(x)\) does not exist, so
\[\limsup_{x\to 0+}\eta_{g-m}(x)-\liminf_{x\to 0+}\eta_{g-m}(x)\geq\delta>0. \tag{3.28}\]
Let \(b_{0}\in(0,1)\) be small enough so that
\[\tilde{J}_{\mu}(1)b_{0}^{\alpha_{i(g-m-1)}-\alpha_{i(g-m)}}<\frac{\delta}{8}. \tag{3.29}\]
Let us set in (3.22) \(b=b_{0}\) and then divide both sides of (3.22) by \(b_{0}^{\alpha_{i(g-m)}}\). For \(x>0\) sufficiently close to \(0\) we have
\[\eta_{g-m}(x)-\frac{\delta}{8}\leq\frac{1}{b_{0}^{\alpha_{i(g-m)} }}\sum_{i=i\left(g-m-1\right)+1}^{i\left(g-m\right)}\frac{J_{i}\left(b_{0} \cdot G_{i}(x)\right)}{J_{i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i }(x)\right)}{x}\leq\eta_{g-m}(x)+\frac{\delta}{8},\]
\[\frac{1}{b_{0}^{\alpha_{i(g-m)}}}\sum_{k=1}^{g-m-1}\left(\sum_{i= i\left(k-1\right)+1}^{i\left(k\right)}\frac{J_{i}\left(b_{0}\cdot G_{i}(x) \right)}{J_{i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i}(x)\right)}{x} \right) \leq\sum_{k=1}^{g-m-1}2b_{0}^{\alpha_{i(k)}-\alpha_{i(g-m)}}\eta_{k}(x)\] \[\leq 2b_{0}^{\alpha_{i(g-m-1)}-\alpha_{i(g-m)}}\tilde{J}_{\mu}(1)\]
and
\[\sum_{k=g-m+1}^{g}\frac{b_{0}^{\alpha_{i(k)}}\eta_{k}}{b_{0}^{ \alpha_{i(g-m)}}}-\frac{\delta}{8} \leq\frac{1}{b_{0}^{\alpha_{i(g-m)}}}\sum_{k=g-m+1}^{g}\sum_{i= i\left(k-1\right)+1}^{i\left(k\right)}\frac{J_{i}\left(b_{0}\cdot G_{i}(x) \right)}{J_{i}\left(G_{i}(x)\right)}\cdot\frac{J_{i}\left(G_{i}(x)\right)}{x}\] \[\leq\sum_{k=g-m+1}^{g}\frac{b_{0}^{\alpha_{i(k)}}\eta_{k}}{b_{0}^ {\alpha_{i(g-m)}}}+\frac{\delta}{8}\]
thus from (3.22), last three estimates and (3.29)
\[\eta_{g-m}(x)-\frac{\delta}{4} \leq\frac{\tilde{J}_{\mu}(b_{0})}{b_{0}^{\alpha_{i(g-m)}}}-\sum_{k=g-m+1}^{g}\frac{b_{0}^{\alpha_{i(k)}}\eta_{k}}{b_{0}^{\alpha_{i(g-m)}}}\] \[\leq\eta_{g-m}(x)+\frac{\delta}{4}+2\tilde{J}_{\mu}(1)b_{0}^{\alpha_{i(g-m-1)}-\alpha_{i(g-m)}}<\eta_{g-m}(x)+\frac{\delta}{2}.\]
But this contradicts (3.28).
Now we are left with the proof that for \(k=1,2,\ldots,g\), \(\alpha_{i(k)}\in(1,2]\). Since the Laplace exponent of \(Z_{i}\) is given by (3.2), by Proposition 3.4 we necessarily have that \(J_{i}\) varies regularly with index \(\alpha_{i}\in[1,2],i=1,2,...,d\). Thus it remains to prove that \(\alpha_{i}>1,i=1,2,...,d\). If it was not true we would have \(\alpha_{i(g)}=1\) in (3.24) and \(\eta_{g}>0\). Then
\[\lim_{b\to 0+}\tilde{J}_{\mu}(b)/b=\lim_{b\to 0+}J_{\mu}(b)/b=\eta_{g}>0,\]
but, again, by Proposition 3.4 it is not possible. \(\Box\)
**Proof of Corollary 3.2 :** From Remark 2.3 and Theorem 3.1 we know that
\[J_{Z^{G(x)}}(b)=xcb^{2}+xJ_{\mu}(b)=x\sum_{k=1}^{g}\eta_{k}b^{ \alpha_{k}},\]
where \(1\leq g\leq d\), \(\eta_{k}>0\), \(\alpha_{k}\in(1,2]\), \(\alpha_{k}\neq\alpha_{j}\), \(k,j=1,2,\ldots,g\), \(c\geq 0\). Without loss of generality we may assume that \(2\geq\alpha_{1}>\alpha_{2}>\ldots>\alpha_{g}>1\). Thus, since the Laplace exponent is nonnegative, \(xJ_{\mu}(b)\) is of the form
\[xJ_{\mu}(b)=x\sum_{k=1}^{g}\eta_{k}b^{\alpha_{k}},\qquad\text{ if }c=0, \tag{3.30}\]
or
\[xJ_{\mu}(b)=x\left[(\eta_{1}-c)b^{2}+\sum_{k=2}^{g}\eta_{k}b^{ \alpha_{k}}\right],\qquad\text{if }0<c\leq\eta_{1}\text{ and }\alpha_{1}=2. \tag{3.31}\]
In the case (3.30) we need to show that \(\alpha_{1}<2\). If it was not true, we would have
\[\lim_{b\to+\infty}\frac{J_{\mu}(b)}{b^{2}}=\eta_{1}>0,\]
but this contradicts Proposition 3.4. In the same way we prove that \(\eta_{1}=c\) in (3.31). This proves the required representation (3.6). \(\Box\)
### Characterization of regularly varying Laplace exponents
In this section we reformulate the assumption that \(J_{i},i=1,...,d\), vary regularly at zero in terms of the behaviour of the Levy measures of \(Z_{i},i=1,...,d\). As our considerations are componentwise, we write for simplicity \(\nu(\mathrm{d}v):=\nu_{i}(\mathrm{d}v)\) for the Levy measure of \(Z_{i}\) and \(J:=J_{i}\) for its Laplace exponent.
**Proposition 3.6**: _Let \(\nu({\rm d}v)\) be such that_
\[\int_{0}^{+\infty}(y^{2}\wedge y)\ \nu(dy)<+\infty. \tag{3.32}\]
_Let \(\tilde{\nu}({\rm d}v)\) be the measure_
\[\tilde{\nu}({\rm d}v):=v^{2}\nu({\rm d}v),\]
_and \(\tilde{F}\) its cumulative distribution function, i.e._
\[\tilde{F}(v):=\tilde{\nu}((0,v))=\int_{0}^{v}u^{2}\nu({\rm d}u),\quad v\geq 0.\]
_Then, for \(\alpha\in(1,2)\), the following conditions are equivalent_
\[\lim_{x\to 0^{+}}\frac{J(bx)}{J(x)}=b^{\alpha},\quad b\geq 0, \tag{3.33}\]
\[\lim_{y\to+\infty}\frac{\tilde{F}(by)}{\tilde{F}(y)}=b^{2-\alpha},\quad b\geq 0.\]
_If, additionally, \(\nu({\rm d}v)\) has a density function \(g(v)\) such that_
\[\int_{0}^{+\infty}v^{2}g(v)\,{\rm d}v=+\infty, \tag{3.34}\]
_then (3.33) is equivalent to the condition_
\[\lim_{y\to+\infty}\frac{g(by)}{g(y)}=b^{-\alpha-1},\quad b>0.\]
**Proof:** Under (3.32) the function \(J\) given by (3.12) is well defined for \(b\geq 0\), twice differentiable and
\[J^{\prime}(b)=\int_{0}^{+\infty}v(1-e^{-bv})\nu({\rm d}v),\quad J^{\prime \prime}(b)=\int_{0}^{+\infty}v^{2}e^{-bv}\nu({\rm d}v),\quad b\geq 0,\]
see [17], Lemma 8.1 and Lemma 8.2. This implies that
\[\lim_{x\to 0^{+}}\frac{J(bx)}{J(x)} =b\cdot\lim_{x\to 0^{+}}\frac{J^{\prime}(bx)}{J^{\prime}(x)}=b^{2} \cdot\lim_{x\to 0^{+}}\frac{J^{\prime\prime}(bx)}{J^{\prime\prime}(x)}\] \[=b^{2}\cdot\lim_{x\to 0^{+}}\frac{\int_{0}^{+\infty}e^{-bxv}v^{2} \nu({\rm d}v)}{\int_{0}^{+\infty}e^{-xv}v^{2}\nu({\rm d}v)}.\]
Consequently, by (3.33)
\[\lim_{x\to 0^{+}}\frac{\int_{0}^{+\infty}e^{-bxv}v^{2}\nu({\rm d}v)}{ \int_{0}^{+\infty}e^{-xv}v^{2}\nu({\rm d}v)}=b^{\alpha-2}. \tag{3.35}\]
Notice that the left side is a quotient of two Laplace transforms of the measure \(\tilde{\nu}(\mathrm{d}v)\). By the Tauberian theorem, see Theorem 1, Sec. XIII.5 in [13], we have that (3.35) holds if and only if
\[\frac{\tilde{F}(by)}{\tilde{F}(y)}\underset{y\to+\infty}{\longrightarrow}b^{2 -\alpha},\quad b\geq 0.\]
If \(\nu({\rm d}v)\) has a density \(g(v)\) satisfying (3.34) then
\[\lim_{y\to+\infty}\frac{\tilde{F}(by)}{\tilde{F}(y)} =\lim_{y\to+\infty}\frac{\int_{0}^{by}u^{2}g(u){\rm d}u}{\int_{0}^ {y}u^{2}g(u){\rm d}u}=\lim_{y\to+\infty}\frac{b\cdot(by)^{2}g(by)}{y^{2}g(y)}\] \[=b^{3}\cdot\lim_{y\to+\infty}\frac{g(by)}{g(y)}.\]
It follows that
\[\lim_{y\to+\infty}\frac{g(by)}{g(y)}=b^{-\alpha-1},\]
which proves the result. \(\Box\)
**Remark 3.7**: _By general characterization of regularly varying functions we see that the functions \(\tilde{F}\) and \(g\) from Proposition 3.6 must be of the forms_
\[\tilde{F}(b)=b^{2-\alpha}L(b),\quad b\geq 0,\]
\[g(b)=b^{-\alpha-1}\tilde{L}(b),\quad b\geq 0,\]
_where \(L\) and \(\tilde{L}\) are slowly varying functions at \(+\infty\), i.e._
\[\frac{L(by)}{L(y)}\underset{y\to+\infty}{\longrightarrow}1,\quad\frac{\tilde{ L}(by)}{\tilde{L}(y)}\underset{y\to+\infty}{\longrightarrow}1.\]
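A small numerical experiment illustrates Proposition 3.6: the regular variation of \(J\) at zero is governed solely by the tail of the Levy measure at infinity. In the sketch below the Levy density is taken (as an ad hoc example) equal to \(1\) on \((0,1)\) and to \(v^{-\alpha-1}\) on \([1,+\infty)\); the ratio \(J(bx)/J(x)\) approaches \(b^{\alpha}\) as \(x\) decreases, even though the density near zero is not of stable type.

```python
# Illustration of Proposition 3.6 with an ad hoc Levy density: 1 on (0,1) and
# v^{-alpha-1} on [1, inf). Only the tail matters: J(bx)/J(x) -> b^alpha as x -> 0+.
import numpy as np
from scipy.integrate import quad

alpha, b = 1.5, 2.0

def J(lam):
    bracket = lambda v: np.exp(-lam * v) - 1.0 + lam * v
    head, _ = quad(bracket, 0.0, 1.0)                                   # density 1 near 0
    tail, _ = quad(lambda v: bracket(v) * v ** (-alpha - 1.0), 1.0, np.inf)
    return head + tail

for x in (1e-1, 1e-2, 1e-3):
    print(f"x={x:.0e}:  J(bx)/J(x) = {J(b * x) / J(x):.4f}   b^alpha = {b ** alpha:.4f}")
```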
### Generating equations on a plane
In this section we characterize all equations (2.1), with \(d=2\), which generate affine models by a direct description of the classes \(\mathbb{A}_{1}(a,b;\alpha_{1};\eta_{1})\) and \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\). Our analysis requires an additional regularity assumption that the components of \(G\) are strictly positive outside zero and
\[\frac{G_{2}(\cdot)}{G_{1}(\cdot)}\in C^{1}(0,+\infty). \tag{3.36}\]
Then \(\mathbb{A}_{1}(a,b;\alpha_{1};\eta_{1})\) consists of the following equations
\[\bullet\quad{\rm d}R(t)=(aR(t)+b){\rm d}t+c_{0}R(t)^{1/\alpha_{1}}\Big{(}G_{1 }{\rm d}Z_{1}(t)+G_{2}{\rm d}Z_{2}(t)\Big{)},\]
where \(c_{0}=(\frac{\eta_{1}}{c_{\alpha_{1}}})^{\frac{1}{\alpha_{1}}}\), \(G_{1},G_{2}\) are positive constants and \(G_{1}Z_{1}(t)+G_{2}Z_{2}(t)\) is an \(\alpha_{1}\)-stable process,
\[\bullet\quad{\rm d}R(t)=(aR(t)+b){\rm d}t+G_{1}(R(t-)){\rm d}Z_{1}(t)+\left( \frac{\eta_{1}R(t-)-c_{1}G_{1}^{\alpha_{1}}(R(t-))}{c_{2}}\right)^{1/\alpha_{ 1}}{\rm d}Z_{2}(t),\]
where \(c_{1},c_{2}>0\), \(G_{1}(\cdot)\) is any function such that
\[G_{1}(x)>0,\quad\frac{\eta_{1}x-c_{1}G_{1}^{\alpha_{1}}(x)}{c_{2}}>0,\qquad x>0,\]
and \(Z_{1},Z_{2}\) are stable processes with index \(\alpha_{1}\).
The class \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\) is a singleton.
The classification above follows directly from the following result.
**Theorem 3.8**: _Let \(G(x)=(G_{1}(x),G_{2}(x))\) be continuous functions such that \(G_{1}(x)>0,G_{2}(x)>0,x>0\) and (3.36) holds. Let \(Z(t)=(Z_{1}(t),Z_{2}(t))\) have independent coordinates of infinite variation with Laplace exponents varying regularly at zero with indices \(\alpha_{1},\alpha_{2}\), respectively, where \(2\geq\alpha_{1}\geq\alpha_{2}>1\)._
* _If_ \(\tilde{J}_{\mu}\) _is of the form_ \[\tilde{J}_{\mu}(b)=\eta_{1}b^{\alpha_{1}},\quad b\geq 0,\] (3.37) _with_ \(\eta_{1}>0,1<\alpha_{1}\leq 2\)_, then_ \((G,Z)\) _is a generating pair if and only if one of the following two cases holds:_
* \[G(x)=c_{0}\ x^{1/\alpha_{1}}\cdot\left(\begin{array}{c}G_{1}\\ G_{2},\end{array}\right),\quad x\geq 0,\] (3.38) _where_ \(c_{0}=(\frac{\eta_{1}}{c_{\alpha_{1}}})^{\frac{1}{\alpha_{1}}},G_{1}>0,G_{2}>0\) _and the process_ \[G_{1}Z_{1}(t)+G_{2}Z_{2}(t),\quad t\geq 0,\] _is_ \(\alpha_{1}\)_-stable._
* \(G(x)\) _is such that_ \[c_{1}G_{1}^{\alpha_{1}}(x)+c_{2}G_{2}^{\alpha_{1}}(x)=\eta_{1}x,\quad x\geq 0,\] (3.39) _with some constants_ \(c_{1},c_{2}>0\)_, and_ \(Z_{1},Z_{2}\) _are_ \(\alpha_{1}\)_-stable processes._
* _If_ \(\tilde{J}_{\mu}\) _is of the form_ \[\tilde{J}_{\mu}(b)=\eta_{1}b^{\alpha_{1}}+\eta_{2}b^{\alpha_{2}},\quad b\geq 0,\] (3.40) _with_ \(\eta_{1},\eta_{2}>0,2\geq\alpha_{1}>\alpha_{2}>1\) _then_ \((G,Z)\) _is a generating pair if and only if_ \[G_{1}(x)=\left(\frac{\eta_{1}}{c_{1}}x\right)^{1/\alpha_{1}},\quad G_{2}(x)= \left(\frac{\eta_{2}}{d_{2}}x\right)^{1/\alpha_{2}},\quad x\geq 0,\] (3.41) _with some_ \(c_{1},d_{2}>0\) _and_ \(Z_{1}\) _is_ \(\alpha_{1}\)_-stable,_ \(Z_{2}\) _is_ \(\alpha_{2}\)_-stable._
**Proof:** In view of Theorem 3.1 the generating pairs \((G,Z)\) are such that
\[J_{1}(bG_{1}(x))+J_{2}(bG_{2}(x))=x\tilde{J}_{\mu}(b),\quad b,x\geq 0, \tag{3.42}\]
where \(\tilde{J}_{\mu}\) takes the form (3.37) or (3.40). We deduce from (3.42) the form of \(G\) and characterize the noise \(Z\). First let us consider the case when
\[\left(\frac{G_{2}(x)}{G_{1}(x)}\right)^{\prime}=0,\qquad x>0. \tag{3.43}\]
Then \(G(x)\) can be written in the form
\[G(x)=g(x)\cdot\left(\begin{array}{c}G_{1}\\ G_{2},\end{array}\right),\quad x\geq 0,\]
with some function \(g(x)\geq 0,x\geq 0\), and constants \(G_{1}>0,G_{2}>0\). Equation (2.1) amounts then to
\[dR(t) =F(R(t))\mathrm{d}t+g(R(t-))\left(G_{1}dZ_{1}(t)+G_{2}dZ_{2}(t)\right)\] \[=F(R(t))\mathrm{d}t+g(R(t-))d\tilde{Z}(t),\quad t\geq 0,\]
which is an equation driven by the one-dimensional Levy process \(\tilde{Z}(t):=G_{1}Z_{1}(t)+G_{2}Z_{2}(t)\). It follows that \(\tilde{Z}\) is \(\alpha_{1}\)-stable with \(\alpha_{1}\in(1,2]\) and that \(g(x)=c_{0}x^{1/\alpha_{1}},c_{0}>0\). Notice that \(Z^{G(x)}(t)=c_{0}x^{\frac{1}{\alpha_{1}}}\tilde{Z}(t)\), so \(J_{Z^{G(x)}}(b)=c_{\alpha_{1}}(c_{0}x^{\frac{1}{\alpha_{1}}}b)^{\alpha_{1}}=xc_{0}^{\alpha_{1}}c_{\alpha_{1}}b^{\alpha_{1}}\) and \(c_{0}=(\frac{\eta_{1}}{c_{\alpha_{1}}})^{\frac{1}{\alpha_{1}}}\). Hence (3.37) holds and this proves \((Ia)\).
If (3.43) is not satisfied, then
\[\left(\frac{G_{2}(x)}{G_{1}(x)}\right)^{\prime}\neq 0,\quad x\in(\underline{x}, \bar{x}), \tag{3.44}\]
for some interval \((\underline{x},\bar{x})\subset(0,+\infty)\). In the rest of the proof we consider this case and prove \((Ib)\) and \((II)\).
\((Ib)\) From the equation
\[J_{1}(bG_{1}(x))+J_{2}(bG_{2}(x))=x\eta_{1}b^{\alpha_{1}},\quad b\geq 0,\ x\geq 0, \tag{3.45}\]
we explicitly determine unknown functions. Inserting \(b/G_{1}(x)\) for \(b\) yields
\[J_{1}(b)+J_{2}\left(b\frac{G_{2}(x)}{G_{1}(x)}\right)=\eta_{1}\frac{x}{G_{1}^ {\alpha_{1}}(x)}b^{\alpha_{1}},\quad b\geq 0,\quad x>0. \tag{3.46}\]
Differentiation over \(x\) yields
\[J_{2}^{\prime}\left(b\frac{G_{2}(x)}{G_{1}(x)}\right)\cdot b\left(\frac{G_{2} (x)}{G_{1}(x)}\right)^{\prime}=\eta_{1}\left(\frac{x}{G_{1}^{\alpha_{1}}(x)} \right)^{\prime}b^{\alpha_{1}},\quad b\geq 0,\quad x>0.\]
Using (3.44) and dividing by \(\left(\frac{G_{2}(x)}{G_{1}(x)}\right)^{\prime}\) leads to
\[J_{2}^{\prime}\left(b\frac{G_{2}(x)}{G_{1}(x)}\right)\cdot b=\eta_{1}\frac{ \left(\frac{x}{G_{1}^{\alpha_{1}}(x)}\right)^{\prime}}{\left(\frac{G_{2}(x)}{ G_{1}(x)}\right)^{\prime}}\cdot b^{\alpha_{1}},\quad b\geq 0,\quad x\in(\underline{x}, \bar{x}).\]
By inserting \(b\frac{G_{1}(x)}{G_{2}(x)}\) for \(b\) one computes the derivative of \(J_{2}\):
\[J_{2}^{\prime}(b)=\eta_{1}\frac{\left(\frac{x}{G_{1}^{\alpha_{1}}(x)}\right) ^{\prime}\left(\frac{G_{1}(x)}{G_{2}(x)}\right)^{\alpha_{1}-1}}{\left(\frac{ G_{2}(x)}{G_{1}(x)}\right)^{\prime}}\cdot b^{\alpha_{1}-1},\quad b>0,\quad x\in( \underline{x},\bar{x}).\]
Fixing \(x\) and integrating over \(b\) provides
\[J_{2}(b)=c_{2}b^{\alpha_{1}},\quad b>0, \tag{3.47}\]
with some \(c_{2}\geq 0\). Actually \(c_{2}>0\) since \(Z_{2}\) is of infinite variation and \(J_{2}\) cannot vanish.
By the symmetry of (3.45) the same conclusion holds for \(J_{1}\), i.e.
\[J_{1}(b)=c_{1}b^{\alpha_{1}},\quad b>0, \tag{3.48}\]
with \(c_{1}>0\). Using (3.47) and (3.48) in (3.45) gives us (3.39). This proves \((Ib)\).
\((II)\) Solving the equation
\[J_{1}(bG_{1}(x))+J_{2}(bG_{2}(x))=x(\eta_{1}b^{\alpha_{1}}+\eta_{2}b^{\alpha_{2} }),\quad b,x\geq 0, \tag{3.49}\]
in the same way as we solved (3.45) yields that
\[J_{1}(b)=c_{1}b^{\alpha_{1}}+c_{2}b^{\alpha_{2}},\quad J_{2}(b)=d_{1}b^{ \alpha_{1}}+d_{2}b^{\alpha_{2}},\quad b\geq 0, \tag{3.50}\]
with \(c_{1},c_{2},d_{1},d_{2}\geq 0\), \(c_{1}+c_{2}>0,d_{1}+d_{2}>0\). From (3.49) and (3.50) we can specify the following conditions for \(G\):
\[c_{1}G_{1}^{\alpha_{1}}(x)+d_{1}G_{2}^{\alpha_{1}}(x)=\eta_{1}x, \tag{3.51}\] \[c_{2}G_{1}^{\alpha_{2}}(x)+d_{2}G_{2}^{\alpha_{2}}(x)=\eta_{2}x. \tag{3.52}\]
We will show that \(c_{1}>0,c_{2}=0,d_{1}=0,d_{2}>0\) by excluding the opposite cases.
If \(c_{1}>0,c_{2}>0\), one computes from (3.51)-(3.52) that
\[G_{1}(x)=\left(\frac{1}{c_{1}}(\eta_{1}x-d_{1}G_{2}^{\alpha_{1}}(x))\right)^{ \frac{1}{\alpha_{1}}}=\left(\frac{1}{c_{2}}(\eta_{2}x-d_{2}G_{2}^{\alpha_{2}}( x))\right)^{\frac{1}{\alpha_{2}}},\quad x\geq 0. \tag{3.53}\]
This means that, for each \(x\geq 0\), the value \(G_{2}(x)\) is a solution of the following equation of the \(y\)-variable
\[\left(\frac{1}{c_{1}}(\eta_{1}x-d_{1}y^{\alpha_{1}})\right)^{\frac{1}{\alpha_ {1}}}=\left(\frac{1}{c_{2}}(\eta_{2}x-d_{2}y^{\alpha_{2}})\right)^{\frac{1}{ \alpha_{2}}}, \tag{3.54}\]
with \(y\in\left[0,\left(\frac{\eta_{1}x}{d_{1}}\right)^{\frac{1}{\alpha_{1}}}\wedge\left(\frac{\eta_{2}x}{d_{2}}\right)^{\frac{1}{\alpha_{2}}}\right]\). If \(d_{1}=0\) or \(d_{2}=0\) we compute \(y=y(x)\) from (3.54) and see that \(d_{2}y^{\alpha_{2}}\) or \(d_{1}y^{\alpha_{1}}\), respectively, must be negative either for \(x\) sufficiently close to \(0\) or for \(x\) sufficiently large. Now we need to exclude the case \(d_{1}>0,d_{2}>0\). However, in the case \(c_{1},c_{2},d_{1},d_{2}>0\) equation (3.54) has no solutions because, for sufficiently large \(x>0\), the left side of (3.54) is strictly less than the right side. This inequality follows from Proposition 3.9 proven below.
So, we proved that \(c_{1}\cdot c_{2}=0\) and similarly one proves that \(d_{1}\cdot d_{2}=0\). The case \(c_{1}=0,c_{2}>0,d_{1}>0,d_{2}=0\) can be rejected because then \(J_{1}\) would vary regularly with index \(\alpha_{2}\) and \(J_{2}\) with index \(\alpha_{1}\), which is a contradiction. It follows that \(c_{1}>0,c_{2}=0,d_{1}=0,d_{2}>0\) and in this case we obtain (3.41) from (3.51) and (3.52). \(\Box\)
**Proposition 3.9**: _Let \(a,b,c,d>0\), \(\gamma\in(0,1)\), \(2\geq\alpha_{1}>\alpha_{2}>1\). Then for sufficiently large \(x>0\) the following inequalities are true_
\[\left(ax-(bx-cz)^{\gamma}\right)^{\frac{1}{\gamma}}-dz>0,\qquad z\in\Big{[}0, \frac{b}{c}x\Big{]}, \tag{3.55}\]
\[\left(bx-cy^{\alpha_{1}}\right)^{\frac{1}{\alpha_{1}}}<\left(ax-dy^{\alpha_{2} }\right)^{\frac{1}{\alpha_{2}}},\quad y\in\Big{[}0,\Big{(}\frac{b}{c}x\Big{)}^ {\frac{1}{\alpha_{1}}}\wedge\Big{(}\frac{a}{d}x\Big{)}^{\frac{1}{\alpha_{2}}} \Big{]}. \tag{3.56}\]
**Proof:** First we prove (3.55) and write it in the equivalent form
\[ax\geq(dz)^{\gamma}+(bx-cz)^{\gamma}=:h(z). \tag{3.57}\]
Since
\[h^{\prime}(z)=\gamma\Big{(}d^{\gamma}z^{\gamma-1}-c(bx-cz)^{\gamma-1}\Big{)},\]
\[h^{\prime\prime}(z)=\gamma(\gamma-1)\Big{(}d^{\gamma}z^{\gamma-2}+c^{2}(bx-cz)^{ \gamma-2}\Big{)}<0,\quad z\in\Big{[}0,\frac{b}{c}x\Big{]},\]
the function \(h\) is concave and attains its maximum at point
\[z_{0}:=\theta x:=\frac{bc^{\frac{1}{\gamma-1}}}{d^{\frac{\gamma}{\gamma-1}}+c ^{\frac{\gamma}{\gamma-1}}}x\in\Big{[}0,\frac{b}{c}x\Big{]},\]
which is a root of \(h^{\prime}\). It follows that
\[h(z)\leq h(\theta x)=(d\theta x)^{\gamma}+(bx-c\theta x)^{\gamma}=\big{(}(d\theta)^{\gamma}+(b-c\theta)^{\gamma}\big{)}x^{\gamma}<ax,\]
provided that \(x\) is sufficiently large and (3.55) follows. (3.56) follows from (3.55) by setting \(\gamma=\alpha_{2}/\alpha_{1}\), \(z=y^{\alpha_{1}}\). \(\Box\)
### An example in 3D
In Section 3.3 we proved that in the case \(d=2\) the set \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\) is a singleton. Here we show that this property breaks down when \(d=3\). In the example below we construct a family of generating pairs \((G,Z)\) such that
\[J_{Z^{G(x)}}(b)=x\left(\eta_{1}b^{\alpha_{1}}+\eta_{2}b^{\alpha_{2}}\right), \quad b\geq 0, \tag{3.58}\]
with \(\eta_{1},\eta_{2}>0,2\geq\alpha_{1}>\alpha_{2}>1\) and such that the related generating equations differ from the canonical representation of \(\mathbb{A}_{2}(a,b;\alpha_{1},\alpha_{2};\eta_{1},\eta_{2})\).
**Example 3.10**: _Let us consider a process \(Z(t)=(Z_{1}(t),Z_{2}(t),Z_{3}(t))\) with independent coordinates such that \(Z_{1}\) is \(\alpha_{1}\)-stable, \(Z_{2}\) is \(\alpha_{2}\)-stable, \(Z_{3}\) is a sum of an \(\alpha_{1}\)- and \(\alpha_{2}\)-stable processes. Then_
\[J_{1}(b)=\gamma_{1}b^{\alpha_{1}},\quad J_{2}(b)=\gamma_{2}b^{\alpha_{2}}, \quad J_{3}(b)=\gamma_{3}b^{\alpha_{1}}+\tilde{\gamma}_{3}b^{\alpha_{2}},\quad b \geq 0,\]
_where \(\gamma_{1}>0,\gamma_{2}>0,\gamma_{3}>0,\tilde{\gamma}_{3}>0\). We are looking for non-negative functions \(G_{1},G_{2},G_{3}\) solving the equation_
\[J_{1}(bG_{1}(x))+J_{2}(bG_{2}(x))+J_{3}(bG_{3}(x))=x\left(\eta_{1}b^{\alpha_{1 }}+\eta_{2}b^{\alpha_{2}}\right),\quad x,b\geq 0. \tag{3.59}\]
_It follows from (3.59) that_
\[\gamma_{1}b^{\alpha_{1}}(G_{1}(x))^{\alpha_{1}}+\gamma_{2}b^{\alpha_{2}}(G_{2 }(x))^{\alpha_{2}}+\gamma_{3}b^{\alpha_{1}}(G_{3}(x))^{\alpha_{1}}+\tilde{ \gamma}_{3}b^{\alpha_{2}}(G_{3}(x))^{\alpha_{2}}=x\left[\eta_{1}b^{\alpha_{1} }+\eta_{2}b^{\alpha_{2}}\right],\quad x,b\geq 0,\]
_and, consequently,_
\[b^{\alpha_{1}}\left[\gamma_{1}G_{1}^{\alpha_{1}}(x)+\gamma_{3}G_{3}^{\alpha_{ 1}}(x)\right]+b^{\alpha_{2}}\left[\gamma_{2}G_{2}^{\alpha_{2}}(x)+\tilde{ \gamma}_{3}G_{3}^{\alpha_{2}}(x)\right]=x\left[\eta_{1}b^{\alpha_{1}}+\eta_{2 }b^{\alpha_{2}}\right],\quad x,b\geq 0.\]
_Thus we obtain the following system of equations_
\[\gamma_{1}G_{1}^{\alpha_{1}}(x)+\gamma_{3}G_{3}^{\alpha_{1}}(x)=x \eta_{1},\] \[\gamma_{2}G_{2}^{\alpha_{2}}(x)+\tilde{\gamma}_{3}G_{3}^{\alpha_{ 2}}(x)=x\eta_{2},\]
_which allows us to determine \(G_{1}\) and \(G_{2}\) in terms of \(G_{3}\), that is_
\[G_{1}(x)=\left(\frac{1}{\gamma_{1}}\left(x\eta_{1}-\gamma_{3}G_{3}^ {\alpha_{1}}(x)\right)\right)^{\frac{1}{\alpha_{1}}} \tag{3.60}\] \[G_{2}(x)=\left(\frac{1}{\gamma_{2}}\left(x\eta_{2}-\tilde{\gamma} _{3}G_{3}^{\alpha_{2}}(x)\right)\right)^{\frac{1}{\alpha_{2}}}. \tag{3.61}\]
_The positivity of \(G_{1},G_{2},G_{3}\) means that \(G_{3}\) satisfies_
\[0\leq G_{3}(x)\leq\left(\frac{\eta_{1}}{\gamma_{3}}x\right)^{\frac{1}{\alpha_ {1}}}\wedge\left(\frac{\eta_{2}}{\tilde{\gamma}_{3}}x\right)^{\frac{1}{\alpha_ {2}}},\quad x\geq 0. \tag{3.62}\]
_It follows that \((G,Z)\) with any \(G_{3}\) satisfying (3.62) and \(G_{1},G_{2}\) given by (3.60), (3.61) constitutes a generating pair._
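The non-uniqueness can also be checked numerically. The short Python sketch below (all coefficient values are placeholders chosen only for illustration, not taken from the text) picks a \(G_{3}\) inside the admissible range (3.62), builds \(G_{1},G_{2}\) from (3.60)-(3.61), and verifies that the two sides of (3.59) agree for given \(x,b\geq 0\).

```python
# Placeholder stability indices and coefficients for Example 3.10 (illustrative only)
alpha1, alpha2 = 1.8, 1.3                  # 2 >= alpha_1 > alpha_2 > 1
g1, g2, g3, g3t = 0.7, 0.5, 0.4, 0.6       # gamma_1, gamma_2, gamma_3, tilde{gamma}_3
eta1, eta2 = 1.2, 0.9

def G3(x, s=0.5):
    """Any G_3 obeying the bound (3.62); s in [0, 1] parametrizes a whole family."""
    return s * min((eta1 / g3 * x) ** (1 / alpha1), (eta2 / g3t * x) ** (1 / alpha2))

def G1(x, s=0.5):
    """G_1 from (3.60)."""
    return ((x * eta1 - g3 * G3(x, s) ** alpha1) / g1) ** (1 / alpha1)

def G2(x, s=0.5):
    """G_2 from (3.61)."""
    return ((x * eta2 - g3t * G3(x, s) ** alpha2) / g2) ** (1 / alpha2)

# Check (3.59): J_1(b G_1(x)) + J_2(b G_2(x)) + J_3(b G_3(x)) = x (eta1 b^alpha1 + eta2 b^alpha2)
x, b, s = 2.3, 0.7, 0.5
lhs = (g1 * (b * G1(x, s)) ** alpha1 + g2 * (b * G2(x, s)) ** alpha2
       + g3 * (b * G3(x, s)) ** alpha1 + g3t * (b * G3(x, s)) ** alpha2)
rhs = x * (eta1 * b ** alpha1 + eta2 * b ** alpha2)
print(abs(lhs - rhs))   # of the order of machine precision for any s in [0, 1]
```

Varying \(s\) sweeps out a one-parameter family of admissible \(G_{3}\), making the non-uniqueness in dimension three explicit.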
## 4 Applications
Motivated by the form of canonical representations (3.11) we focus now on the equation
\[dR(t)=(aR(t)+b)\mathrm{d}t+\sum_{i=1}^{g}d_{i}^{1/\alpha_{i}}R(t-)^{1/\alpha_{ i}}\mathrm{d}Z^{\alpha_{i}}(t),\quad R(0)=R_{0},\ t>0, \tag{4.1}\]
where \(a\in\mathbb{R},b\geq 0,d_{i}>0\) and \(Z^{\alpha_{i}}\) is an \(\alpha_{i}\)-stable process with \(2\geq\alpha_{1}>\alpha_{2}>...>\alpha_{g}>1\) and \(g\geq 1\). By Proposition 3.3, (4.1) is the canonical representation of the class \(\mathbb{A}_{g}(a,b;\alpha_{1},...,\alpha_{g};\eta_{1},...,\eta_{g})\) where
\[\eta_{i}:=c_{\alpha_{i}}\cdot d_{i}, \tag{4.2}\]
and \(c_{\alpha_{i}}\) is given by (2.11). After characterizing bond prices in the resulting affine model we investigate the flexibility of (4.1) in fitting risk-free market curves. Our numerical implementations show better performance of (4.1) in comparison to the standard CIR equation (2.26).
Let us start with recalling the concept of pricing based on the semigroup
\[\mathcal{Q}_{t}f(x):=\mathbb{E}[e^{-\int_{0}^{t}R(s)ds}f(R(t))\mid R(0)=x], \quad t\geq 0, \tag{4.3}\]
which was developed in [14]. The formula provides the price at time \(0\) of the claim \(f(R(t))\) paid at time \(t\) given \(R(0)=x\). By Theorem 5.3 in [14] for \(f_{\lambda}(x):=e^{-\lambda x},\lambda\geq 0\) we know that
\[\mathcal{Q}_{t}f_{\lambda}(x)=e^{-\rho(t,\lambda)-\sigma(t,\lambda)x},\quad x \geq 0, \tag{4.4}\]
where \(\sigma(\cdot,\cdot)\) satisfies the equation
\[\frac{\partial\sigma}{\partial t}(t,\lambda)=\mathcal{R}(\sigma(t,\lambda)), \quad\sigma(0,\lambda)=\lambda,\]
and \(\rho(\cdot,\cdot)\) is given by
\[\rho(t,\lambda)=\int_{0}^{t}\mathcal{F}(\sigma(s,\lambda))ds.\]
The functions \(\mathcal{R},\mathcal{F}\) depend on the generator of \(R\), which for (4.1) takes the form
\[\mathcal{A}f(x)=cxf^{\prime\prime}(x)+\Big{[}x\Big{(}a+\int_{(1,+\infty)}(1-v)\tilde{\mu}(\mathrm{d}v)\Big{)}+b\Big{]}f^{\prime}(x)\] \[+\int_{(0,+\infty)}[f(x+v)-f(x)-f^{\prime}(x)(1\wedge v)]x\tilde{\mu}(\mathrm{d}v),\]
where
\[\tilde{\mu}({\rm d}v):=\frac{d_{l}}{v^{1+\alpha_{l}}}dv+...+\frac{d_{g}}{v^{1+ \alpha_{g}}}dv,\quad v>0. \tag{4.5}\]
Recall, if \(\alpha_{1}=2\), then \(c=d_{1}/2\) and \(l=2\). Otherwise \(c=0\) and \(l=1\). Then
\[\mathcal{R}(\lambda) :=-c\lambda^{2}+\Big{[}a+\int_{(1,+\infty)}(1-v)\tilde{\mu}({\rm d }v)\Big{]}\lambda+1+\int_{0}^{+\infty}(1-e^{-\lambda v}-\lambda(1\wedge v)) \tilde{\mu}({\rm d}v),\] \[\mathcal{F}(\lambda) :=b\lambda. \tag{4.6}\]
Using (4.5) yields
\[\mathcal{R}(\lambda) =-c\lambda^{2}+\Big{[}a+\int_{(1,+\infty)}(1-v)\tilde{\mu}({\rm d}v)\Big{]}\lambda+1-\int_{0}^{+\infty}(e^{-\lambda v}-1+\lambda v)\tilde{\mu}({\rm d}v)\] \[\quad-\lambda\int_{(1,+\infty)}(1-v)\tilde{\mu}({\rm d}v)=-c\lambda^{2}+a\lambda+1-\sum_{i=l}^{g}\eta_{i}\lambda^{\alpha_{i}}\] \[=1+a\lambda-\sum_{i=1}^{g}\eta_{i}\lambda^{\alpha_{i}}. \tag{4.7}\]
Application of the pricing procedure above for \(f_{\lambda}\) with \(\lambda=0\) allows us to obtain from (4.4) the prices of zero-coupon bonds. Using the closed form formula (4.7) leads to the following result.
**Theorem 4.1**: _The zero-coupon bond prices in the affine model generated by (4.1) are equal_
\[P(t,T)=e^{-A(T-t)-B(T-t)R(t)}, \tag{4.8}\]
_where \(B\) and \(A\) are such that_
\[B^{\prime}(v) =1+aB(v)-\sum_{i=1}^{g}\eta_{i}B^{\alpha_{i}}(v),\quad B(0)=0, \tag{4.9}\] \[A^{\prime}(v) =bB(v),\quad A(0)=0, \tag{4.10}\]
_with \(\{\eta_{i}\}\) given by (4.2)._
In the case when \(g=1\) and \(\alpha_{1}=2\) equation (4.9) becomes a Riccati equation and its explicit solution provides bond prices for the classical CIR equation. In the opposite case (4.9) can be solved by numerical methods which exploit the tractable form of the function \(\mathcal{R}\) given by (4.7). Note that \(\mathcal{R}\) is continuous, \(\mathcal{R}(0)=1\) and \(\lim_{\lambda\to+\infty}\mathcal{R}(\lambda)=-\infty\). Thus \(\lambda_{0}:=\inf\{\lambda>0:\mathcal{R}(\lambda)=0\}\) is a positive number and
\[\mathcal{R}(\lambda_{0})=0,\quad\mathcal{R}^{\prime}(\lambda_{0})<0. \tag{4.11}\]
The function
\[\mathcal{G}(x):=\int_{0}^{x}\frac{1}{\mathcal{R}(y)}dy,\quad x\in[0,\lambda_{0 }), \tag{4.12}\]
is strictly increasing and its behaviour near \(\lambda_{0}\) can be estimated by substituting \(z=\frac{1}{\lambda_{0}-y}\) in (4.12) and using the inequality
\[(\lambda_{0}-h)^{\alpha}\geq\lambda_{0}^{\alpha}-\alpha\lambda_{0}^{\alpha-1}h,\quad h\in(0,\lambda_{0}),\quad\alpha\in(1,2).\]
For the case when \(\alpha_{1}=2\) this yields for \(x\in[0,\lambda_{0})\)
\[\mathcal{G}(x) =\int_{1/\lambda_{0}}^{1/(\lambda_{0}-x)}\frac{1}{\mathcal{R}( \lambda_{0}-\frac{1}{z})}\cdot\frac{1}{z^{2}}\mathrm{d}z\] \[=\int_{1/\lambda_{0}}^{1/(\lambda_{0}-x)}\frac{1}{z^{2}+a\lambda_ {0}z^{2}-az-\eta_{1}(\lambda_{0}z-1)^{2}-\sum_{i=2}^{g}\eta_{i}z^{2}(\lambda_{ 0}-\frac{1}{z})^{\alpha_{i}}}\ \mathrm{d}z\] \[\geq\int_{1/\lambda_{0}}^{1/(\lambda_{0}-x)}\frac{1}{z^{2}+a \lambda_{0}z^{2}-az-\eta_{1}(\lambda_{0}z-1)^{2}-\sum_{i=2}^{g}\eta_{i}z^{2}( \lambda_{0}^{\alpha_{i}}-\alpha_{i}\lambda_{0}^{\alpha_{i}-1}\frac{1}{z})}\ \mathrm{d}z\] \[=\int_{1/\lambda_{0}}^{1/(\lambda_{0}-x)}\frac{1}{z^{2}(1+a \lambda_{0}-\eta_{1}\lambda_{0}^{2}-\sum_{i=2}^{g}\eta_{i}\lambda_{0}^{\alpha _{i}})+z(2\eta_{1}\lambda_{0}-a+\sum_{i=2}^{g}\alpha_{i}\eta_{i}\lambda_{0}^{ \alpha_{i}-1})-\eta_{1}}\ \mathrm{d}z\] \[=\int_{1/\lambda_{0}}^{1/(\lambda_{0}-x)}\frac{1}{\mathcal{R}( \lambda_{0})z^{2}-\mathcal{R}^{\prime}(\lambda_{0})z-\eta_{1}}\ \mathrm{d}z. \tag{4.13}\]
It follows from (4.13) and (4.11) that
\[\lim_{x\to\lambda_{0}^{-}}\mathcal{G}(x)=+\infty,\]
so \(\mathcal{G}\) is invertible and \(\mathcal{G}^{-1}\) exists on \([0,+\infty)\). Writing (4.9) as
\[B^{\prime}(v)=\mathcal{R}(B(v)),\quad B(0)=0,\]
we see that
\[\frac{d}{dv}\mathcal{G}(B(v))=\frac{1}{\mathcal{R}(B(v))}B^{\prime}(v)=1,\]
and consequently
\[\mathcal{G}(B(v))=v,\quad v\geq 0.\]
Representing \(B(\cdot)\) as the inverse of \(\mathcal{G}(\cdot)\) enables its numerical computation. Hence, with \(\mathcal{G}^{-1}(\cdot)\) at hand we can derive bond prices, spot rates and swap rates in the model generated by (4.1). The dependence of \(\mathcal{G}^{-1}(\cdot)\) on the parameters \(a,\alpha_{1},...,\alpha_{g},\eta_{1},...,\eta_{g}\) plays a central role in the problem of fitting the model to real data. In what follows we present the results of calibration of (4.1) to market quotes of spot rates, Libor and swap rates.
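To make the procedure concrete, the following Python sketch (an illustration under assumed placeholder parameters, not the code used for the calibrations reported below) evaluates \(\mathcal{R}\) of (4.7), computes \(\mathcal{G}\) of (4.12) by numerical quadrature, inverts it by root bracketing to obtain \(B(v)=\mathcal{G}^{-1}(v)\), integrates (4.10) for \(A\), and assembles the bond price (4.8).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative parameters (placeholders, not calibrated values)
a, b = -0.1, 0.02
alphas = np.array([2.0, 1.6])       # 2 >= alpha_1 > alpha_2 > 1
etas   = np.array([0.015, 0.010])   # eta_i = c_{alpha_i} d_i, cf. (4.2)

def R(lam):
    """R(lambda) = 1 + a*lambda - sum_i eta_i lambda^alpha_i, cf. (4.7)."""
    return 1.0 + a * lam - np.sum(etas * lam ** alphas)

lam0 = brentq(R, 1e-12, 1e6)        # lambda_0, the positive zero of R, cf. (4.11)

def G(x):
    """G(x) = int_0^x dy / R(y) for x in [0, lambda_0), cf. (4.12)."""
    return quad(lambda y: 1.0 / R(y), 0.0, x)[0]

def B(v):
    """B(v) = G^{-1}(v), i.e. the solution of (4.9)."""
    return brentq(lambda x: G(x) - v, 0.0, lam0 * (1.0 - 1e-10))

def A(v):
    """A(v) = b * int_0^v B(s) ds, cf. (4.10)."""
    return b * quad(B, 0.0, v)[0]

def bond_price(tau, r0):
    """Zero-coupon bond price P(t, t+tau) for short rate R(t) = r0, cf. (4.8)."""
    return np.exp(-A(tau) - B(tau) * r0)

print(bond_price(5.0, 0.02))        # price of a 5-year zero-coupon bond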
### Calibration of canonical models to market data
Our first calibration procedure is concerned with the spot yield curves of the European Central Bank (ECB) computed from zero-coupon AAA-rated bonds. The maturity grid consists of 33 points starting from 3 months and ending with 30 years. This set was, however, restricted to 13 points to speed up computations. All maturities less than 5 years were included to capture the rapid changes of the curves near zero. A glance at the historical data from 2016 to 2023 reveals significant changes in the shape of the curves appearing after March 2022. The classical CIR model could be fitted relatively well to the earlier curves but performed much worse for the newer ones.
In both cases, however, the addition of new stable noise components resulted in reduction of the calibration error. For a calibration based on maturities \(T_{1}<...<T_{M}\) the fitting error measures a relative distance of the model spot rates
\[y(T_{i}):=\frac{1}{T_{i}}\left(\frac{1}{P(0,T_{i})}-1\right),\quad i=1,2,...,M, \tag{4.14}\]
from the empirical ones \(\hat{y}(T_{i}),i=1,2,...,M\). It is given by the formula
\[Error(a,b,\alpha_{1},...,\alpha_{g},d_{1},...,d_{g}):=\sum_{i=1}^{M}\frac{(y( T_{i})-\hat{y}(T_{i}))^{2}}{\hat{y}^{2}(T_{i})}. \tag{4.15}\]
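For illustration only, a minimal Python version of this objective and its Nelder-Mead minimization could look as follows. The quoted maturities, rates, short rate, fixed stability indices and starting point are all assumptions rather than the actual data or settings; in particular the stability indices (which are also optimized in the reported calibrations) are kept fixed here to keep the sketch short, and clarity is favored over speed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize

# Placeholder market data (not the ECB quotes used in the paper)
T_grid = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
y_hat  = np.array([0.010, 0.011, 0.013, 0.016, 0.021, 0.025])
r0 = 0.01                                    # assumed current short rate

def model_spot_rates(theta, alphas=(2.0, 1.6)):
    """Spot rates y(T_i) of (4.14) for theta = (a, b, eta_1, eta_2); indices fixed here."""
    a, b, *etas = theta
    alphas, etas = np.array(alphas), np.array(etas)
    R = lambda lam: 1.0 + a * lam - np.sum(etas * lam ** alphas)
    lam0 = brentq(R, 1e-12, 1e6)
    G = lambda x: quad(lambda y: 1.0 / R(y), 0.0, x)[0]
    B = lambda v: brentq(lambda x: G(x) - v, 0.0, lam0 * (1.0 - 1e-10))
    A = lambda v: b * quad(B, 0.0, v)[0]
    P = np.array([np.exp(-A(T) - B(T) * r0) for T in T_grid])
    return (1.0 / P - 1.0) / T_grid          # cf. (4.14)

def calibration_error(theta):
    """Relative squared error (4.15); invalid parameter sets are penalized."""
    if theta[2] <= 0 or theta[3] <= 0:       # eta_i must stay positive
        return 1e6
    try:
        y = model_spot_rates(theta)
    except ValueError:                       # e.g. no positive root of R in the bracket
        return 1e6
    return np.sum((y - y_hat) ** 2 / y_hat ** 2)

res = minimize(calibration_error, x0=[-0.1, 0.005, 0.01, 0.005], method="Nelder-Mead")
print(res.x, res.fun)
```

In an actual calibration one would also optimize over the stability indices \(\alpha_{i}\); since each evaluation of the objective involves nested quadrature and root finding, the running times discussed in the remarks on computational methodology below are to be expected.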
For the curve from 10.01.2018 we can see that an already good fit of the CIR model can be substantially improved by replacing the Wiener process with a stable noise of index \(\alpha=1.58\). The effect is especially apparent for small maturities, see Fig. 1. Increasing the number of noise components decreases the fitting error further, but to a lesser extent, see Tab. 1, where GCIR(g) stands for the generalized CIR equation (4.1) with \(g\) components.
For the data from 8.04.2022 the CIR model turned out to be the most efficient among one-dimensional models, though the fitting error is much greater than in the previous example, see Tab. 2 and Fig. 2. Models with higher noise dimension provide, however, better results, starting already from \(g=2\).
\begin{table}
\begin{tabular}{|c|c|l|} \hline Model & Calibration error \(\times 100\) & Stability indices \\ \hline CIR & 0.95141785 & \(\alpha=2\) \\ \hline GCIR(1) & 0.44735953 & \(\alpha=1.58\) \\ \hline GCIR(2) & 0.44505444 & \(\alpha_{1}=2\), \(\alpha_{2}=1.53\) \\ \hline GCIR(3) & 0.44148324 & \(\alpha_{1}=2\), \(\alpha_{2}=1.91\), \(\alpha_{3}=1.42\) \\ \hline GCIR(4) & 0.43932515 & \(\alpha_{1}=2\), \(\alpha_{2}=1.45\), \(\alpha_{3}=1.44\), \(\alpha_{4}=1.29\) \\ \hline GCIR(5) & 0.43918035 & \(\alpha_{1}=2\), \(\alpha_{2}=1.315\), \(\alpha_{3}=1.311\), \(\alpha_{4}=1.308\), \(\alpha_{5}=1.23\) \\ \hline \end{tabular}
\end{table}
Table 1: Error reduction - calibration to the ECB rates from 10.01.2018.
Figure 1: Calibration to the ECB curves from 10.01.2018. View for all/small maturities.
Our second calibration procedure was based on Libor and 6-month swap rates with maturities \(\{T_{i}\},i=1,...,M_{1}\) and \(\{U_{i}\},i=1,...,M_{2}\), respectively. The term structure of interest rates for maturities below one year is represented by Libor quotes, while swap rates correspond to selected maturities from 1 year up to 30 years. A direct extension of (4.15) leads to the calibration error of the form
\[Error(a,b,\alpha_{1},...,\alpha_{g},d_{1},...,d_{g}):=\sum_{i=1}^{M_{1}}\frac{( L(T_{i})-\widehat{L}(T_{i}))^{2}}{\widehat{L}^{2}(T_{i})}+\sum_{i=1}^{M_{2}} \frac{(S(U_{i})-\widehat{S}(U_{i}))^{2}}{\widehat{S}^{2}(U_{i})},\]
where the Libor rates \(L(T_{i})\) are defined as in (4.14) and the swap rates by
\[S(U_{i})=\frac{1-P(0,U_{i})}{\frac{1}{2}\sum_{k=1}^{i}P(0,U_{k})},\quad i=1,...,M_{2}.\]
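A minimal helper mapping model bond prices into these quoted quantities is sketched below; the semi-annual payment grid and the sample bond prices are assumptions made purely for illustration.

```python
import numpy as np

def libor_rate(P_T, T):
    """Simply compounded rate implied by the zero-coupon price P(0, T), as in (4.14)."""
    return (1.0 / P_T - 1.0) / T

def swap_rates(P):
    """Par swap rates S(U_i) on a 6-month grid U_i = 0.5*i, given P[i-1] = P(0, U_i)."""
    annuity = 0.5 * np.cumsum(P)             # (1/2) * sum_{k<=i} P(0, U_k)
    return (1.0 - P) / annuity

# Example with made-up bond prices for U = 0.5, 1.0, 1.5, 2.0 years
P = np.array([0.995, 0.989, 0.982, 0.974])
print(libor_rate(P[0], 0.5), swap_rates(P))
```

These quantities then enter the combined Libor/swap error above in place of the spot rates of (4.15).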
The best one-dimensional model for the data from 14.12.2017 was CIR, but, again, multivariate models generated better results. The passage from \(g=1\) to \(g=2\), i.e. to the \(\alpha\)-CIR model with \(\alpha=1.16\), gave the largest error reduction, which was particularly effective for the swap rates. All of them were pushed closer to the empirical swap curve. The results are presented in Fig. 3 and Tab. 3.
Figure 2: Calibration to the ECB curves from 8.04.2022. View for all/small/large maturities.
#### 4.1.1 Remarks on computational methodology
Our computations were performed in the Python programming language. The calibration error was minimized with the Nelder-Mead algorithm, which turned out to be the most effective among the local minimization algorithms available in the Python libraries. The calibration time, which depends, of course, on the number of noise components, lay in the range of 100-13,000 seconds but often did not exceed 800 seconds. This is in strong contrast to the CIR model, for which closed-form formulas reduce the calibration time to about 2 seconds. We suspect that global optimization algorithms would provide an even better fit, but they were too slow for data with more than several maturities.
\begin{table}
\begin{tabular}{|c|c|l|} \hline Model & Calibration error \(\times 100\) & Stability indices \\ \hline CIR & 24.10280133 & \(\alpha=2\) \\ \hline GCIR(2) & 0.83059934 & \(\alpha_{1}=2\), \(\alpha_{2}=1.99\) \\ \hline GCIR(3) & 0.83055904 & \(\alpha_{1}=2\), \(\alpha_{2}=1.17\), \(\alpha_{3}=1.14\) \\ \hline GCIR(4) & 0.83050323 & \(\alpha_{1}=2\), \(\alpha_{2}=1.35\), \(\alpha_{3}=1.25\), \(\alpha_{4}=1.21\) \\ \hline GCIR(5) & 0.83049801 & \(\alpha_{1}=2\), \(\alpha_{2}=1.53\), \(\alpha_{3}=1.48\), \(\alpha_{4}=1.35\), \(\alpha_{5}=1.23\) \\ \hline \end{tabular}
\end{table}
Table 2: Error reduction - calibration to the ECB rates from 8.04.2022.
Figure 3: Calibration to the Libor and swap curves from 14.12.2017
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Model & Calibration error \(\times 100\) & Libor error \(\times 100\) & Swap error \(\times 100\) & Stability indices \\ \hline CIR & 1.42225593 & 0.84831146 & 0.57394447 & \(\alpha=2\) \\ \hline GCIR(2) & 1.37316050 & 1.00280671 & 0.37035379 & \(\alpha_{1}=2\), \(\alpha_{2}=1.16\) \\ \hline GCIR(3) & 1.37309034 & 1.00987818 & 0.36321216 & \(\alpha_{1}=2\), \(\alpha_{2}=1.94\), \(\alpha_{3}=1.15\) \\ \hline GCIR(4) & 1.37308709 & 1.00989365 & 0.36319344 & \(\alpha_{1}=2\), \(\alpha_{2}=1.99\), \(\alpha_{3}=1.54\), \(\alpha_{4}=1.15\) \\ \hline \end{tabular}
\end{table}
Table 3: Error reduction - calibration to the Libor and swap rates from 14.12.2017.
## 5 Appendix
**Proof of Proposition 2.2:**\((A)\) It was shown in [14, Theorem 5.3] that the generator of a general positive Markovian short rate generating an affine model is of the form
\[\mathcal{A}f(x)= cxf^{\prime\prime}(x)+(\beta x+\gamma)f^{\prime}(x) \tag{5.1}\] \[+\int_{(0,+\infty)}\Big{(}f(x+y)-f(x)-f^{\prime}(x)(1\wedge y) \Big{)}(m(\mathrm{d}y)+x\mu(\mathrm{d}y)),\quad x\geq 0,\]
for \(f\in\mathcal{L}(\Lambda)\cup C_{c}^{2}(\mathbb{R}_{+})\), where \(\mathcal{L}(\Lambda)\) is the linear hull of \(\Lambda:=\{f_{\lambda}:=e^{-\lambda x},\lambda\in(0,+\infty)\}\) and \(C_{c}^{2}(\mathbb{R}_{+})\) stands for the set of twice continuously differentiable functions with compact support in \([0,+\infty)\). Above \(c,\gamma\geq 0\), \(\beta\in\mathbb{R}\) and \(m(\mathrm{d}y)\), \(\mu(\mathrm{d}y)\) are nonnegative Borel measures on \((0,+\infty)\) satisfying
\[\int_{(0,+\infty)}(1\wedge y)m(\mathrm{d}y)+\int_{(0,+\infty)}(1\wedge y^{2}) \mu(\mathrm{d}y)<+\infty. \tag{5.2}\]
The generator of the short rate process given by (2.1) equals
\[\mathcal{A}_{R}f(x)= f^{\prime}(x)F(x)+\frac{1}{2}f^{\prime\prime}(x)\langle QG(x ),G(x)\rangle\] \[+\int_{\mathbb{R}^{d}}\Big{(}f(x+\langle G(x),y\rangle)-f(x)-f^{ \prime}(x)\langle G(x),y\rangle\Big{)}\nu(\mathrm{d}y)\] \[= f^{\prime}(x)F(x)+\frac{1}{2}f^{\prime\prime}(x)\langle QG(x),G (x)\rangle\] \[+\int_{\mathbb{R}}\Big{(}f(x+v)-f(x)-f^{\prime}(x)v\Big{)}\nu_{G (x)}(\mathrm{d}v)\]
where \(f\) is a bounded, twice continuously differentiable function.
By Proposition 5.1 below, the support of the measure \(\nu_{G(x)}\) is contained in \([-x,+\infty)\), thus it follows that
\[\mathcal{A}_{R}f(x)= f^{\prime}(x)F(x)+\frac{1}{2}f^{\prime\prime}(x)\langle QG(x ),G(x)\rangle\] \[+\int_{(0,+\infty)}\Big{(}f(x+v)-f(x)-f^{\prime}(x)(1\wedge v) \Big{)}\nu_{G(x)}(\mathrm{d}v)\] \[+f^{\prime}(x)\int_{(0,+\infty)}\Big{(}(1\wedge v)-v\Big{)}\nu_{G (x)}(\mathrm{d}v)\] \[+\int_{(-\infty,0)}\Big{(}f(x+v)-f(x)-f^{\prime}(x)v\Big{)}\nu_{ G(x)}(\mathrm{d}v)\] \[= \frac{1}{2}f^{\prime\prime}(x)\langle QG(x),G(x)\rangle+f^{ \prime}(x)\left[F(x)+\int_{(1,+\infty)}\Big{(}1-v\Big{)}\nu_{G(x)}(\mathrm{d }v)\right]\] \[+\int_{(0,+\infty)}\Big{(}f(x+v)-f(x)-f^{\prime}(x)(1\wedge v) \Big{)}\nu_{G(x)}(\mathrm{d}v)\] \[+\int_{[-x,0)}\Big{(}f(x+v)-f(x)-f^{\prime}(x)v\Big{)}\nu_{G(x)}( \mathrm{d}v). \tag{5.3}\]
Comparing (5.3) with (5.1) applied to a function \(f_{\lambda}\) with \(\lambda>0\) such that \(f_{\lambda}(x)=e^{-\lambda x}\) for \(x\geq 0\), we get
\[cx\lambda^{2}-(\beta x+\gamma)\lambda\] \[+\int_{(0,+\infty)}\Big{(}e^{-\lambda y}-1+\lambda(1\wedge y)\Big{)} (m({\rm d}y)+x\mu({\rm d}y))\] \[-\frac{1}{2}\lambda^{2}\langle QG(x),G(x)\rangle+\left[F(x)+\int_{ (1,+\infty)}\Big{(}1-v\Big{)}\nu_{G(x)}({\rm d}v)\right]\lambda\] \[-\int_{(0,+\infty)}\Big{(}e^{-\lambda v}-1+\lambda(1\wedge v) \Big{)}\nu_{G(x)}({\rm d}v)\] \[=\int_{[-x,0)}\Big{(}e^{-\lambda v}-1+\lambda v\Big{)}\nu_{G(x)}( {\rm d}v),\quad\lambda>0,x\geq 0. \tag{5.4}\]
Comparing the left and the right sides of (5.4) we see that the left side grows no faster than a quadratic polynomial in \(\lambda\), while the right side grows faster than \(de^{\lambda y}\) for some \(d,y>0\), unless the support of the measure \(\nu_{G(x)}({\rm d}v)\) is contained in \([0,+\infty)\). It follows that \(\nu_{G(x)}({\rm d}v)\) is concentrated on \([0,+\infty)\), hence \((a)\) follows, and
\[cx\lambda^{2}-(\beta x+\gamma)\lambda\] \[-\frac{1}{2}\lambda^{2}\langle QG(x),G(x)\rangle+\left[F(x)+\int_ {(1,+\infty)}\Big{(}1-v\Big{)}\nu_{G(x)}({\rm d}v)\right]\lambda\] \[=\int_{(0,+\infty)}\Big{(}e^{-\lambda y}-1+\lambda(1\wedge y) \Big{)}\left(\nu_{G(x)}({\rm d}y)-m({\rm d}y)-x\mu({\rm d}y)\right),\quad \lambda>0,x\geq 0. \tag{5.5}\]
Dividing both sides of the last equality by \(\lambda^{2}\) and using the estimate
\[\frac{e^{-\lambda y}-1+\lambda(1\wedge y)}{\lambda^{2}}\leq\left(\frac{1}{2}y ^{2}\right)\wedge\left(\frac{e^{-\lambda}-1+\lambda}{\lambda^{2}}\right)\]
we get that the left side of (5.5) converges to \(cx-\frac{1}{2}\langle QG(x),G(x)\rangle\) as \(\lambda\to+\infty\), while the right side converges to \(0\). This yields (2.19), i.e.
\[cx= \frac{1}{2}\langle QG(x),G(x)\rangle,\quad x\geq 0. \tag{5.6}\]
Next, fixing \(x\geq 0\) and comparing (5.3) with (5.1) applied to a function from the domains of both generators and such that \(f(x)=f^{\prime}(x)=f^{\prime\prime}(x)=0\) we get
\[\int_{(0,+\infty)}f(x+y)(m({\rm d}y)+x\mu({\rm d}y))=\int_{(0,+\infty)}f(x+v) \nu_{G(x)}({\rm d}v)\]
for any such a function, which yields
\[\nu_{G(x)}({\rm d}v)\mid_{(0,+\infty)}=m({\rm d}v)+x\mu({\rm d}v),\quad x\geq 0. \tag{5.7}\]
This implies also
\[\beta x+\gamma= F(x)+\int_{(1,+\infty)}\Big{(}1-v\Big{)}\nu_{G(x)}({\rm d}v), \quad x\geq 0. \tag{5.8}\]
\((b)\) Setting \(x=0\) in (5.7) yields
\[\nu_{G(0)}(\mathrm{d}v)\mid_{(0,+\infty)}=m(\mathrm{d}v). \tag{5.9}\]
To prove (2.18), by (5.2) and (5.9), we need to show that
\[\int_{(1,+\infty)}v\nu_{G(0)}(\mathrm{d}v)<+\infty. \tag{5.10}\]
It is true if \(G(0)=0\) and for \(G(0)\neq 0\) the following estimate holds
\[\int_{(1,+\infty)}v\nu_{G(0)}(\mathrm{d}v) =\int_{\mathbb{R}^{d}}\langle G(0),y\rangle\mathbf{1}_{[1,+\infty )}(\langle G(0),y\rangle)\nu(\mathrm{d}y)\] \[\leq\mid G(0)\mid\int_{\mathbb{R}^{d}}\mid y\mid\mathbf{1}_{[1/ |G(0)|,+\infty)}(\mid y\mid)\nu(\mathrm{d}y),\]
and (5.10) follows from (2.5).
\((c)\) (2.20) follows from (5.7) and (5.9). To prove (2.21) we use (2.20), (2.18) and the following estimate for \(x\geq 0\):
\[\int_{0}^{+\infty}(v^{2}\wedge v)\nu_{G(x)}(\mathrm{d}v) =\int_{\mathbb{R}^{d}}(\mid\langle G(x),y\rangle\mid^{2}\wedge \langle G(x),y\rangle)\nu(\mathrm{d}y)\] \[\leq\Big{(}\mid G(x)\mid^{2}\vee\mid G(x)\mid\Big{)}\int_{ \mathbb{R}^{d}}(\mid y\mid^{2}\wedge\mid y\mid)\nu(\mathrm{d}y)<+\infty,\]
In the last line we used (2.2) and (2.5).
\((d)\) It follows from (5.8) and (2.20) that
\[\beta x+\gamma =F(x)+\int_{(1,+\infty)}(1-v)\nu_{G(x)}(\mathrm{d}v)\] \[=F(x)+\int_{(1,+\infty)}(1-v)\nu_{G(0)}(\mathrm{d}v)+x\int_{(1,+ \infty)}(1-v)\mu(\mathrm{d}v),\quad x\geq 0.\]
Consequently, (2.22) follows with
\[a:=\Big{(}\beta-\int_{(1,+\infty)}(1-v)\mu(\mathrm{d}v)\Big{)},\ b:=\Big{(} \gamma-\int_{(1,+\infty)}(1-v)\nu_{G(0)}(\mathrm{d}v)\Big{)},\]
and \(b\geq\int_{(1,+\infty)}(v-1)\nu_{G(0)}(\mathrm{d}v)\) because \(\gamma\geq 0\).
\((B)\) We use (5.8), (2.22) and (5.7) to write (5.1) in the form
\[\mathcal{A}f(x)=cxf^{\prime\prime}(x)+\Big{[}ax+b+\int_{(1,+\infty)}(1-v)\nu_ {G(x)}(\mathrm{d}v)\Big{]}f^{\prime}(x)\]
\[+\int_{(0,+\infty)}[f(x+v)-f(x)-f^{\prime}(x)(1\wedge v)]\nu_{G(x)}(\mathrm{d}v).\]
In view of (5.7) and (5.9) we see that (2.23) is true.
**Proposition 5.1**: _Let \(G:[0,+\infty)\to\mathbb{R}^{d}\) be continuous. If the equation (2.1) has a non-negative strong solution for any initial condition \(R(0)=x\geq 0\), then_
\[\forall x\geq 0\quad\nu\{y\in\mathbb{R}^{d}:x+\langle G(x),y\rangle<0\}=0. \tag{5.11}\]
_In particular, the support of the measure \(\nu_{G(x)}(\mathrm{d}v)\) is contained in \([-x,+\infty)\)._
**Proof:** Let us assume to the contrary, that for some \(x\geq 0\)
\[\nu\{y\in\mathbb{R}^{d}:x+\langle G(x),y\rangle<0\}>0.\]
Then there exists \(c>0\) such that
\[\nu\{y\in\mathbb{R}^{d}:x+\langle G(x),y\rangle<-c\}>0.\]
Let \(A\subseteq\{y\in\mathbb{R}^{d}:x+\langle G(x),y\rangle<-c\}\) be a Borel set separated from zero. By the continuity of \(G\) we have that for some \(\varepsilon>0\):
\[\tilde{x}+\langle G(\tilde{x}),y\rangle<-\frac{c}{2},\quad\tilde{x}\in[(x- \varepsilon)\lor 0,x+\varepsilon],\quad y\in A. \tag{5.12}\]
Let \(Z^{2}\) be a Lévy process with characteristics \((0,0,\nu^{2}(dy))\), where \(\nu^{2}(dy):=\mathbf{1}_{A}(y)\nu(dy)\), and let \(Z^{1}\) be defined by \(Z(t)=Z^{1}(t)+Z^{2}(t)\). Then \(Z^{1},Z^{2}\) are independent and \(Z^{2}\) is a compound Poisson process. Let us consider the following equations
\[dR(t)=F(R(t))dt+\langle G(R(t-)),dZ(t)\rangle,\quad R(0)=x,\] \[dR^{1}(t)=F(R^{1}(t))dt+\langle G(R^{1}(t-)),dZ^{1}(t)\rangle, \quad R^{1}(0)=x.\]
For the exit time \(\tau_{1}\) of \(R^{1}\) from the set \([(x-\varepsilon)\lor 0,x+\varepsilon]\) and the first jump time \(\tau_{2}\) of \(Z^{2}\) we can find \(T>0\) such that \(\mathbb{P}(\tau_{1}>T,\tau_{2}<T)=\mathbb{P}(\tau_{1}>T)\mathbb{P}(\tau_{2}<T)>0\). On the set \(\{\tau_{1}>T,\tau_{2}<T\}\) we have \(R(\tau_{2}-)=R^{1}(\tau_{2}-)\) and therefore
\[R(\tau_{2})=R^{1}(\tau_{2}-)+\langle G(R^{1}(\tau_{2}-)),\triangle Z^{2}(\tau _{2})\rangle<-\frac{c}{2}.\]
In the last inequality we used (5.12). This contradicts the positivity of \(R\). \(\Box\)
|
2308.03888 | Deep neural networks from the perspective of ergodic theory | The design of deep neural networks remains somewhat of an art rather than
precise science. By tentatively adopting ergodic theory considerations on top
of viewing the network as the time evolution of a dynamical system, with each
layer corresponding to a temporal instance, we show that some rules of thumb,
which might otherwise appear mysterious, can be attributed heuristics. | Fan Zhang | 2023-08-04T10:55:56Z | http://arxiv.org/abs/2308.03888v1 | # Deep neural networks from the perspective of ergodic theory
###### Abstract
The design of deep neural networks remains somewhat of an art rather than precise science. By tentatively adopting ergodic theory considerations on top of viewing the network as the time evolution of a dynamical system, with each layer corresponding to a temporal instance, we show that some rules of thumb, which might otherwise appear mysterious, can be attributed heuristics.
## I Introduction and motivation
Artificial neural networks have demonstrated great potential in their ability to learn existing knowledge, and interpolate or even slightly extrapolate to new situations. They, however, lack the ability to understand causation and other logical relations that indicate general intelligence. No matter: much of human activity is experience-based, and so the present incarnation of artificial intelligence algorithms is sufficient to revolutionize society. In particular, one can highlight medicine as one area where vast amounts of experiential enigmas persist, and a doctor's effectiveness is largely restricted by their capacity to learn rather than the ability to understand.
Furthermore, human bodies are complex systems whereby heterogeneity creates self-organization, which an artificial neural network, being itself a complex dynamical system, may be able to (intentionally or unintentionally) emulate. Thereby, a network's learning power may well be tunable to arise out of emergent phenomena that share traits with the processes underlying human body functions. In this way, a network could serve as a crude simulation, and subsequently stand a better chance at accurately and efficiently learning and storing medical knowledge, providing better interpolations and extrapolations than the usually naive and linear manner with which humans try to carry out such tasks.
Despite such great potential and practical successes, a theory of deep learning is still lacking however, so we are in want of general guidelines as to what architecture might perform well. In such a pursuit, it is beneficial to be able to examine deep neural networks from as many perspectives as possible, thereby acquiring a more complete picture. In this brief note, we advocate also adopting the ergodic theory approach by showing how it could offer simple intuitive (although arguably hand-wavy at its present state of deployment) explanations to some properties that we have observed in the behavior of the deep neural networks. We begin by relating some understood aspects of the networks to ergodicity concepts, which could then serve as the conduits connecting the two disciplines.
### As fitting functions to training data
We are just now beginning to understand some aspects of the behavior of deep neural networks. For example, one's experiences with lower dimensional non-convex optimization problems might suggest that there would be an enormous number of local minima, on the cost function surface in parameter space, that trap our optimization procedure and keep it away from the optimal solution. This is in fact not a problem [1], because the local minima are replaced with saddle points in very high dimensions. The Hessian at critical points has many eigenvalues in high parameter dimensions, and it is more likely that they take on both positive and negative values giving us saddle points, so we are less likely to be trapped, and could instead slip out along the downwardly curving directions. But more fundamental than not being trapped is that a good-enough solution is at all on that surface to begin with. In other words,
\(\mathcal{C}_{1}:\)_the network needs to be sufficiently flexible so the surface extends over a large range of cost values._
Another aspect of neural networks that people had expected to be a problem is statistical sample complexity, i.e., having too many parameters, even more than the training data sets, should lead to overfitting, so while fitting to the (possibly noisy) training data may be perfect, interpolation, let alone extrapolation, would not work as the overfitted function may oscillate wildly in-between the training data points on which it is pinned. This behavior is not observed in reality. One explanation is proposed by [2], which rigorously proved that having many parameters being unimportant for the fitting quality is key to the effectiveness of overfitted linear regression. An intuitive understanding offered in the literature (see e.g., [3]) is that there are then many different solutions that work rather well, differing only in the unimportant parameters, thus it becomes much easier to find them. However, we note that for any fitting problem, one could always introduce completely spurious parameters that don't appear in the actual fitting procedure (we just vacuously append them to the formal parameter set to be fitted), and are thus unimportant in the extreme, but this should not change at all the quality of output of that fitting procedure. So this explanation would make sense only if the original procedure is not overfitting to begin with, or in other words, so many of the fitting parameters in the linear regression are unimportant or spurious that the surviving important ones are so few in number that there is no overfitting in reality. This is thus a rather trivial explanation - the overfitting problem is not problematic if for the problem at hand the fitting architecture is nothing but superficially over-parameterized. This does not appear to be the case for deep neural networks though, for changing the network structure, especially the number of neurons, appears to alter the fitting quality. Therefore, an alternative explanation for why overfitting does not present a catastrophic snag may be needed.
To this end, we first note that condition \(\mathcal{C}_{1}\) already helps, as overfitting often occurs in regression problems when the fitting curve is just sufficiently flexible to hit all the training data points if we strong-arm it, but not enough to do so smoothly if noise is present or the choice of basis functions is not appropriate. This rigidity means the fitting curves have to take a large detour in order to make the correct directional changes in order to hit the next training point needing to be fitted, much like how a fast aircraft needs to make a very large divagation in order to make a turn. When we do interpolation or extrapolation, we have to sample from within those large detours and thus obtain terrible results. When the parameter number grows much larger than the training point population, however, the curve becomes extremely flexible, so it could possibly effortlessly (smoothly) transit between training data points, without having to take up some very contorted shape that shoots off to large extremes in the intervening values. This is the case with deep neural networks that are complex and flexible (see e.g., [4; 5] for neural networks as universal approximators). This is not enough though. If the fitting quality to the training data is the only criterion in the cost function, then there is in principle no guarantee that the flexible fitting curve won't still whip around like crazy. That is, the smooth curves exist, but are not necessarily the ones we get if we do not deliberately look for them. To cure this, one typically introduces these so-called regularizers into the cost functions to penalize undesired parameters that could cause instabilities of the output, as one rather contrived approach to achieve \(\mathcal{C}_{2}:\)_the network needs to be sufficiently wellposed so the output doesn't depend so extremely sensitively on initial data that infinitesimally separated initial data points lead to wildly diverging outcomes._
### As dynamical systems
The conditions \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are qualitative and difficult to translate into some enforceable quantitative criteria on network design. To make further progress, we enlist the dynamical systems view of neural networks (see e.g., [6]), whereby each layer of neurons corresponds to a time instance of a dynamical system, so an update propagating through the network according to some given initial data behaves like a time evolution of the system within a state space having the same dimension as the number of neurons in each layer, where the number of layers becomes the number of discrete1 time steps. This dynamical systems perspective has already yielded important insight, e.g., Lemma 1 of [8], which is a well-established result applicable to numerical methods for solving differential equations, essentially imposes a necessary condition for \(\mathcal{C}_{2}\). On the other hand, the same paper also recognizes the importance for the forward propagation to not be too lossy (i.e., the Lyapunov exponents cannot all be too negative, or the state space volume shrinks very quickly, losing the ability to distinguish information in the initial data), essentially offering a caution on what might definitely violate \(\mathcal{C}_{1}\).
Footnote 1: Previous literature would typically go to the continuous limit and adopt differential equation results in order to glean some insight into the behavior and properties of the neural networks (see in particular [7], where some criteria for neural ODEs to develop chaotic behavior are worked out). This strategy is not suitable for us however, since the chaotic nature, or lack thereof, differs quite drastically for discrete and continuous systems, not least because the requirement for no-self-intersection or uniqueness of trajectories is so very much relaxed for discrete evolutions. The (spatial) differentiability also effectively restricts \(K_{\alpha}^{[1]\beta}\) of Eq. (3) below to tridiagonal or other nearly diagonal (depending on the order of derivatives and the finite difference scheme adopted) forms that limit the dependence of a neuron's time evolution to only its nearest neighbours.
In this note, we further enlist the powerful mathematical tool of ergodic theory2 to help tighten the discussion, inching a little more towards necessary and sufficient conditions. In this language, \(\mathcal{C}_{1}\) translates to a desirability to have ergodicity3, while \(\mathcal{C}_{2}\) means we should avoid mixing4. This preference to wedge in-between ergodicity and mixing is easiest to understand in regard to classification tasks, where we need ergodicity to be able to move any initial point almost anywhere else in state space to achieve segregation (see e.g., Fig. 3 in [6]), but not mixing, which causes neighbourhoods to all get mangled together so that no segregation is possible, and classification ends up having to be done point-wise while interpolation becomes impossible.
Footnote 2: The usual ergodic theory studies measure-invariant or state space volume preserving evolutions, or in other words, there is no dissipation in the system, which may not be the case for deep neural networks. But dissipation may not matter directly, because the sum of _all_ Lyapunov exponents determines whether we have dissipation, but chaos (\(\exists\) at least one positive exponent), ergodicity, weakly-mixing (only constant functions are the eigenfunctions corresponding to the exponents as eigenvalues) and entropy (sum of all positive exponents) are really more about the positive ones. Also, as discussed in the main text in the last paragraph, a well-designed neural network should not be too dissipative.
Footnote 3: No invariant subsets that trap orbits and thus prevent effective migration in e.g., the classification problem. Beginning from any set of initial data, the bundle of trajectories will eventually cover almost all of the allowed state space (besides perhaps sets of zero measure; these don't matter for statistical considerations, but if we are aiming to train the neural network to search for very special properties that nearly never occur in a big data set, these will be relevant, and so neural networks cannot produce masterpieces of exceptional quality, although they are efficient at churning out mediocrity). We can see this from Birkhoff's celebrated ergodicity theorem that underlies statistical mechanics, which states that the spatial average equals the temporal average along the dynamical evolution, so any macrostate with non-vanishing measure must eventually be reached.
Footnote 4: Stronger than ergodicity, requiring that memory of the initial data be completely lost as we take many steps, in the sense that the trajectories emerging from any initial data set will have to spread out over the entire state space completely randomly according to the probability distribution of the overall state space, with conditional probability conditioning on the initial data adding no further information. For mixing systems, trajectories from any two initially disjoint initial data sets get meshed together thoroughly and permanently after sufficiently long evolution. Note mere ergodicity could also mesh two bundles of trajectories out of two disjoint sets of initial data, but the meshing can be transitory, meaning the two bundles of trajectories can separate again, and then repeat in the meshing and separating cycle (cf., the claim that merely ergodic systems are not even apparently random) [9]. With mixing though, the meshing is permanent with no subsequent re-separation. In a way, one can visualize the two bundles as meeting (if they meet at all) transversely over and over again with an ergodic-but-not-mixing system, but they intersect and merge tangentially with a mixing system.
We will hereafter refer to this delicately balanced wildness of the dynamical system's orbits as being on the edge of chaos, taking the view that strong mixing is a hallmark for chaotic systems5[9; 10], while merely ergodic systems are usually not chaotic. We caution that this relationship between chaos and ergodicity concepts is not rigorous, and is but a pragmatism that is useful for practical applications. We will adopt it in this note understanding this lack of rigor. We will similarly be loose with terminologies in the interest of brevity, especially when migrating concepts defined for infinite time systems to finite ones, and those defined for measure-preserving systems to more general cases.
## II Network spectroscopy
As always with dynamical systems, turning to spectral considerations tends to simplify computations. The relevant quantities for our considerations, in the case of deep neural networks of finite depth, are the finite time Lyapunov exponents, which measure the tendency of the dynamics to locally drive nearby trajectories apart (note the qualifying "finite time" does not mean they are gauges of the cumulative divergence across the entire network depth, they are still "per layer" quantities as the division by \([j]\) in Eq. 5 below shows), and can be seen as the average local Lyapunov exponents across the available depth of the network. Once we have these numbers, the various dynamical system qualities can be assessed, e.g., whether the system is dissipative (i.e., fails to preserve state space volume in the Liouville sense) depends on whether the sum of the exponents is negative, and particularly relevant for us, the system is chaotic and likely practically mixing if there exists at least one positive exponent (becomes hyperchaotic if there are more than one).
To this end, we first recast the neural network into the dynamical systems language (see e.g., [11]). For definiteness, we assume a basic multi-layer perceptron style network. If we see any particular configuration of a layer (labelled by \([j]\in 0,\cdots N-1\)) in the deep neural network as being a point in a state variable space, with each neuron within occupying a dimension (labelled by \(\alpha\)), then the value being carried by that neuron \(y_{i}^{[j]\alpha}\), as corresponding to some input state \(y_{i}^{[0]\beta}\) (\(i\) indexes the training set), gives the \(\alpha\)th coordinate of that point in the state variable space. We can subsequently regard the layers as time steps in an evolution in this state space, where the discrete evolution is written as
\[y_{i}^{[j+1]\alpha}= \tilde{f}^{[j]\alpha}\left(y_{i}^{[j]\beta}\right) \tag{1}\] \[= y_{i}^{[j]\alpha}+f^{\alpha}\left(y_{i}^{[j]\beta},\mathbf{u}^{[ j]}\right)\Delta t\,, \tag{2}\]
where usually
\[f^{\alpha}\left(y_{i}^{[j]\beta},\mathbf{u}^{[j]}\right)=\sigma_{\beta}^{ \alpha}\left(K_{\delta}^{[j]\beta}y_{i}^{[j]\delta}+\xi^{[j]\beta}\right)\,, \tag{3}\]
with \(\mathbf{u}^{[j]}\) representing the weights \(K_{\delta}^{[j]\beta}\) and bias \(\xi^{[j]\alpha}\) connecting the \([j]\)th layer to the \([j+1]\)th (note these parameters become independent of the input once training is complete). The \(\sigma_{\beta}^{\alpha}\) in Eq. (3) is an activation function for the neurons that's usually of a diagonal form. The expression (1) is a more abstract representation of the dynamical process, which absorbs the layer dependent variations in the parameters \(\mathbf{u}^{[j]}\) into the \([j]\) label of \(\tilde{f}^{[j]\alpha}\).
Assuming for simplicity that the width of the layers does not change, one obtains, for each reference trajectory (e.g., starting from the input of a training set labelled by \(i\)), and an end time \(j\) (usually \(N-1\) but we keep the definition general here; we always fix the initial time at \(j=0\) however), a square matrix
\[\mathbb{M}_{i\,\beta}^{[j]\alpha}\equiv\frac{\partial y_{i}^{[j]\alpha}}{ \partial y_{i}^{[0]\beta}}\,. \tag{4}\]
This matrix, when acting on a perturbation vector of the initial data \(\delta y_{i}^{[0]\beta}\), yields the leading order changes in the state \(y_{i}^{[j]\alpha}\) at the \([j]\)th layer. Therefore the exponential rate of growth of the perturbation would be related to the logarithm of its eigenvalues, rescaled by the number of time steps \([j]\). However, there is no guarantee that \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) is diagonalizable or even square in the more general cases, so one instead goes to the singular values \(\mu_{i}^{[j]\alpha}\) of \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) in its singular value decomposition. The finite time Lyapunov exponents are then defined as6
Footnote 6: The singular values \(\mu_{i}^{[j]\alpha}\) can always be chosen to be positive, since multiplying a negative singular value and the corresponding left singular vector both by minus one will preserve the validity of the decomposition.
\[\lambda_{i}^{[j]\alpha}\equiv\frac{\ln\mu_{i}^{[j]\alpha}}{[j]}\,. \tag{5}\]
To compute such exponents, we begin with the most general expression (1), which after iterations yield
\[y_{i}^{[j]\alpha}=\mathop{\bigcirc}_{q=0}^{j-1}\tilde{f}^{[q]\alpha}\left(y_{i}^{[0]\beta}\right)\,. \tag{6}\]
So when the \(\tilde{f}^{[q]\alpha}\) are all differentiable (because of the need to back-propagate errors in neural networks, their derivatives usually exist, maybe aside from at isolated points like with the ReLU activation function), we can write
\[\frac{\partial y_{i}^{[j]\alpha}}{\partial y_{i}^{[0]\beta}}=\delta^{\alpha}{}_{\gamma_{j}}\,\delta^{\gamma_{0}}{}_{\beta}\prod_{q=0}^{j-1}\tilde{f}_{\gamma_{q}}^{[q]\gamma_{q+1}}\,, \tag{7}\]
where the local Jacobians are
\[\tilde{f}_{\gamma_{q}}^{[q]\gamma_{q+1}}\equiv\frac{\partial\tilde{f}^{[q] \gamma_{q+1}}}{\partial y_{i}^{[q]\gamma_{q}}}\,. \tag{8}\]
If each layer is identical so all the Jacobians are the same, then the method of powers for singular value decomposition suggests that \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) is essentially rank one, dominated by the largest singular value when the neural network is deep. This low rank scenario has minimal expressivity, so it is important that the Jacobians are varied. In this case, there is no simple relationship between the singular values of the individual Jacobians and the final \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\). At best, it is possible to approximately almost-diagonalize \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\) (taking it to a bidiagonal form) with a Householder procedure that turns all the Jacobians upper triangular [12]. Then, the bidiagonal form is further fully diagonalized according to the "chasing the bulge" sweep of e.g. [13], using 2-D rotations that suppress the superdiagonal entries. This entire procedure is rather opaque and blends the entries of the Jacobians in nontrivial manners, therefore, without specializing to specific neural networks, we cannot make definitive statements regarding the finite time Lyapunov exponents. Nevertheless, we could perhaps make some vaguely probabilistic statements on how various aspects of the network architecture (e.g., the number of layers, the width of each layer, the rank of each Jacobian etc) would _likely_ affect the singular values of \(\mathbb{M}_{i\,\beta}^{[j]\alpha}\).
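As a concrete illustration of the above (a sketch with placeholder sizes and untrained random weights, not code from an actual trained network), the following Python snippet propagates a state through a network of the form (2)-(3) with a tanh activation, accumulates the local Jacobians (8) into \(\mathbb{M}\) via the chain rule (7), and reads off the finite time Lyapunov exponents (5) from the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, dt = 8, 30, 0.1                          # width, depth (time steps), step size: placeholders

# Random layer-dependent weights K^{[j]} and biases xi^{[j]} (untrained, for illustration only)
K  = rng.normal(scale=1.0 / np.sqrt(D), size=(N, D, D))
xi = rng.normal(scale=0.1, size=(N, D))

def layer(y, j):
    """One time step of Eq. (2)-(3): y -> y + sigma(K y + xi) * dt, with sigma = tanh."""
    return y + np.tanh(K[j] @ y + xi[j]) * dt

def layer_jacobian(y, j):
    """Local Jacobian (8): identity + diag(sigma'(K y + xi)) K * dt."""
    s = 1.0 - np.tanh(K[j] @ y + xi[j]) ** 2   # derivative of tanh
    return np.eye(D) + (s[:, None] * K[j]) * dt

def finite_time_lyapunov(y0):
    """Finite time Lyapunov exponents (5) along the trajectory started at y0."""
    y, M = y0, np.eye(D)
    for j in range(N):
        M = layer_jacobian(y, j) @ M           # chain rule (7): accumulate Jacobians
        y = layer(y, j)
    mu = np.linalg.svd(M, compute_uv=False)    # singular values of M, cf. (4)
    return np.log(mu) / N

lam = finite_time_lyapunov(rng.normal(size=D))
print("largest exponent:", lam.max(), "  sum of exponents:", lam.sum())
```

Swapping np.tanh for a steeper sigmoid or a ReLU-type nonlinearity, or changing the depth \(N\) and width \(D\), provides a quick numerical probe of how the architectural traits discussed next shift the largest exponent and the exponent sum.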
### Effect of network architectural traits
#### iii.1.1 Depth of network vs. activation function
A major hurdle in applying ergodic theory to deep neural networks is that the concepts of the former are defined in the infinite evolution time limit, or in other words asymptotically, while the actual networks tend to be of finite depth. For the same local spectral characteristics, the ergodicity (or transitivity in the language of topological dynamics) and mixing properties therefore manifest more fully in deeper networks. As a result, greater depth is desirable for networks whose individual layers don't tend to push nearby trajectories apart, i.e., for dynamics that are on the regular side, meaning barely ergodic and far from mixing. On the other hand, for networks whose individual-layer-driven local dynamics have a strong tendency to cause neighboring trajectories to diverge, or in other words, a system that is highly chaotic and deeply in the mixing regime asymptotically, shallower networks are beneficial, as transitivity may have had time to transpire while mixing hasn't (viewing these concepts through the lens of losing dependence on initial data, then mixing is a more complete amnesia that tends to happen further into the evolution than ergodicity, which is only partial loss of memory).
With the prevailing network designs, because the activation functions (some of which are really just distributions) are more or less binary (switching between active or dormant states) by definition, the outputs of each neuron for nearby trajectories that happen to lie on either side of the threshold tend to be very different7. So we usually have the latter case, thus shallower networks may often be desirable (see e.g., [14]). However, the extent to which this is true depends on the activation functions, which are supposed to bring in nonlinearity, and thus should naturally relate to how chaotic the network evolution is. More specifically:
Footnote 7: This variability against initial data is a signature of the nonlinearity introduced by the activation functions. A truly linear activation function will lead to a constant \(\mathbb{M}\) matrix, and the entire network becomes a linear regression, which may not be able to fit the training data (e.g., when two inputs in the training set are related by a simple \(1/2\) rescaling, but their corresponding outputs differ not by the same rescaling). On an intuitive level, such variability across finite and not infinitesimal shifts in initial data may also be conducive to generate the ergodic-but-not-mixing behavior discussed in footnote 4, because different bundles of trajectories experiencing different local Jacobians tends to drive the bundles, but not necessarily the individual trajectories within each bundle, to move apart.
* One category consists of the binary step, sigmoid, or tanh activation functions, which all have small derivatives far from the transition region, but a large derivative within. As a result, their Jacobians contain a delta-function style spike that, on occasion, contributes a very large Frobenius norm to \(\bar{J}\) (we henceforth suppress unimportant indices for brevity) and subsequently to \(\mathbb{M}\) (sans chance cancellations). Because the Frobenius norm of \(\mathbb{M}\) is the L2 norm of its singular values, this then implies large singular values, and thus large (more positive) finite time Lyapunov exponents, and consequently deeper protrusion into the mixing regime. In summary, if one would like to adopt these \(S\)-shaped activation functions, shallower networks are needed if one sets the transition region in these functions to be very narrow.
* The recently more popular ReLU, ELU or swish activation functions behave very differently, as there is no spike, but only an (actual or almost) discontinuity in the Jacobians, so there is no large Frobenius norm at the individual \(\bar{J}\) level, just jumps in their entries when the state of the relevant layer changes. Once the local Jacobians multiply into the overall \(\mathbb{M}\), the entries in that matrix end up jumping frequently (multiplication of many Heaviside functions jumping at different values). Therefore, when we scan across all possibilities (varying \(i\)), the \(\mathbb{M}_{i}\) matrix changes often and \(\lambda_{i}\) tends to explore large ranges, thus has a high probability of hitting large positive values. This effect is likely less pronounced than that of the previous item where large \(\lambda_{i}\) values are more definitively hit (one can alternatively think of these call-option-payoff shaped functions as being less binary so nearby trajectories don't elicit very different activation function outcomes, implying less chaotic propagations), so we predict that ReLU family functions would be more suitable for networks of greater depth.
#### iii.1.2 Width of layers vs. connectivity
The width \(D\) of the layers of a neural network (the number of neurons in each layer, assumed to be constant for the present discussion for brevity) corresponds to the dimension of the state space of the dynamical system, and \(\mathbb{M}\) is a \(D\times D\) matrix. How the finite time Lyapunov exponents vary with \(D\) is heavily dependent on how the connection matrix \(K_{\beta}^{\alpha}\) in Eq. (3), between neurons in adjacent layers, changes when we scale up the state space dimension:
* First consider increasing \(D\) without changing the connectivity between the layers (i.e., the percentage of neurons in the next layer that a particular neuron is connected to, or in other words the percentage of non-vanishing entries in each row of \(K_{\beta}^{\alpha}\)) or the coupling strength (the typical size of the entries in the weighting matrix \(K\)); then the sparsity of \(\mathbb{M}\) and the typical amplitude of entries in it won't change, resulting in the Frobenius norm scaling as \(D^{1}\) since the number of these entries grows as \(D^{2}\). The number of singular values on the other hand only grows as \(D^{1}\), so the average singular value would have to scale as \(\propto\sqrt{D}\). In other words, without changing other features like connectivity and weight ranges etc, a wider neural network tends to possess more positive finite time Lyapunov exponents, and is thus more likely to stray into the undesirable deep mixing regime. Therefore, even without heeding limitations on computational resources, wider networks should be made shallower.
* When we renormalize the coupling strengths so the weighting matrix becomes more like a probability with weights summing up to a constant (e.g., \(\sum_{\alpha}K_{\beta}^{\alpha}=1\)), the situation changes. The Frobenius norm now scales as \(D^{0}\), and the average singular value now must scale as \(1/\sqrt{D}\), so the dynamics become less chaotic as the width of the layers increases. With this approach, one could thus simultaneously increase both the width and the depth of the network without degrading performance, although there doesn't seem to be a strong incentive to do so, given the higher drain on computational resources.
* Another way to curtail chaos while increasing \(D\) is to make the neurons in the wider network more sparsely connected (setting a higher proportion of entries in each row of \(K\) to zero). A particularly interesting physical interpretation that is relevant for a subset of such a strategy is related to path dependence, which by definition has a tendency to help retain reliance on initial data, thereby less mixing. Given a dynamical system, path dependence can be folded into a particular type of higher dimensionality, with many new auxiliary state variables that only depend on one other variable, since they are supposed to just passively record the past states of that variable and carry them forward without modifications. For example, if \(y^{[j+1]\alpha}\) depend not only on \(y^{[j]\beta}\) but also on \(y^{[j-1]\gamma}\), then we can define an additional set of variables \(x^{[j]\beta}\) and simply let them be updated by \(x^{[j]\alpha}=y^{[j-1]\alpha}\) (note each \(x\) node only depends on one \(y\) node in the previous step). This way, the \(j+1\)th step of the \(2D\)-dimensional \(y\oplus x\) combined system depends only on its step \(j\) state, and path dependence is formally removed so a dimensionally expanded version of Eq. (1) remains valid. Mapping such a dynamical system into a neural network, we see that the width of the network can be increased in order to simulate the effect of path dependence, so long as the newly added dimensions don't link up with too many of the existing ones. This could explain why pruning networks sometimes helps when the network is otherwise mixing (suffers from e.g., overfitting problems).
## III Conclusion
In this brief note, we advocated for the enlistment of ergodic theory to help intuit behaviors of deep neural networks. In particular, we argued that a highly effective deep neural network would likely operate on the edge of chaos. This is the easiest to intuit in the case of a classification problem. The corresponding dynamical system evolution of the neural network rearranges the data into orderly and well-segregated tiles (the specifics of the tiling depends on the hypothesis function choice), each representing a class, and so given any input data to be processed, the output will land in one of them. For this to work, we need the flow to be sufficiently flexible to contort any _contiguous_ (the problem needs to be interpolatable for training to be useful) but possibly exotic-looking initial shape representing a class in the initial data space, into a regular simply hypercubic tile (assuming the simplest threshold-based hypothesis function). This means we would need (quasi-)ergodicity, so any initial data point within that shape can eventually arrive at the desired tile site (tiles are larger than infinitesimal neighbourhoods of points, thus strictly speaking we don't need full ergodicity, and subsequently the prefix "quasi"). On the other hand, we must also avoid the more strongly chaotic mixing behavior, otherwise the class shape being convected by the flow will be shredded and thoroughly mixed up with sibling shapes, making clean well-segregated tiles in output space impossible. In other words, we need moderation in the wildness of the dynamical system trajectories, and just-on-the-cusp of chaotic regime seems ideal.
We then discussed some network architectural traits that may be advantageous in terms of finding such an ergodic-but-not-mixing niche. For our generic deliberation, we have stayed with qualitative properties and intuitive guidelines. However, after the implementation and training of an actual neural network, it should not be prohibitively difficult to compute the actual numerical values of the finite time Lyapunov exponents, as there exist mature and efficient numerical routines for computing singular values.
The largest values of these could then serve as a quality control indicator that informs on whether the result of the training is suitable for interpolation and extrapolation. This is potentially an important application, as the tunable parameters of a neural network are so very numerous, giving it the ability to always fit to any training data we present to it, but real life applications require the trained network to respond to new situations in a moderate and controlled manner, rather than jerking around wildly (i.e., we want to avoid the overfitting problem). Previous approaches to ensuring that this is the case are largely empirical, by testing the trained network against additional datasets not included in the training stage. This wastes precious labelled data, and one can never be sure these tests properly cover all plausible new situations. The finite time Lyapunov exponents and their ergodic theory significance could possibly provide a valuable alternative.
One could of course also compute the finite time Lyapunov exponents for the purpose of debugging. For example, if it turns out that the trained network lacks expressivity, then one can imagine that the problem might be that the sum \(\sum_{\alpha}\lambda_{i}^{[N-1]\alpha}\) is too negative, so the dynamics ends up being highly dissipative and collapses onto a fixed point or an attractor, occupying only a corner of the state space, thereby preventing the network, as an approximator function, from taking up a chunk of the codomain.
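As a rough sketch of how such a check could be run in practice, the snippet below computes finite time Lyapunov exponents as the logarithms of the singular values of the input-to-output Jacobian of a trained network, evaluated at a chosen data point; the tiny architecture and the use of PyTorch's Jacobian utility are illustrative assumptions rather than a prescription.

```python
import torch

# A hypothetical small trained network standing in for the real model.
net = torch.nn.Sequential(
    torch.nn.Linear(8, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 8), torch.nn.Tanh(),
    torch.nn.Linear(8, 8),
)

x0 = torch.randn(8)  # the data point around which the dynamics is linearized

# Jacobian of the whole input-to-output map at x0 (here an 8x8 matrix).
J = torch.autograd.functional.jacobian(net, x0)

# Finite time Lyapunov exponents: logarithms of the Jacobian's singular values.
ftle = torch.log(torch.linalg.svdvals(J))

print("largest exponent (chaos/mixing indicator):", ftle.max().item())
print("sum of exponents (dissipation indicator):", ftle.sum().item())
```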
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China grants 12073005, 12021003.
|
2310.10300 | BeatDance: A Beat-Based Model-Agnostic Contrastive Learning Framework
for Music-Dance Retrieval | Dance and music are closely related forms of expression, with mutual
retrieval between dance videos and music being a fundamental task in various
fields like education, art, and sports. However, existing methods often suffer
from unnatural generation effects or fail to fully explore the correlation
between music and dance. To overcome these challenges, we propose BeatDance, a
novel beat-based model-agnostic contrastive learning framework. BeatDance
incorporates a Beat-Aware Music-Dance InfoExtractor, a Trans-Temporal Beat
Blender, and a Beat-Enhanced Hubness Reducer to improve dance-music retrieval
performance by utilizing the alignment between music beats and dance movements.
We also introduce the Music-Dance (MD) dataset, a large-scale collection of
over 10,000 music-dance video pairs for training and testing. Experimental
results on the MD dataset demonstrate the superiority of our method over
existing baselines, achieving state-of-the-art performance. The code and
dataset will be made public available upon acceptance. | Kaixing Yang, Xukun Zhou, Xulong Tang, Ran Diao, Hongyan Liu, Jun He, Zhaoxin Fan | 2023-10-16T11:36:38Z | http://arxiv.org/abs/2310.10300v1 | # BeatDance: A Beat-Based Model-Agnostic Contrastive Learning Framework for Music-Dance Retrieval
###### Abstract
Dance and music are closely related forms of expression, with mutual retrieval between dance videos and music being a fundamental task in various fields like education, art, and sports. However, existing methods often suffer from unnatural generation effects or fail to fully explore the correlation between music and dance. To overcome these challenges, we propose BeatDance, a novel beat-based model-agnostic contrastive learning framework. BeatDance incorporates a Beat-Aware Music-Dance InfoExtractor, a Trans-Temporal Beat Blender, and a Beat-Enhanced Hubness Reducer to improve dance-music retrieval performance by utilizing the alignment between music beats and dance movements. We also introduce the Music-Dance (MD) dataset, a large-scale collection of over 10,000 music-dance video pairs for training and testing. Experimental results on the MD dataset demonstrate the superiority of our method over existing baselines, achieving state-of-the-art performance. The code and dataset will be made publicly available upon acceptance.
## 1 Introduction
Dance, as a significant art form, not only embodies human beauty and emotion but also serves as a crucial medium for cultural inheritance and communication. In recent years, with the rapid advancement of the Internet, the availability and impact of dance videos have witnessed a remarkable surge, providing audiences with diverse and captivating dance experiences. Consequently, the demand for large scale music-dance retrieval has grown exponentially, holding immense practical value for dance practitioners, encompassing areas such as dance education, art creation, and sports training.
Existing approaches for obtaining music/dance from dance/music can be broadly categorized into generation-based and retrieval-based methods. While generation-based
Figure 1: This figure represents music, dance, and beat visualization from top to bottom. Red dots indicate occurrence of dance beats, while purple vertical lines represent occurrence of music beats. It is evident that there exists a certain degree of correspondence between dance beats and music beats.
methods [16, 21, 37, 32] have shown significant progress in recent years, they encounter certain inherent challenges such as unnatural generation effects and limitations in generating diverse data types. For instance, in mainstream Music2Dance methods, only human key points are generated, neglecting factors like background and clothing. Similarly, in Dance2Music approaches [8, 10, 38], models with better performance often generate MIDI scores, overlooking the richness of human voice, background sound, and other audio details. On the other hand, retrieval-based methods naturally address these issues. Although music-dance retrieval has received comparatively less attention, there have been notable advancements [36, 43]; however, the correlation between dance and music has not been fully explored.
Generally, dancers synchronize their body movements with the rhythm of music, expressing their emotions and offering audiences a rich artistic experience, where the "beat" in dance and music serves as the most important information, as illustrated in Figure 1. Motivated by this observation, we propose BeatDance, a novel beat-based model-agnostic contrastive learning framework. In BeatDance, the concept of beat alignment between music and dance is fully utilized to enhance the model's focus on individuals. By incorporating temporal human pose information, representing the music beat, the model becomes more attuned to capturing the nuances of dancers' movements and allows for a stronger connection between the rhythmic elements of dance and music. Hence, the retrieval performance can be significantly increased.
BeatDance comprises three key blocks: the Beat-Aware Music-Dance InfoExtractor, the Trans-Temporal Beat Blender, and the Beat-Enhanced Hubness Reducer. In the InfoExtractor block, pre-trained models and methods are employed to extract rich information, including global features (CLIP [29]/MERT [22]), music beat, and dance beat. The Feature Alignment module is utilized to unify the dimensions of these extracted features. The Beat Blender block involves sending the features to their respective Trans-Temporal Process modules to obtain trans-temporal features. These trans-temporal features are then blended with the global features using the Beat-Enhanced Feature Fusion module, and beat-guided features are obtained through the Beat-Guided Information Extraction module. To address the Hubness problem, the Beat-Enhanced Hubness Reducer block employs a query bank to normalize the similarity matrix during the inference phase, thereby alleviating issues associated with hubness. Additionally, we introduce the Music-Dance (MD) dataset, the first large-scale dataset specifically designed for the music-dance retrieval task. This dataset is sourced from Bilibili [2], a popular video-sharing platform in China, covering the period from May 2018 to September 2023. It comprises 12,000 curated dance-music pairs with over 100,000 likes, encompassing various dance and music genres. Experimental results on the MD dataset demonstrate the superiority of our method compared to existing baselines, achieving state-of-the-art performance.
Our main contributions are summarized as below:
* We introduce BeatDance, a novel beat-based model-agnostic contrastive learning framework that effectively utilizes the beat alignment information between music and dance to enhance the music-dance retrieval task.
* To facilitate the learning of music-dance correlation, BeatDance incorporates the Beat-Aware Music-Dance InfoExtractor, the Trans-Temporal Beat Blender, and the Beat-Enhanced Hubness Reducer. These modules work synergistically to jointly capture and leverage the relationship between music and dance.
* To evaluate and benchmark existing methods, we present the MD dataset, the first large-scale music-dance retrieval dataset. This dataset encompasses a wide range of dance and music genres, providing a comprehensive evaluation platform. Experimental results on the MD dataset demonstrate the superior performance of our proposed method.
#### 2.0.2 Dance2Music
Generating melodious and harmonious music for a given video is a challenging task, and there are two main categories of methods to address this task: non-symbolic based and symbolic based. Non-symbolic methods generate audio directly in the waveform, which is the original form of audio [8, 10, 38]. However, a second of audio waveform covers a significant amount of data due to its high frequency. Even utilizing intermediate audio representations [6, 18, 40], it is still computationally expensive and prone to generating noise. Symbolic methods adopt a symbolic music modeling approach, such as 1D piano-roll [7] and 2D event-based MIDI-like [14] music representations, etc. [25, 31]. However, harmonious resonance of different timbres of instruments is essential to produce beautiful music, but symbolic methods often simplify the timbre, resulting in relatively monotonous generated music. Moreover, given the wealth of available internet data, performing direct retrieval of music from video leads to outstanding outcomes, circumventing the above concerns. Consequently, our paper delves into the exploration of dance-music retrieval.
#### 2.0.3 Music-Dance Retrieval
Music-dance retrieval is a highly practical retrieval task and can be considered a sub-task of video-music retrieval. In recent years, video-music retrieval has made significant progress [5, 13, 28, 34]. Typically, the above methods design a music encoder and a video encoder to project the raw modalities into a high-dimensional feature space, followed by contrastive learning training. However, the video-music retrieval task primarily focuses on the high-level semantic consistency between the two modalities [20, 24], while ignoring the real-time matching requirements between them. Relatively few researchers have paid attention to the field of music-dance retrieval [36, 43], and those who have mostly followed the traditional path of video-music retrieval, neglecting the strong beat correspondence between dance and music and not fully exploring the correlation between music and dance. Moreover, we find there is no suitable large-scale dataset to benchmark music-dance retrieval methods. In this paper, we propose the BeatDance method and the MD dataset to solve these issues.
## 3 Methodology
### Overview
Our study involves two tasks: Music-Dance retrieval and Dance-Music retrieval, as Fig. 3 shows. For Music-Dance retrieval task, we take a piece of music \(m\) as input, and output the matching sequence of dance \(\{d_{1},d_{2}...d_{n}\}\) from our database. For Dance-Music retrieval task, we take a piece of dance \(d\) as input, and output the matching sequence of music \(\{m_{1},m_{2}...m_{n}\}\) from our database.
To better explore correlation between the music and dance modalities, we propose a Beat-Based Model-Agnostic contrastive learning framework called BeatDance, as Fig. 2 shows. BeatDance consists of three blocks: Beat-Aware Music-Dance InfoExtractor, Trans-Temporal Beat Blender, and Beat-Enhanced Hubness Reducer.
In the InfoExtractor block, we aim to acquire richer information and unify feature dimensions. We send music \(m\) and dance \(d\) to it, and then obtain the unified music beat feature \(f^{BM}\), dance beat feature \(f^{BD}\), music global feature \(f^{M}\), and dance global feature \(f^{D}\).
\[\begin{split} f^{D},f^{BD}&=InfoExtractor_{d}(d) \\ f^{M},f^{BM}&=InfoExtractor_{m}(m)\end{split} \tag{1}\]
In the Beat Blender block, we aim to leverage the strong correspondence between the music beat and the dance beat to better explore the correlation between music and dance. We send the unified features \(f^{BM},f^{BD},f^{M},f^{D}\) to it, and then get the beat-enhanced features \(f^{M_{e}},f^{D_{e}}\) and beat-guided features \(f^{M_{g}},f^{D_{g}}\).
\[\begin{split} f^{D_{e}},f^{D_{g}}&= BeatBlender_{d}(f^{D},f^{BD})\\ f^{M_{e}},f^{M_{g}}&= BeatBlender_{m}(f^{M},f^{BM}) \end{split} \tag{2}\]
In Hubness Reducer block, we aim to tackle the Hubness problem in retrieval task by constructing a query bank to normalize similarity matrix. Beat-Enhanced Hubness Reducer operates only during inference stage. We send our similarity matrix \(m_{e}\) to it, and get a normalized matrix \(m_{qbnorm}\):
\[m_{qbnorm}=HubnessReducer(m_{e}) \tag{3}\]
Finally, we can get ranked sequence by \(m_{qbnorm}\) for music-to-dance or dance-to-music retrieval task.
### Beat-Aware Music-Dance InfoExtractor
To tackle the challenge of music-dance retrieval, it is crucial to extract powerful features from both the dance video and the music, enabling the identification of their similarities. However, a naive approach would involve directly using global features extracted from CLIP [29] or MERT [22] for retrieval purposes. While this approach seems straightforward, it has limitations. Pretrained CLIP [29] and MERT [22] features are learned separately from other tasks and primarily focus on capturing global representations of images or music. Consequently, they may fail to capture the specific correlation between music and dance, hindering the effectiveness of music-dance retrieval. To overcome these limitations, we introduce the Beat-Aware Music-Dance InfoExtractor.
#### 3.2.1 DanceInfo Extractor
First, we calculate the CLIP [29] features for dance videos \(d\). Then, we evenly divide the CLIP [29] feature into \(L\) intervals and perform an averaging operation on each interval. Finally, we obtain a dance feature \(f^{d}\in R^{L\times d_{C}}\), which represents the entire dance, where \(d_{C}\) is the dimension of the CLIP feature. We denote the process of obtaining the CLIP feature as \(\Gamma_{C}\):
\[f^{d}=\Gamma_{C}(d) \tag{4}\]
Second, we obtain the human pose sequence by Openpose [4], and then calculate the dance beat \(b^{d}\in R^{F_{d}}\) from the pose sequence by the Dance Beat Detector [32], where \(F_{d}\) represents the number of frames of the dance video. To put it simply, the main idea of the Dance Beat Detector is to consider the moments when the acceleration of movement is 0 as the beat points. We denote the Dance Beat Detector as \(\Phi_{d}\):
\[b^{d}=\Phi_{d}(Openpose(d)) \tag{5}\]
#### 3.2.2 MusicInfo Extractor
First, we calculate the MERT [22] features for music \(m\). Then, we execute the same interval averaging operation as for the CLIP features above, and obtain a music feature \(f^{m}\in R^{L\times d_{M}}\), which represents the entire music, where \(d_{M}\) is the dimension of the MERT [22] feature. We denote the process of obtaining the MERT [22] feature as \(\Gamma_{M}\):
\[f^{m}=\Gamma_{M}(m) \tag{6}\]
Second, we directly obtain the music beat \(b^{m}\in R^{F_{m}}\) by the Music Beat Detector from Librosa [23]. We denote the Music Beat Detector as \(\Phi_{m}\):
\[b^{m}=\Phi_{m}(m) \tag{7}\]
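The two detectors can be sketched roughly as follows; the exact Dance Beat Detector of [32] and the Openpose interface are not reproduced here, so the acceleration-minimum rule, the assumed pose array layout, and the librosa-based music beat extraction below are illustrative assumptions.

```python
import numpy as np
import librosa

def dance_beats(pose_seq):
    # pose_seq: (F_d, J, 2) array of 2D joint positions per frame (assumed layout).
    vel = np.diff(pose_seq, axis=0)               # first-order motion
    acc = np.diff(vel, axis=0)                    # second-order motion
    acc_mag = np.linalg.norm(acc, axis=(1, 2))    # one scalar per frame
    beats = np.zeros(len(pose_seq), dtype=np.int64)
    # Approximate "acceleration is 0" as local minima of the acceleration magnitude.
    for t in range(1, len(acc_mag) - 1):
        if acc_mag[t] < acc_mag[t - 1] and acc_mag[t] < acc_mag[t + 1]:
            beats[t + 1] = 1
    return beats

def music_beats(wav_path, fps=10, duration=10.0):
    y, sr = librosa.load(wav_path, duration=duration)
    _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    beats = np.zeros(int(duration * fps), dtype=np.int64)
    beats[np.clip((beat_times * fps).astype(int), 0, len(beats) - 1)] = 1
    return beats
```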
#### 3.2.3 Feature Alignment
Since the dimensions of \(f^{d}\), \(f^{m}\), \(b^{d}\), and \(b^{m}\) are all different, we need to implement a process of unification.
With respect to beat, \(b^{m}\) or \(b^{d}\) can only take two possible values, 0 or 1, where 1 represents the presence of a beat and 0 represents its absence. Since the beat is not a feature vector, segmenting and averaging as in the above methods would result in a significant loss of information. To solve this problem, we first align the frames per second (fps) of \(b^{m}\) and \(b^{d}\), and then reshape them into \(f^{bm},f^{bd}\in R^{L\times d_{b}}\), respectively, where \(d_{b}\) is the dimension of the beat feature. Additionally, we have processed all dance and music data to have equal durations; see Sec. 4.1 for more details.
Figure 2: Overview of BeatDance. We construct a contrastive learning framework consisting of three blocks: InfoExtractor, Beat Blender, and Beat-Enhanced Hubness Reducer. Specifically, given a music \(m\) and a dance \(d\), the InfoExtractor first returns the aligned global features \(f^{D}\) and \(f^{M}\), and the beat features of dance \(f^{BD}\) and music \(f^{BM}\). Then, the Beat Blender processes them and returns the beat-enhanced features of music \(f^{M_{e}}\) and dance \(f^{D_{e}}\), and the beat-guided features of music \(f^{M_{g}}\) and dance \(f^{D_{g}}\). Finally, we construct two similarity matrices \(m_{e}\) and \(m_{g}\) between the two modalities from the beat-enhanced and beat-guided features. In the training phase, we utilize \(m_{e}\) and \(m_{g}\) to calculate the beat-enhanced loss \(L_{e}\) and beat-guided loss \(L_{g}\) for contrastive learning; in the inference phase, we only send \(m_{e}\) to the Beat-Enhanced Hubness Reducer block, obtain the normalized \(m_{qbnorm}\), and compute the retrieved sequences.
Next, we use a two-layer MLP to adjust the feature dimensions of \(f^{bm},f^{bd},f^{m},f^{d}\), respectively, obtaining aligned features \(f^{BM},f^{BD},f^{M},f^{D}\in R^{L\times d_{u}}\). We denote this process as \(\zeta\):
\[\begin{split} f^{D}&=\zeta_{D}(f^{d})\\ f^{M}&=\zeta_{M}(f^{m})\\ f^{BD}&=\zeta_{BD}(b^{d})\\ f^{BM}&=\zeta_{BM}(b^{m})\end{split} \tag{8}\]
### Trans-Temporal Beat Blender
As shown in Fig. 2, for both music and dance modalities, we extract two different kinds of features. However, simply concatenating or adding these features may not fully utilize their advantages. Moreover, it is important to consider capturing deep correlation between music and dance. To address these issues, we introduce a novel and efficient fusion block named Trans-Temporal Beat Blender.
#### 3.3.1 Trans-Temporal Processing
Effective extraction of temporally spanning features significantly impacts the final results in both dance and music domains. In recent years, transformers have demonstrated remarkable success in extracting such features. Therefore, we employ four multi-layer transformers to construct the Trans-Temporal Process modules for \(f^{D},f^{M},f^{BD},f^{BM}\), respectively, and then obtain the corresponding trans-temporal features \(f^{D_{t}},f^{M_{t}},f^{BD_{t}},f^{BM_{t}}\in R^{L\times d_{u}}\). We denote this process as \(\eta\).
\[\begin{split} f^{D_{t}}&=\eta_{D}(f^{D})\\ f^{M_{t}}&=\eta_{M}(f^{M})\\ f^{BD_{t}}&=\eta_{BD}(f^{BD})\\ f^{BM_{t}}&=\eta_{BM}(f^{BM})\end{split} \tag{9}\]
#### 3.3.2 Beat-Enhanced Feature Fusion
The relatively weak correlation between music and dance features introduces several challenges in retrieval tasks. However, it has been observed that music beats and dance beats exhibit a strong correspondence, indicating a potential avenue to resolve this problem.
To leverage this, an intuitive way is to use element-wise addition, but it fails to effectively capture cross-impact and non-linear relationships between features. Meanwhile, element-wise multiplication precisely addresses this issue [12], but is highly susceptible to noise interference. Thus, we combine the above two methods to achieve Beat-Enhanced Feature Fusion:
\[\begin{split} f^{D_{e}}&=MLP([f^{D_{t}}\oplus f^{ BD_{t}},f^{D_{t}}\otimes f^{BD_{t}}])\\ f^{M_{e}}&=MLP([f^{M_{t}}\oplus f^{BM_{t}},f^{M_{t}} \otimes f^{BM_{t}}])\end{split} \tag{10}\]
where \(f^{M_{e}},f^{D_{e}}\in R^{L\times d_{u}}\), and the \(MLP\) is used to rectify the dimension.
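A minimal PyTorch sketch of this fusion step (Eq. (10)) might look as follows; the depth of the rectifying MLP and its hidden size are our own assumptions.

```python
import torch
import torch.nn as nn

class BeatEnhancedFusion(nn.Module):
    """Fuse a trans-temporal global feature with its beat feature as in Eq. (10)."""
    def __init__(self, d_u=256):
        super().__init__()
        # The MLP rectifies the concatenated 2*d_u features back to d_u.
        self.mlp = nn.Sequential(nn.Linear(2 * d_u, d_u), nn.ReLU(), nn.Linear(d_u, d_u))

    def forward(self, f_t, f_beat_t):
        # f_t, f_beat_t: (batch, L, d_u) trans-temporal global and beat features.
        added = f_t + f_beat_t   # element-wise addition, robust to noise
        mult = f_t * f_beat_t    # element-wise multiplication, captures cross-impact
        return self.mlp(torch.cat([added, mult], dim=-1))
```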
#### 3.3.3 Beat-Guided Information Extraction
After enhancing the beat-related information in \(f^{D_{t}}\) and \(f^{M_{t}}\) through Beat-Enhanced Feature Fusion, we further propose to guide the learning of \(f^{D_{t}}\) and \(f^{M_{t}}\) towards the direction containing beat-related information, utilizing the Beat-Guided Information Extraction module.
We utilize a Multi-Head Attention layer to construct the Beat-Guided Information Extraction module. In this module, the information in \(f^{BM_{t}}\) and \(f^{BD_{t}}\) can be considered a subset of the information in \(f^{M_{t}}\) and \(f^{D_{t}}\). To get the beat-guided feature, we construct Key and Value from \(f^{M_{t}},f^{D_{t}}\), and Query from \(f^{BM_{t}},f^{BD_{t}}\), as in XPool [11]. We take the dance part as an example; the music part is similar:
\[\begin{split} Q_{b}&=\operatorname{LN}\left(f^{BD_{t} }\right)W_{Q}\\ K_{d}&=\operatorname{LN}\left(f^{D_{t}}\right)W_{K} \\ V_{d}&=\operatorname{LN}\left(f^{D_{t}}\right)W_{V} \\ head_{i}&=\operatorname{softmax}\left(\frac{Q_{b}K _{d}^{T}}{\sqrt{D_{p}}}\right)V_{d}\\ \end{split} \tag{11}\]
\[f^{D_{g}}=[head_{1},\dots,head_{h}]W_{O} \tag{13}\]
where \(\operatorname{LN}\) is a Layer Normalization layer, \(W_{Q},W_{K},W_{V},W_{O}\) are projection matrices, \(h\) is the number of heads, and \(f^{D_{g}},f^{M_{g}}\in R^{L\times d_{u}}\).
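The same idea can be sketched compactly with PyTorch's built-in multi-head attention, which folds the explicit projection matrices of Eqs. (11)-(13) into one layer; the head count below follows the implementation details, while the exact LayerNorm placement is an assumption mirroring the equations.

```python
import torch
import torch.nn as nn

class BeatGuidedExtraction(nn.Module):
    """Cross-attention with the beat feature as Query and the global feature as Key/Value."""
    def __init__(self, d_u=256, n_heads=4):
        super().__init__()
        self.ln_q = nn.LayerNorm(d_u)
        self.ln_kv = nn.LayerNorm(d_u)
        self.attn = nn.MultiheadAttention(d_u, n_heads, batch_first=True)

    def forward(self, f_t, f_beat_t):
        # f_t, f_beat_t: (batch, L, d_u)
        q = self.ln_q(f_beat_t)    # the beat feature drives the query
        kv = self.ln_kv(f_t)       # the global feature supplies keys and values
        f_guided, _ = self.attn(q, kv, kv)
        return f_guided
```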
### Beat-Enhanced Hubness Reducer
Despite the previous block's ability to effectively capture the correlation between music and dance, a dance/music clip may always be reasonably matched to multiple music/dance clips, so, as in other retrieval tasks [3], the hubness problem persists. The hubness problem refers to a phenomenon in which certain samples in high-dimensional data become central hubs, attracting a disproportionate number of nearest neighbors, which can lead to decreased retrieval accuracy, biased results, and difficulties in generalization. To tackle this challenge, we design the Beat-Enhanced Hubness Reducer block based on QBNorm [3]. Additionally, the Beat-Enhanced Hubness Reducer is only executed during the inference phase.
Specifically, we take music-dance retrieval as an example. First, we construct a query bank set \(S_{QB}\) from the music in the training/validation/test set. Second, we compute the querybank-test similarity matrix \(m_{qb,t}\in R^{N_{qb}\times N_{t}}\) between the query bank \(S_{QB}\) and the test dance set \(S_{T^{d}}\), where \(N_{t}\) and \(N_{qb}\) represent the sizes of the test set and the query bank, and then collect the top-1 matching dances of all \(m\in S_{QB}\) to construct the hubness-affecting dance set \(S_{H_{d}}\).
Third, we compute the test similarity matrix \(m_{e}\in R^{N_{t}\times N_{t}}\) between the test music set \(S_{T^{m}}\) and the test dance set \(S_{T^{d}}\), and then select all hubness-affected music \(m_{H}\), whose top-1 matching dance is in the hubness-affecting dance set \(S_{H_{d}}\), to construct the hubness-affected music set \(S_{H_{m}}\). Additionally, all similarity matrices are constructed from the corresponding beat-enhanced features. Fourth, we update the test similarity matrix \(m_{e}\):
\[m_{e}(i,j)=\frac{\exp\left(\beta\cdot m_{e}(i,j)\right)}{\mathbf{1}^{T}\exp \left[\beta\cdot m_{qb\cdot d}(j)\right]}\quad\text{ if }music_{i}\in S_{H_{m}} \tag{14}\]
where \(i\) and \(j\) represent the index of \(S_{T^{m}}\) and \(S_{T^{d}}\).
Finally, we rename the new matrix as the QBNorm similarity matrix \(m_{qbnorm}\), from which we can calculate the ranked dances for each music. The Hubness Reducer for dance-music retrieval operates similarly.
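A rough numpy sketch of this procedure (following Eq. (14) and QBNorm [3]) is given below; the inverse-temperature value and the use of only the rank-1 retrievals of the query bank are assumptions consistent with the description above.

```python
import numpy as np

def beat_enhanced_qb_norm(sim_test, sim_qb, beta=20.0):
    """sim_test: (N_t, N_t) music-to-dance similarities from beat-enhanced features.
    sim_qb:   (N_qb, N_t) query-bank-to-test-dance similarities.
    beta is an assumed inverse-temperature hyperparameter."""
    # Hubness-affecting dances: dances retrieved at rank 1 by query-bank probes.
    hub_dances = set(np.argmax(sim_qb, axis=1).tolist())
    normalizer = np.exp(beta * sim_qb).sum(axis=0)   # column sums over the query bank
    out = sim_test.copy()
    for i in range(sim_test.shape[0]):
        if int(np.argmax(sim_test[i])) in hub_dances:   # music i is hubness-affected
            out[i] = np.exp(beta * sim_test[i]) / normalizer
    return out
```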
### Training and Inference
#### 3.5.1 Training
During the training stage, we perform contrastive learning, which encourages positive pairs to have a high similarity value and negative pairs a low one. \(f^{D_{e}},f^{M_{e}}\) from the Beat-Enhanced Feature Fusion are used for computing the beat-enhanced similarity matrix \(m_{e}\). Then, we obtain the Beat-Enhanced Loss \(\mathcal{L}_{e}\) from \(m_{e}\) via the InfoNCE [27] loss, and we perform a similar operation to obtain the Beat-Guided Loss \(\mathcal{L}_{g}\):
\[\mathcal{L}_{e}^{m\to d}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{s \left(f_{i}^{M_{e}},f_{i}^{D_{e}}\right)\cdot\lambda}}{\sum_{j=1}^{B}e^{s\left( f_{i}^{M_{e}},f_{j}^{D_{e}}\right)\cdot\lambda}} \tag{15}\]
\[\mathcal{L}_{g}^{m\to d}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{e^{s \left(f_{i}^{M_{g}},f_{i}^{D_{g}}\right)\cdot\lambda}}{\sum_{j=1}^{B}e^{s\left( f_{i}^{M_{g}},f_{j}^{D_{g}}\right)\cdot\lambda}} \tag{16}\]
\[\mathcal{L}^{m\to d}=\mathcal{L}_{e}^{m\to d}+\beta\times\mathcal{L}_{g}^{m \to d} \tag{17}\]
where \(s(m,d)\) represents cosine similarity, \(B\) is the batch size, \(\lambda\) is the temperature parameter, and \(\beta\) is a weighting hyperparameter. \(\mathcal{L}^{d\to m}\) is computed symmetrically, and \(\mathcal{L}=\mathcal{L}^{m\to d}+\mathcal{L}^{d\to m}\) is used for training our model.
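A condensed PyTorch sketch of the symmetric training objective of Eqs. (15)-(17) is shown below; it assumes each feature has already been pooled to a single vector per clip, which is an assumption about a detail the text leaves implicit.

```python
import torch
import torch.nn.functional as F

def beatdance_loss(f_m_e, f_d_e, f_m_g, f_d_g, scale, beta=0.4):
    """Symmetric InfoNCE over beat-enhanced (e) and beat-guided (g) features.
    f_*: (B, d) pooled clip features; scale is the temperature lambda."""
    def info_nce(a, b):
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() * scale                    # cosine similarity * lambda
        targets = torch.arange(a.size(0), device=a.device)
        # music-to-dance and dance-to-music terms
        return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

    return info_nce(f_m_e, f_d_e) + beta * info_nce(f_m_g, f_d_g)
```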
#### 3.5.2 Inference
During the inference stage, we only construct the similarity matrix \(m_{e}\) from \(f^{D_{e}}\) and \(f^{M_{e}}\). Since Beat-Guided Information Extraction is designed solely to guide \(f^{D_{t}},f^{M_{t}}\) towards the direction that contains the \(f^{BD_{t}},f^{BM_{t}}\) information during the training phase, it is unnecessary to consider its influence during the inference phase. Then, we send \(m_{e}\) to the Beat-Enhanced Hubness Reducer to get a normalized matrix \(m_{qbnorm}\). Finally, we can calculate a ranked sequence from \(m_{qbnorm}\) for the music-dance or dance-music retrieval task.
## 4 Experiment
### Dataset
To evaluate and benchmark existing methods in the music-dance retrieval task, we introduce the MD dataset, which is the first large-scale open-source dataset for this task. Fig. 3 illustrates some examples of this dataset. The dataset is sourced from Bilibili [2], the most popular video-sharing platform among young people in China. To ensure the quality and popularity of the dance videos, we collect videos uploaded between May 2018 and September 2023 in the dance category with over 100,000 likes.
The Music-Dance dataset encompasses various types of dance videos, including dance performances, tutorials, and
Figure 3: BeatDance can effectively capture the underlying correspondence between dance and music. Given a piece of music/dance, the topk retrieved musics/dances exhibit a high degree of semantic similarity, such as in terms of dance/music style, emotional characteristics and etc.. It also demonstrates the strong expressive capability of BeatDance in feature extraction. Additionally, demonstration video of more experimental results can be found at YouTube-URL.
practices in daily life. Through meticulous manual selection, we curate approximately 12,000 high-quality dance performance videos. The dataset is randomly shuffled and split into training, validation, and test sets in an 8:1:1 ratio. Statistical analysis of the dataset reveals that it contains both single-person and group dance performances, covering a wide range of dance genres such as Ballet, Contemporary, Hip-hop, Jazz, Tap, Latin, and more. It also includes a diverse selection of music genres, including Pop, Rock, Hip-hop, Electronic, Jazz, and others. Moreover, in addition to the dance and music video data, we provide dance beats extracted by Openpose [4] and music beats extracted by Librosa [23]. These beats are uniformly sampled at 10 frames per second (fps) and represented as binary values (1 for presence of beat, 0 for absence of beat). To ensure consistency in the analysis and evaluation of beat-based approaches, we consider a consistent 10-second segment from the middle of each dance video in our task. This ensures that all videos in the dataset have the same duration, allowing us to attribute any performance improvements solely to the presence of beats, independent of duration information.
### Evaluation
Similar to other multi-modal retrieval tasks, such as text-video retrieval [11, 41] and video-music retrieval [13, 34], we introduce Recall@K (higher is better) and Mean/Median Rank (lower is better) as evaluation metrics. To explore whether our method fully utilizes beats, we also introduce BS@K [32] (the averaged beat similarity between the ground truth and the top-k query results). We take dance-music retrieval as an example:
\[BS_{d\to m}=\frac{1}{|B^{m}|}\sum_{t^{m}\in B^{m}}\exp\left\{-\frac{\min_{t^{ d}\in B^{d}}\left\|t^{d}-t^{m}\right\|^{2}}{2\sigma^{2}}\right\} \tag{18}\]
where \(t^{m}\) denotes the moments at which music beats occur and \(t^{d}\) those at which dance beats occur, and \(B^{m}\) and \(B^{d}\) are the corresponding sets of beat times. Likewise, \(BS_{m\to d}\) is defined symmetrically in music-dance retrieval.
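A direct reading of Eq. (18) in numpy could look as follows; the value of \(\sigma\) is an assumed placeholder since the text does not state it.

```python
import numpy as np

def beat_similarity(beats_a, beats_b, sigma=1.0):
    """Eq. (18): for each beat time in beats_a (music beats in BS_{d->m}), a Gaussian
    of the squared distance to the nearest beat in beats_b, averaged over beats_a."""
    beats_a = np.asarray(beats_a, dtype=float)
    beats_b = np.asarray(beats_b, dtype=float)
    if len(beats_a) == 0 or len(beats_b) == 0:
        return 0.0
    d2 = (beats_a[:, None] - beats_b[None, :]) ** 2
    return float(np.mean(np.exp(-d2.min(axis=1) / (2.0 * sigma ** 2))))

def bs_at_k(gt_beat_times, topk_retrieved_beat_times, sigma=1.0):
    # BS@K: beat similarity averaged over the top-k retrieved results.
    return float(np.mean([beat_similarity(gt_beat_times, b, sigma)
                          for b in topk_retrieved_beat_times]))
```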
### Implementation Detail
In our experiments, we employ CLIP's ViT-B/32 [29] image encoder and MERT-95M [22] as the base feature extractors. We initialize all encoder parameters using their pre-trained weights. The base features from CLIP [29] and MERT [22] are precomputed, and the number of intervals \(L\) for the base features is set to 10. The beat feature dimension \(d_{b}\) is set to 10. The feature unification dimension size is set to \(d_{u}\)=256. We initialize our logit scaling parameter \(\lambda\) using the value from the pre-trained CLIP [29] model. For all transformers, we use a hidden dimension of 256, 6 layers, 4 heads, and a dropout rate of 0.3 (except for Beat-Guided Information Extraction, which uses a dropout rate of 0.3). During training, we set the batch size to 32 and the learning rate for the model parameters to 1e-5. We optimize our model for 150 epochs using the AdamW optimizer with a weight decay of 0.2. The learning rate is decayed using a cosine schedule. We use the training set to construct the query bank. The loss weight \(\beta\) is set to 0.4 for contrastive learning.
### Comparison
To evaluate the performance of our method, we compared it with recent related works. Due to the limited availability of open-source code for video-music retrieval, let alone music-dance retrieval, we only reproduced the classic algorithms MVPt [34] and CBVMR [13] in this field. Additionally, we migrated models from other multimodal retrieval fields, such as XPool [11] and MQVR [41] in text-video retrieval and SCFEM [26] in image-music retrieval. Specifically, for MVPt, since the music encoder (DeepSim) used in MVPt [34] is not open-sourced, we replaced it with MERT [22]. For CBVMR, due to its age, we replace its video and music encoders with CLIP [29] and MERT [22], respectively, to ensure fairness. For XPool, we use the averaged MERT [22] feature of the music instead of the CLIP feature of text. For MQVR, we first obtain the MERT [22]/CLIP [29] feature, and then uniformly divide it into 5 intervals; the averaged MERT [22]/CLIP [29] feature of each
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Music \(\Longrightarrow\) Dance} & \multicolumn{2}{c}{Dance \(\Longrightarrow\) Music} \\ \cline{2-5} & Recall@1/10/50/100\(\uparrow\) & MeanR/MedianR\(\downarrow\) & Recall@1/10/50/100\(\uparrow\) & MeanR/MedianR\(\downarrow\) \\ \hline CBVMR & 0.83/6.35/20.71/30.61 & 245.5/333.91 & 1.24/6.11/20.79/31.02 & 236.5/333.64 \\ SCFEM & 0.99/7.76/23.10/35.81 & 196.0/306.05 & 0.91/8.25/23.27/35.31 & 192.0/305.65 \\ MQVR & 1.65/8.91/26.90/39.60 & 152.5/263.80 & 1.24/9.49/26.90/39.11 & 152.0/265.36 \\ MVPt & 1.57/8.25/26.24/38.78 & 162.5/258.15 & 1.23/9.46/27.81/39.42 & 166.0/254.81 \\ XPool & 1.57/9.41/27.72/41.50 & 148.0/248.79 & 1.49/8.83/28.55/41.58 & 148.0/253.80 \\ \hline BeatDance & **2.48/13.12/32.26/44.06** & **128.0/239.81** & **2.97/13.04/32.34/44.55** & **127.0/238.77** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons with state-of-the-art results on M-D dataset for music-to-dance and dance-to-music retrieval. Compared models include: CBVMR [13], XPool [11], SCFEM [26], MQVR [41], MVPt [34].
interval represents one query in the multi-query scenario of MQVR. For SCFEM, we average the feature obtained by CLIP [29] over the time dimension to replace the original image feature.
As shown in Tab. 1, our proposed BeatDance obviously outperforms all baseline methods, including CB-VMR, SCFEM, MQVR, MVPt, and XPool, by a significant margin across various evaluation metrics. Specifically, in the Music-to-Dance task, BeatDance achieves superior performance compared to other models, with Recall@1/10/50/100 values of 2.48/13.12/32.26/44.06, respectively. Additionally, BeatDance obtains lower MeanR/MedianR values, specifically 128.0/239.81. These results indicate that BeatDance significantly improves the accuracy of retrieving dance videos given music inputs. Similarly, in the Dance-to-Music task, BeatDance continues to outperform the baseline models. It achieves a Recall@1/10/50/100 of 2.97/13.04/32.34/44.55, surpassing all other models. The MeanR/MedianR values for BeatDance in this task are 127.0/238.77, which are lower compared to the baseline models. The superior performance of BeatDance can be attributed to its ability to capture and learn the correlation between music and dance videos more effectively. By considering beat alignment, BeatDance leverages the temporal structure and rhythmic patterns present in both the music and dance modalities. This allows the model to better align and synchronize the representations of music and dance, resulting in improved retrieval performance. The significant improvements achieved by BeatDance across all evaluation metrics establish its superiority over existing methods and position it as the current state-of-the-art (SOTA) approach in the field of music-dance retrieval.
### Ablation Study
#### 4.5.1 Trans-Temporal Processing
To better capture the trans-temporal information in music- and dance-related features, we propose Trans-Temporal Processing. As shown in Tab. 2, its introduction brings a clear improvement in Recall@1/10/50/100 (+2.55 on average) and Median/Mean Rank (+40.40 on average), which demonstrates its effectiveness.
#### 4.5.2 Beat-Enhanced Feature Fusion
To better enhance the global information with the corresponding beat information, we propose Beat-Enhanced Feature Fusion. As shown in Tab. 2, its introduction brings a clear improvement in Recall@1/10/50/100 (+1.80 on average) and Median/Mean Rank (+18.03 on average), which demonstrates its effectiveness.
#### 4.5.3 Beat-Guided Information Extraction
To better guide the training of music- and dance-related features towards directions containing beat information, we propose Beat-Guided Information Extraction. As shown in Tab. 2, its introduction brings a clear improvement in Recall@1/10/50/100 (+2.53 on average) and Median/Mean Rank (+17.34 on average), which demonstrates its effectiveness.
#### 4.5.4 Beat-Enhanced Hubness Reducer
To address the hubness problem, we design the Beat-Enhanced Hubness Reducer. As shown in Tab. 2, its introduction brings an improvement in Recall@1/10/50/100 (+0.47 on average) and Median/Mean Rank (+3.18 on average), which demonstrates its effectiveness.
#### 4.5.5 Pose Estimator
In the process of generating dance beats, pose estimation plays an important role, and we explore two popular methods, Openpose and Mediapipe, as our pose estimators. From Tab. 2, it can be observed that the performance based on Openpose is improved in Recall@1/10/50/100 (+1.28 on average) and Median/Mean Rank (+3.55 on average) compared to Mediapipe. This is because our dataset includes both multi-person and single-person dances, and in multi-person dances, Mediapipe focuses only on one dancer, neglecting the influence of others.
#### 4.5.6 Fusion Mode
It is well known that beat information is crucial in dance and music. How to effectively integrate beat information with the related music and dance features is an important problem. Thus, we also explore other feature fusion methods in addition to BeatDance. In Tab. 3, Beat Loss represents the separate contrastive learning training of global features and beat features after passing through the Trans-Temporal Process module. Beat-Enhanced Process (B) denotes the process in which global features and beats are first processed through the Beat-Enhanced Feature Fusion module, followed by the Trans-Temporal Process module, and then subjected to contrastive learning training. Beat-Enhanced Feature Fusion (A) refers to the process where global features and beats are initially processed through their respective Trans-Temporal Process modules, followed by the Beat-Enhanced Feature Fusion module, and subsequently undergo contrastive learning training. Beat-Guided Information Extraction signifies the process in which global features and beats are processed through their respective Trans-Temporal Process modules, followed by the Beat-Guided Information Extraction module, before undergoing contrastive learning
training. BeatDance here represents the BeatDance model without the Beat-Enhanced Hubness Reducer module. As Tab. 3 shows, BeatDance significantly outperforms the other fusion methods on all metrics, and it can be observed that the standalone use of Beat-Guided Information Extraction or Beat-Enhanced Feature Fusion yields inferior results.
### Model Analysis
#### 4.6.1 Beat Similarity Analysis
BeatDance integrates beat information and global features, naturally enhancing the correspondence between dance and music at the beat level. BS@K can be an effective metric for evaluating whether beat information is effectively utilized. As Tab. 4 shows, the introduction of BeatDance results in an improvement in BS@K for all models. It is worth noting that the limited improvement can be attributed to two factors: the minor role of the beat in the retrieval task and the inherent limitations of the computational formula (18). Even when provided with a beat array consisting entirely of ones, the averaged beat similarity between it and all ground truths can still reach 69.86%.
#### 4.6.2 Model Agnostic Analysis
It is worth noting that BeatDance is essentially a framework with good generality, which is easy to extend to other models. Therefore, we conduct extra experiments on CBVMR and XPool. As shown in Tab. 4, BeatDance greatly improves the performance of all models, demonstrating its strong generalizability.
#### 4.6.3 Downstream Task Analysis
To validate the expressive power of the feature vectors generated by BeatDance, we introduce three classification tasks: music genre classification, music emotion classification, and music instrument classification. We first employ the well-known PANN [17] to assign genre, mood, and instrument labels to the music for the classification tasks. There are a total of 7 emotion categories, 23 genre categories, and 18 instrument
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & Recall@1/10/50/100\(\uparrow\) & Mean/Median\(\updownarrow\) & BS@1/5 \(\uparrow\) \\ \hline CBVMR & 0.83/6.35/20.71/30.61 & 245.5/333.91 & 85.11/84.97 \\ CBVMR+ & **0.99/8.33/23.93/37.21** & **179.5/276.02** & **85.32/85.13** \\ \hline XPool & 1.57/9.41/27.72/41.50 & 148.0/248.79 & 85.11/85.00 \\ XPool+ & **2.15/10.40/29.21/42.57** & **140.5/239.08** & **85.26/85.04** \\ \hline Baseline & 2.15/11.87/29.29/42.33 & 142.0/256.28 & 85.15/85.16 \\ Baseline+ & **2.56/11.88/31.60/44.22** & **129.0/234.11** & **85.30/85.18** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Exploring BeatDance effect in music-to-dance retrieval.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Music \(\Longrightarrow\) Dance} & \multicolumn{2}{c}{Dance \(\Longrightarrow\) Music} \\ \cline{2-5} & Recall@1/10/50/100\(\uparrow\) & MeanR/MedianR\(\downarrow\) & Recall@1/10/50/100\(\uparrow\) & MeanR/MedianR\(\downarrow\) \\ \hline \hline \multirow{2}{*}{
\begin{tabular}{l} Baseline \\ \end{tabular} } & 2.15/11.87/29.29/42.33 & 142.0/256.28 & 2.48/12.05/28.38/41.50 & 145.0/257.73 \\ w/o Trans-Temporal Processing & **2.56**/11.80/27.81/40.43 & 166.0/281.25 & 2.56/11.88/27.48/39.93 & 163.5/284.41 \\ w/o Beat-Enhanced Feature Fusion & 1.98/12.21/29.13/42.41 & 140.0/258.32 & 2.39/11.37/29.62/41.34 & 147.0/260.39 \\ w/o Beat-Guided Information Extraction & 2.15/10.23/28.30/41.91 & 149.5/252.73 & 2.23/10.81/27.48/41.50 & 147.0/253.71 \\ w/o Hubness Reducer & 2.48/12.29/32.01/43.89 & 136.5/240.21 & 2.48/12.71/30.86/44.31 & 130.0/239.58 \\ Openpose\(\rightarrow\)Mediapipe & 2.15/12.05/30.61/43.47 & 135.0/239.94 & 2.89/11.72/28.55/43.14 & 134.0/238.82 \\ \hline Full BeatDance & 2.48/**13.12/32.26/44.06** & **128.0/239.81** & **2.97/13.04/32.34/44.55** & **127.0/238.77** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effect of each component of BeatDance on M-D datasets for music-to-dance and dance-to-music retrieval.
Figure 4: t-SNE [39] visualization of learned features. 2000 randomly sampled data pairs are chosen. It can be observed that music representations and dance representations exhibit a remarkably high degree of similarity in their distribution.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Method & Genre & Instrument & Mood \\ \hline \hline \multicolumn{1}{c}{CBVMR} & 50.58 & 64.60 & 61.14 \\ \multicolumn{1}{c}{SCFEM} & 53.88 & 69.22 & 61.22 \\ \multicolumn{1}{c}{MVPt} & 54.37 & 67.90 & 62.05 \\ \multicolumn{1}{c}{XPool} & 54.54 & 66.91 & 62.05 \\ \hline \hline \multicolumn{1}{c}{BeatDance} & **57.10** & **70.38** & **63.86** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison with others on classification task, including CBVMR [13], XPool [11], SCFEM [26], MVPt+ [34].
categories. Then, we append two MLP layers to the features extracted by each model for the subsequent classification. Accuracy is used as the evaluation metric. As shown in Tab. 5, BeatDance outperforms the other models significantly in all three tasks, demonstrating its strong information extraction capabilities.
#### 4.6.4 Feature Distribution Analysis
To explore the feature representation capabilities of BeatDance, we randomly select 2000 instances from our dataset and obtain music representations and dance representations after processing them with BeatDance. Firstly, we apply K-Means clustering to assign cluster labels to all representations. Subsequently, we employ t-SNE [39] for dimensionality reduction, projecting the high-dimensional features into a two-dimensional space. Finally, we visualize all 2000 data points. Fig. 4(a) and Fig. 4(b) illustrate the visualizations of the music representations and video representations, respectively. Remarkably, it can be observed that the music representations and dance representations exhibit a high degree of similarity in their distributions, providing evidence for the efficient feature representation capabilities of BeatDance.
## 5 Conclusion
In this work, we have introduced BeatDance, a novel beat-based model-agnostic contrastive learning framework designed to better explore correlation between music and dance. In BeatDance, the Beat-Aware Music-Dance InfoExtractor, the Trans-Temporal Beat Blender, and the Beat-Enhanced Hubness Reducer are proposed to jointly facilitate the music-dance retrieval performance. To facilitate future research endeavors, we have also introduced the M-D dataset, the first large-scale open-source dataset specifically curated for the music-dance retrieval task. This dataset encompasses a diverse range of dance and music genres, providing a valuable resource for researchers in this field. Our experimental results have demonstrated the superiority of our proposed method compared to other baselines in the music-dance retrieval domain. We believe that this pioneering work will inspire and encourage more researchers and practitioners to explore and advance the capabilities of music-dance retrieval systems.
|
2301.10455 | Rate-Perception Optimized Preprocessing for Video Coding | In the past decades, lots of progress have been done in the video compression
field including traditional video codec and learning-based video codec.
However, few studies focus on using preprocessing techniques to improve the
rate-distortion performance. In this paper, we propose a rate-perception
optimized preprocessing (RPP) method. We first introduce an adaptive Discrete
Cosine Transform loss function which can save the bitrate and keep essential
high frequency components as well. Furthermore, we also combine several
state-of-the-art techniques from low-level vision fields into our approach,
such as the high-order degradation model, efficient lightweight network design,
and Image Quality Assessment model. By jointly using these powerful techniques,
our RPP approach can achieve on average, 16.27% bitrate saving with different
video encoders like AVC, HEVC, and VVC under multiple quality metrics. In the
deployment stage, our RPP method is very simple and efficient which is not
required any changes in the setting of video encoding, streaming, and decoding.
Each input frame only needs to make a single pass through RPP before sending
into video encoders. In addition, in our subjective visual quality test, 87% of
users think videos with RPP are better or equal to videos by only using the
codec to compress, while these videos with RPP save about 12% bitrate on
average. Our RPP framework has been integrated into the production environment
of our video transcoding services which serve millions of users every day. | Chengqian Ma, Zhiqiang Wu, Chunlei Cai, Pengwei Zhang, Yi Wang, Long Zheng, Chao Chen, Quan Zhou | 2023-01-25T08:21:52Z | http://arxiv.org/abs/2301.10455v1 | # Rate-Perception Optimized Preprocessing for Video Coding
###### Abstract
In the past decades, much progress has been made in the video compression field, including traditional video codecs and learning-based video codecs. However, few studies focus on using preprocessing techniques to improve the rate-distortion performance. In this paper, we propose a rate-perception optimized preprocessing (RPP) method. We first introduce an adaptive Discrete Cosine Transform loss function which can save the bitrate and keep essential high frequency components as well. Furthermore, we also combine several state-of-the-art techniques from low-level vision fields into our approach, such as the high-order degradation model, efficient lightweight network design, and Image Quality Assessment model. By jointly using these powerful techniques, our RPP approach can achieve, on average, 16.27% bitrate savings with different video encoders like AVC, HEVC, and VVC under multiple quality metrics. In the deployment stage, our RPP method is very simple and efficient and does not require any changes in the setting of video encoding, streaming, and decoding. Each input frame only needs to make a single pass through RPP before being sent into video encoders. In addition, in our subjective visual quality test, 87% of users think videos with RPP are better than or equal to videos compressed by only using the codec, while these videos with RPP save about 12% bitrate on average. Our RPP framework has been integrated into the production environment of our video transcoding services which serve millions of users every day. Our code and model will be released after the paper is accepted.
## 1 Introduction
In recent years, the demand for online streaming high-definition video has been growing rapidly and is expected to continue to grow in the following years. These streaming high-definition videos consume huge bandwidth, accounting for more than 80% of all consumer Internet traffic [14]. Therefore, it is essential to build a highly efficient video compression system to generate better video quality at a given bandwidth budget. Thus, many video coding standards have been developed during the past decades, such as H.264 [46], H.265 [38], H.266 [8], and AOMedia Video 1 (AV1) [11]. These traditional methods are built on many handcrafted modules, such as block partition, Discrete Cosine Transform (DCT) [2], and intra/inter prediction. While these handcrafted methods have achieved good rate-distortion performance, learned video compression methods [10, 28, 29] still attract more and more attention, inspired by the success of deep neural networks in other fields of image processing. These learned methods claim to achieve comparable or even better performance than traditional codecs. However, most existing learned video compression methods increase the complexity on both the encoder and decoder sides. This computationally heavy decoder
makes deployment not viable, especially on end-user devices such as mobile phones and laptops. Some studies try to convert the essential components of standard hybrid video encoder designs into a trainable framework in order to end-to-end optimize all the modules in the video encoder [29, 49]. However, few studies have attempted to use preprocessing methods to improve the rate-distortion performance of video compression systems.
In this paper, we propose a rate-perception optimized preprocessor (RPP) that can efficiently optimize the rate and visual quality at the same time in an independent single forward pass. In particular, we introduce the adaptive Discrete Cosine Transform (DCT) loss into the training stage of the RPP. In addition, we also incorporate the full-reference image quality assessment model MS-SSIM [45] into the training to optimize the perceptual quality of the model. At the same time, we design a lightweight fully convolutional neural network with an attention mechanism to improve efficiency.
The contributions of our work can be summarized as follows:
* We first introduce the adaptive Discrete Cosine Transform (DCT) loss, which can reduce spatial redundancy while still keeping the important high frequency components for the content. In our experiments, involving the adaptive DCT loss in training significantly saves bitrate and maintains the visual quality of the video.
* We propose a rate-perception optimized preprocessor (RPP) which is a light-weight fully convolutional neural network with attention mechanism. The RPP model is balanced between perception and distortion by utilizing both adaptive DCT loss and reference-based IQA loss functions. We also introduce the higher-order degradation model into our training stage to enhance the visual quality of the preprocessed frame.
* Our approach can be easily plugged into the preprocessing pipeline of any standard video codec, such as AVC, HEVC, AV1, or VVC. Powered by our approach, these standard video codecs can achieve better BD-rate performance without any changes or sacrifices in video encoding and decoding. Compared with state-of-the-art video codecs, our model can reduce the BD-rate by about 16.27% on average under multiple quality metrics. Furthermore, our RPP model is extremely efficient, achieving 1080p@87FPS during inference, which is far beyond real-time efficiency.
## 2 Related Work
### Image Compression
In the past decades, a lot of traditional image compression methods like JPEG [40], JPEG2000 [13] and BPG [6] have been proposed. These methods have achieved high performance in reducing image size efficiently by exploiting hand-crafted techniques. One of the most important parts of those hand-crafted designs is the transform, such as the DCT. The DCT linearly maps the pixels into the frequency domain. One advantage of the DCT is that it can compact energy, which makes it easy to reduce the spatial redundancy of the image. After transformation, these methods quantize the corresponding coefficients and then do the entropy coding. Recently, thanks to DNNs, learning-based image compression methods [3, 4, 31] have achieved competitive or better performance than traditional image compression codecs.
### Video Compression
There is a long history of progress for video compression methods. During the past decades, several video coding standards have been proposed and widely used in the real world, such as H.264 [46], H.265 [38], H.266 [8], and AOMedia Video 1 (AV1) [11]. With the continuous development of video coding standards, these traditional video compression methods provide strong performance and have made significant improvements. These methods are also practical to use with hardware support in real-world applications, such as online video streaming, digital TV, etc. In recent years, many DNN-based methods have been proposed for every part of video coding, such as intra prediction and residual coding [10], mode decision [28], entropy coding, etc. Those methods are employed to improve the performance of one specific module of the traditional video codec. Instead of replacing a particular component of the traditional video compression codec, some approaches focus on end-to-end optimized video compression frameworks [29, 49]. In addition, A. Chadha _et al._ [9] try to convert the essential components of standard video encoder designs into a trainable framework and jointly optimize a preprocessor with the differentiable framework in an end-to-end manner.
### Metrics
In the past decades, Peak Signal-to-Noise Ratio (PSNR) was the most widely used full-reference method for assessing video fidelity and quality and it continues to play a fundamental role in evaluating video compression algorithms. However, the PSNR has been proven that has a poor correlation with human perception [17, 43]. Thus, a variety of full-reference image quality assessments (IQA) or video quality assessments (VQA) has been proposed [5, 25, 36, 44]. For
example, Structural Similarity (SSIM) index [44] estimates perceptual distortions by considering structural information, and its variant MultiScale-SSIM (MS-SSIM) [45] provides better performance and more flexibility by incorporating multiscale resolution processing. Video Multi-method Assessment Fusion (VMAF) [25, 26] is another mainstream evaluation metric in the real-world industry, where many famous commercial companies like Netflix [26], Meta [34], Tiktok [48], Intel [22], etc., and standardization bodies such as AOMedia [12] adopt it for video codec evaluation. VMAF combines three quality features, Visual Information Fidelity (VIF) [36], Detail Loss Metric (DLM) [23], and Motion, to train a Support Vector Machine (SVM) regressor [15] to predict the subjective score of video quality. Many studies have demonstrated that VMAF is remarkably more correlated to the Mean Opinion Score (MOS) than SSIM and PSNR [33, 5, 47].
### Image Enhancement
Image enhancement has been a long-standing problem owing to its vital practical value in all kinds of vision applications. Recently, with the development of deep learning techniques such as network design and gradient-based optimization, learning-based methods [1, 42] have shown promising performance in various fields of image enhancement including super-resolution, denoising, deblurring, etc. Some methods [20, 27] aim at achieving real-time image super-resolution with well-designed lightweight CNNs which can obtain better results with limited computational effort. Other approaches [16, 42] focus on designing degradation models which aim to model the complex degradation process of the image. Wang _et al._ [42] use a high-order degradation process to simulate complex real-world degradations. While a lot of great work has been done in the image enhancement field, few works utilize these methods and techniques for video coding.
## 3 Proposed Method
### Overview
In this section, we give a brief overview of our rate-perception optimized preprocessing (RPP) method. The goal of our preprocessing model is to provide a preprocessed input frame that is optimized with both rate and perception via a learnable preprocessing neural network. Specifically, in order to optimize our model in the balance between rate and distortion, we design an adaptive DCT loss that can reduce the spatial redundancy and keep the essential high frequency components for perception in the meantime. On the other hand, for the perception optimization part, we aim to perceptually enhance our preprocessed input frame by using the full-reference IQA model: SSIM. We utilize the IQA model as the loss function in our training procedure. In addition, we combine the higher-order degradation modeling process to simulate real-world complex degradation [42]. By using this higher-order degradation method to generate the pair of training data, our preprocessing network can be trained to handle some complicated degradations in the real world which can also improve the perceptual quality of the output from the network. Furthermore, for the sake of performance and efficiency, we construct a light-weight fully convolutional neural network with a channel-wise attention mechanism [18]. In the deployment framework, for a given video frame \(f_{i}\), it simply goes a single forward pass through the RPP network. Then the processed frame \(f_{o}\) from the RPP network can be encoded by a standard video codec, such as an AVC [46], HEVC [38], VVC [8], or AV1 [11] encoder.
### Adaptive Discrete Cosine Transform Loss
Although many years have passed since the DCT was first introduced in image/video compression algorithms, DCT-like transforms remain the mainstream transforms today because of their high effectiveness and ease of use. Generally, the basis function of the two-dimensional (2D) DCT can be written as:
\[B_{h,w}^{i,j}=\cos\frac{h\pi}{H}\left(i+\frac{1}{2}\right)\cos\frac{w\pi}{W}\left(j+\frac{1}{2}\right) \tag{1}\]
Then the 2D DCT is formulated as:
\[F_{h,w}=\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}f_{i,j}B_{h,w}^{i,j} \tag{2}\]
\[s.t.\quad h\in\{\mathit{0},\mathit{1},\cdots,\mathit{H-1}\},w\in\{\mathit{0}, \mathit{1},\cdots,\mathit{W-1}\}\]
where \(F\in\mathbb{R}^{H\times W}\) is the 2D DCT frequency spectrum, \(f\in\mathbb{R}^{H\times W}\) is the input frame, \(H\) is the height of \(f\), and \(W\) is the width of \(f\). Normally, the height and width are the same, and \(H\) and \(W\) are both denoted as \(N\) in the most common cases.
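As a reference for the notation above, the following minimal NumPy sketch evaluates Eqs. (1)–(2) directly for a small block; the block size and the random test frame are illustrative assumptions, and no normalization is applied, matching the unnormalized form of Eq. (2).

```python
import numpy as np

def dct2_direct(f):
    """Unnormalized 2D DCT of a frame f (Eqs. (1)-(2)): F[h, w] = sum_{i, j} f[i, j] * B[h, w, i, j]."""
    H, W = f.shape
    i = np.arange(H)
    j = np.arange(W)
    # 1D cosine bases along each axis: C_h[h, i] = cos(h * pi / H * (i + 1/2))
    C_h = np.cos(np.outer(np.arange(H), i + 0.5) * np.pi / H)
    C_w = np.cos(np.outer(np.arange(W), j + 0.5) * np.pi / W)
    # separable transform: F = C_h @ f @ C_w^T
    return C_h @ f @ C_w.T

if __name__ == "__main__":
    f = np.random.rand(8, 8)            # illustrative 8x8 block
    F = dct2_direct(f)
    print(F.shape, F[0, 0], f.sum())    # F[0, 0] equals the sum of all pixels (the DC term)
```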
Given the input frame \(f\), the DCT converts blocks of pixels into same-sized blocks of frequency coefficients. As mentioned above, the DCT has a crucial property: the blocks of frequency coefficients separate the high-frequency components from the low-frequency ones. In an image, most of the energy is concentrated in the lower frequencies, so traditional compression algorithms simply throw away the higher frequency coefficients to reduce spatial redundancy. However, some of the high frequency components also play a very important role in the visual quality of the whole frame. Therefore, we introduce an adaptive DCT loss for video preprocessing. First, we use the DCT to transform a frame \(f\) into the frequency domain. Second, we select the high-frequency coefficients with a mask \(I\) defined through the ZigZag order traversal. The formula can be written as:
\[F_{h,w}^{\prime}=F_{h,w}*I_{h,w} \tag{3}\]
\[\begin{split}& where\quad I_{h,w}=\begin{cases}0&\text{if}\ (h+w)<S\\ 1&\text{if}\ (h+w)\geq S\end{cases}\\ & S\in\{0,1,\cdots,(H-1)(W-1)\}\end{split} \tag{4}\]
In the DCT frequency domain, the value of a frequency coefficient indicates how much energy this frequency component contributes to the whole frame. If a frequency component has less energy, it is relatively less essential for reconstructing the frame, so we want to discard high frequency components whose coefficients have relatively small magnitudes. To this end, we take the mean of the absolute values of the selected coefficients \(F^{\prime}_{h,w}\) to obtain a threshold \(T\), which can be formulated as:
\[T=\frac{1}{H\cdot W}\sum_{h=i}^{H-1}\sum_{w=j}^{W-1}(|F^{\prime}_{h,w}|) \tag{5}\]
\[\begin{split}& where\quad i+j\geq N\end{split}\]
If \(|F^{\prime}_{h,w}|\) is smaller than the threshold \(T\), the coefficient has a below-average effect on reconstructing the frame, and we place it into another set of coefficients \(F^{\prime\prime}_{h,w}\). Finally, we calculate the mean absolute error between the filtered DCT frequency coefficients \(F^{\prime\prime}_{h,w}\) and zero, which can be written as:
\[\begin{split}\mathcal{L}_{dct}=\sum_{h=i}^{H-1}\sum_{w=j}^{W-1} |F^{\prime\prime}_{h,w}-0|,\\ F^{\prime\prime}_{h,w}\in\{|F^{\prime}_{h,w}|<T\}\quad and\quad i +j\geq N\end{split} \tag{6}\]
By using this loss function during model training, the model is optimized to preserve the essential high frequency components and discard the trivial ones. With this optimization, frames processed by the model let the video encoder allocate more bit rate to the important high frequency components such as edges and high-contrast areas. Meanwhile, since the adaptive DCT loss filters some trivial high frequency components toward zero, it also benefits the entropy coding process [19, 35], which consumes much less bitrate on runs of consecutive zeros.
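Putting Eqs. (3)–(6) together, a minimal PyTorch sketch of the adaptive DCT loss might look as follows; the block size N, the anti-diagonal cut-off S, the direct matrix-multiplication DCT, and taking the threshold as the mean over the selected coefficients are illustrative assumptions rather than the authors' exact implementation.

```python
import torch

def dct_matrix(N):
    """Unnormalized 1D DCT basis: C[h, i] = cos(h * pi / N * (i + 0.5))."""
    h = torch.arange(N).float().unsqueeze(1)
    i = torch.arange(N).float().unsqueeze(0)
    return torch.cos(h * torch.pi / N * (i + 0.5))

def adaptive_dct_loss(frame, N=8, S=None):
    """Adaptive DCT loss (Eqs. 3-6) applied per NxN block of a single-channel frame of shape [H, W]."""
    S = N if S is None else S                        # anti-diagonal cut-off; assumed S = N here
    C = dct_matrix(N).to(frame)
    H, W = frame.shape
    blocks = frame[: H - H % N, : W - W % N]
    blocks = blocks.unfold(0, N, N).unfold(1, N, N)  # [H//N, W//N, N, N]
    F = C @ blocks @ C.t()                           # 2D DCT of every block
    h = torch.arange(N).unsqueeze(1)
    w = torch.arange(N).unsqueeze(0)
    high = ((h + w) >= S).to(frame)                  # Eq. (4): high-frequency mask I
    F_high = (F * high).abs()                        # Eq. (3): masked coefficients |F'|
    T = F_high.sum(dim=(-2, -1), keepdim=True) / high.sum()   # Eq. (5): per-block threshold
    trivial = (F_high < T) & (high > 0)              # coefficients F'' below the threshold
    return (F.abs() * trivial).sum() / trivial.sum().clamp(min=1)   # Eq. (6), averaged

# usage on a random single-channel frame (illustrative)
loss = adaptive_dct_loss(torch.rand(128, 128))
```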
### Network and Image Degradation
Inspired by light-weight network architectures from the image enhancement field, we adopt a few ideas from them [20, 27]. Specifically, on top of a feature extraction block such as RFDB [27], we add a channel attention mechanism [18] so that the network pays more attention to different channel frequencies. Moreover, we use the efficient sub-pixel convolution first introduced by Shi _et al._[37] to downscale and upscale the resolutions of feature maps. The overall network architecture is shown in Fig. 4.
The way the degradation of the training data is modeled is important for improving visual quality during network training. We include several general degradation methods [16] in our degradation model, such as blur, noise, resizing, and JPEG compression. For blur, we model the degradation with isotropic and anisotropic Gaussian filters. We choose two commonly-used noise types, Gaussian noise and Poisson noise, for noise degradation. For resizing, we use both upsampling and downsampling with several resize algorithms, including area, bilinear, and bicubic operations.
Figure 2: Example framework of training RPP. (a) is the histogram of frequency coefficient of the predicted frame. (b) is the histogram of frequency coefficient filtered by the adaptive DCT function
Since in real-world applications the input frames of our framework are mostly decoded from a compressed video, we add a video compression degradation, which may introduce blocking and ringing artifacts in the spatial and temporal domains. As we mentioned before, high-order degradation modeling [42] has been proposed to better simulate complex real-world degradations, and we utilize this idea in our image degradation model as well. By generating training pairs with these degradation models, our objective is to give the model the ability to remove common noise and compression noise, which also optimizes the rate because video codecs cannot encode noise well.
### Loss Functions
Our target is to train the preprocessing network by optimizing rate and perception at the same time. In order to optimize both rate and perception of the reconstructed frame \(\hat{f}\) relative to the input frame \(f\), we combine the adaptive DCT loss \(\mathcal{L}_{dct}\), the reconstruction loss \(\mathcal{L}_{r}\), and the perceptual loss \(\mathcal{L}_{p}\). \(\mathcal{L}_{dct}\) is the loss introduced in Eq. 6 to optimize the trade-off between rate and distortion. For the reconstruction loss \(\mathcal{L}_{r}\), we want to ensure the basic reconstruction ability of the model, so we adopt the L1 distance, which can be formulated as:
\[\mathcal{L}_{r}=\frac{1}{HW}\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}|f_{i,j}^{GT}-\hat {f}_{i,j}| \tag{7}\]
in which \(f^{GT}\) is the processed ground truth of \(f\). It is common knowledge that the contrast and edges in high frequency areas correlate strongly with human perception. Multiscale structural similarity (MS-SSIM) [45] has been shown to be good at preserving structural information and contrast in high frequency regions. Thus, we adopt MS-SSIM as our perceptual loss, which can be written as:
\[\mathcal{L}_{p}=1-\mathcal{L}_{MS-SSIM}(\hat{f},f^{GT}) \tag{8}\]
With the combination of \(\mathcal{L}_{dct}\), \(\mathcal{L}_{r}\), and \(\mathcal{L}_{p}\), our overall loss function can be formulated as:
\[\mathcal{L}_{all}=\lambda_{1}\mathcal{L}_{dct}+\lambda_{2}\mathcal{L}_{p}+ \mathcal{L}_{r} \tag{9}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are the rate and perceptual coefficients, respectively.
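As an illustration of Eqs. (7)–(9), the overall training objective could be assembled as below; the MS-SSIM implementation from the third-party `pytorch_msssim` package and the way the adaptive DCT loss is applied to a single channel are assumptions for the sketch (the paper's post-warm-up settings are λ1 = 10 and λ2 = 0.1).

```python
import torch.nn.functional as F
from pytorch_msssim import ms_ssim   # assumed third-party MS-SSIM implementation

def rpp_loss(pred, gt, lambda1=10.0, lambda2=0.1):
    """Overall RPP objective (Eq. 9): L_all = lambda1 * L_dct + lambda2 * L_p + L_r."""
    l_r = F.l1_loss(pred, gt)                        # Eq. (7): L1 reconstruction loss
    l_p = 1.0 - ms_ssim(pred, gt, data_range=1.0)    # Eq. (8): perceptual (MS-SSIM) loss
    l_dct = adaptive_dct_loss(pred[0, 0])            # Eq. (6): sketch from Section 3.2,
                                                     # applied to one channel of one sample for brevity
    return lambda1 * l_dct + lambda2 * l_p + l_r
```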
## 4 Experiments
### Experiments Setup
**Datasets.** We adopt the DIV2K and Flickr2K datasets [1] for training our RPP model; DIV2K has 1000 high-definition 2K-resolution images and Flickr2K has 2650 2K-resolution images. To evaluate the performance of our proposed method, we test it on the UVG dataset [30], the HEVC Standard 1080p Test Sequences [7], and the MCL-JCV dataset [41]. With their diverse content, these datasets are widely used to evaluate the performance of video compression algorithms.
**Implementation Details.** We train our RPP model in two stages. In the first warm-up stage, we train the model on the reconstruction loss \(\mathcal{L}_{r}\) using the Adam optimizer [21] with an initial learning rate of \(1\times 10^{-3}\), \(\beta_{1}\) of 0.9, and \(\beta_{2}\) of 0.999. The mini-batch size is set to 32. The training images have resolution \(128\times 128\) and are randomly cropped from the original images in the datasets. After 600K iterations of warm-up training, we switch to the overall loss function \(\mathcal{L}_{all}\), setting \(\lambda_{1}\) to 10 and \(\lambda_{2}\) to 0.1, and adjust the learning rate to \(1\times 10^{-4}\). For the adaptive DCT loss, we use both \(N\)=8 and \(N\)=16 to train the network at the same time, since the most common macroblock sizes in traditional video codecs are 8 and 16. With this setting, we train our RPP model for another 700K iterations until it converges. The training data of both stages are augmented by our two-order image degradation model. The whole training framework is implemented in PyTorch [32], and it takes only about one day to train the network on two NVIDIA GeForce RTX 3090 GPUs. In the deployment stage, the input frame is first sent into the deployed RPP model for preprocessing. We introduce a hyper-parameter \(\alpha\) to control the preprocessing intensity of our approach for cases that do not require intensive preprocessing with our pretrained model setting and are sensitive to all high-frequency information in the video. The value of \(\alpha\) is chosen empirically from experiments. The preprocessed frame can be written as:
\[f_{p}=\alpha f_{o}+(1-\alpha)f_{i} \tag{10}\]
where \(f_{o}\) is the output frame of the RPP model and \(f_{i}\) is the input frame. The preprocessed frame is then encoded by a standard video codec. Importantly, benefiting from our network design, our RPP model achieves 87.7 FPS inference performance for 1080p videos when deployed with TensorRT [39] on a single NVIDIA GeForce RTX 3090. The inference performance on 720p and 4K is 185 FPS and 22.6 FPS, respectively.
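For completeness, the deployment-time blending of Eq. (10) amounts to one line per frame; the sketch below is a hypothetical wrapper in which `rpp_model` is assumed to be a callable returning an array of the same shape, and handing the blended frames to an AVC/HEVC/VVC/AV1 encoder is left to whatever encoding pipeline is in use.

```python
import numpy as np

def preprocess_frame(rpp_model, frame, alpha=0.5):
    """Eq. (10): blend the RPP output with the original frame to control preprocessing intensity."""
    f_i = frame.astype(np.float32)
    f_o = rpp_model(f_i)                      # single forward pass through the RPP network
    f_p = alpha * f_o + (1.0 - alpha) * f_i   # preprocessed frame f_p
    return np.clip(f_p, 0, 255).astype(np.uint8)
```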
**Evaluation Method.** To measure the performance of our proposed method, we use two evaluation metrics: MS-SSIM and VMAF. MS-SSIM is the most common metric in the academic video codec area, and VMAF is a mainstream perceptually-oriented metric in the video-streaming industry. We test our proposed method with AVC/H.264, HEVC/H.265, VVC/H.266, and AV1, which cover all the popular standard video codecs.
### Experiments Results
In this section, we show the experimental results of the comparison between standard video codecs and our RPP + standard video codecs. We fix \(\alpha\) = 0.5 in Eq. 10 for both the HEVC dataset and the MCL-JCV dataset, and \(\alpha\) = 1 for the UVG dataset. The results in Figure 3(a) and Tables 1, 2, and 3 show that our proposed method clearly improves the BD-rate under both metrics with standard codecs over all three datasets. The average saving of RPP + H.264 is 18.21% under VMAF and 8.73% under MS-SSIM over the three datasets. The average saving of RPP + H.265 is 24.62% under VMAF and 13.51% under MS-SSIM over the three datasets. Some learning-based video encoders [29] have been shown to outperform traditional standard codecs only under the 'very fast' preset. To demonstrate the generalizability of our approach, we also test our RPP approach with the 'medium' preset. As shown in the top figure of Figure 3(b), our approach still outperforms the standard codecs, consistent with the 'very fast' preset results in Figure 3(a). Furthermore, we test our RPP approach with H.266 on the UVG dataset and the HEVC Class B dataset. As shown in the bottom figure of Figure 3(b), the average saving of RPP + H.266 is 8.42% under MS-SSIM over the two datasets. As expected, our approach obtains significant gains when used jointly with all the mainstream standard codecs. In addition, our method has a lower bitrate than the standard codec under the same Quantization Parameter (QP), which demonstrates the bit-saving ability of our approach.
The adaptive DCT loss in particular has a very impressive effect: compared to the BD-rate savings in Table 1, it contributes over 60% of the bitrate savings of the whole approach.
**Choice and Analysis of Hyper-parameter \(\alpha\).** We test our approach on the HEVC Class B dataset and MCL-JCV by setting different \(\alpha\) values (0.2, 0.5, 0.8, 1.0) in Eq. 10. From Figure 4(b), we can see that \(\alpha\) = 0.5 has the best rate-distortion curve compared to the other values of \(\alpha\). As mentioned before, \(\alpha\) controls the preprocessing intensity of our approach. From our perspective, there are two reasons to have a hyper-parameter controlling the intensity. First, our model is trained with a fixed setting on a small public dataset, which means the data is not diverse enough. Second, some videos are extremely sensitive to the high frequency components, so our pretrained model with its fixed setting may over-preprocess them.
## 5 Conclusion
In this paper, we propose a rate-perception optimized preprocessing (RPP) method that generates a rate-optimized and perceptually enhanced frame via a neural network for video coding. In the deployment stage, our RPP approach is plug-and-play with standard video codecs and does not require any changes to the encoding or decoding settings. In addition, our proposed method is very efficient and achieves far beyond real-time performance. As shown in the experimental results, our RPP approach achieves considerable and consistent gains with all mainstream standard video codecs on different metrics.
|
2302.06767 | Coalitional Game Theory in Power Systems: Applications, Challenges, and
Future Directions | Game theory-based approaches have recently gained traction in a wide range of
applications, importantly in power and energy systems. With the onset of
cooperation as a new perspective for solving power system problems, as well as
the nature of power system problems, it is now necessary to seek appropriate
game theory-based tools that permit the investigation and analysis of the
behavior and relationships of various players in power system problems. In this
context, this paper performs a literature review on coalitional game theory's
most recent advancements and applications in power and energy systems. First,
we provide a brief overview of the coalitional game theory's fundamental ideas,
current theoretical advancements, and various solution concepts. Second, we
examine the recent applications in power and energy systems. Finally, we
explore the challenges, limitations, and future research possibilities with
applications in power and energy systems in the hopes of furthering the
literature by strengthening the applications of coalitional game theory in
power and energy systems. | Mukesh Gautam, Mohammed Ben-Idris | 2023-02-14T00:41:22Z | http://arxiv.org/abs/2302.06767v1 | # Coalitional Game Theory in Power Systems: Applications, Challenges, and Future Directions
###### Abstract
Game theory-based approaches have recently gained traction in a wide range of applications, importantly in power and energy systems. With the onset of cooperation as a new perspective for solving power system problems, as well as the nature of power system problems, it is now necessary to seek appropriate game theory-based tools that permit the investigation and analysis of the behavior and relationships of various players in power system problems. In this context, this paper performs a literature review on coalitional game theory's most recent advancements and applications in power and energy systems. First, we provide a brief overview of the coalitional game theory's fundamental ideas, current theoretical advancements, and various solution concepts. Second, we examine the recent applications in power and energy systems. Finally, we explore the challenges, limitations, and future research possibilities with applications in power and energy systems in the hopes of furthering the literature by strengthening the applications of coalitional game theory in power and energy systems.
Coalitional game theory, energy systems, nucleus, power systems, Shapley value.
## 1 Introduction
Game theory-based approaches provide a set of mathematical tools to assess complex interactions and rational behaviors of economic agents in a mutually interactive setting [1]. Specifically, coalitional game theory-based approaches have attracted considerable attention of power system researchers because of their ability to uniquely assign payoff among players of the game taking into account their marginal contributions [2]. Coalitional and non-coalitional game theory-based methodologies have been extensively used in a number of power system-related disciplines. Planning, economics, operations, and control of the power system are some of these areas. Game theory-based methods have been applied to determine the optimum dispatch strategies of thermal power plants in [3], coordinate the charging of plug-in hybrid electric vehicles in [4], and allocate the cost of transmission system losses to market participants (customers and power generating companies) in [5].
The primary focus of coalitional game theory is the distribution of rewards obtained from player collaboration. When members of a coalition work together and take coordinated action, the collective wealth or value of the coalition frequently changes. Naturally, a fascinating and significant issue that has drawn the attention of many mathematicians and scientists is how to distribute the collective reward in an equitable and consistent way. The total payment or incentive of a coalitional game is divided among the players using various solution concepts such as the Shapley value, the core, the nucleolus, and the Nash-bargaining solution.
Applications of coalitional game theoretic approaches in power and energy systems have gained significant momentum in recent years due to their ability to uniquely assign payoffs among players of the game taking into consideration their marginal contributions. A coalitional game theory-based energy management system has been proposed in [6] to facilitate power exchange of microgrids connected with the main grid. In [7], a game theory-based strategy for improving system reliability and reducing power loss in active distribution networks and microgrids has been proposed, where the locational marginal cost was computed at each bus, and each player in the game received financial rewards when system reliability level was improved and the network power losses were reduced. In [8], a coalitional game theory-based strategy has been presented for involvement of distributed energy resources in the distribution network to take part in secondary frequency regulation. To determine the optimal locations and sizes of distributed energy resources in distribution systems, a methodology based on coalitional game theory has been presented in [9]. In [10], a multi-stage optimum planning of multi-microgrids utilizing deep learning and coalitional game theory has been developed, where a deep neural network was utilized for forecasting, and coalitional game theory was used to determine optimal set points for the multi-microgrid. A two-layer game model was developed in [11] to enhance the integrated energy system, where an upper-level Stackelberg game model of the improved energy network was firstly optimized and a cooperative game model for the users, the supply system, and the integrated energy system was developed to carry out an internal optimization. The viability of peer-to-peer (P2P) energy trading in a grid-connected system with voltage limits has been investigated in [12], where a local voltage control strategy that considers network constraints and prompts prosumers to engage in energy trading has been proposed. Prosumers participating in the P2P energy trading proposed in [12] could join a coalition to discuss and determine the energy trading specifications, including trading volumes and pricing, under a coalitional game-based structure.
Authors of [13] have conducted a survey of coalitional game theory applications focusing on expansion planning of power systems. Contrary to the aforementioned paper, this paper presents a review of recent advancements and applications of coalitional game theory in overall power and energy systems.
This paper starts by giving the basic introduction of coalitional games along with various solution concepts including the core, Shapley value, nucleolus, and Nash bargaining solutions. Then, the recent advancements of coalitional game theory in both transmission and distribution systems are explained. Moreover, the paper examines the applicability, challenges, and limitations of coalitional game theory through a case study in a 33-node distribution system considering the reserve allocation problem in active distribution systems.
The remainder of the paper is organized as follows: Section 2 present an introduction to coalitional game theory along with various solution concepts; Section 3 presents various applications of coalitional game theory in power and energy systems; Section 4 presents a case study to explain applicability, challenges, and limitations of coalitional game theory; and Section 5 summarizes the paper, and presents conclusion and future research directions.
## 2 Coalitional Game Theory
In game theory, games are generally categorized into two groups: (a) coalitional games and (b) non-coalitional games. In non-coalitional games, there is no coalition or cooperation between players, and they compete with each other to optimize their individual utility functions, while in coalitional games the players form alliances or coalitions with each other to optimize both individual and coalitional utility functions. A coalition must always yield utilities that are equal to or greater than the players' individual utilities, since members establish coalitions to optimize their individual utility functions. Non-coalitional games focus mainly on maximizing the individual utilities of the players, while coalitional games focus on improving the joint utility of the coalition [14]. A coalitional game is described by assigning a value to each coalition. The following two elements comprise a coalitional game:
1. A set of players \(\mathcal{N}\), also referred to as the grand coalition.
2. A characteristic function \(V(S):2^{\mathcal{N}}\rightarrow\mathbb{R}\) that converts the set of all feasible player coalitions into a set of coalitional worths or values satisfying the condition \(V(\phi)=0\).
Every coalitional game specifies a characteristic function that represents the worths or values of all coalitions; the value jointly generated by the members of a coalition is the characteristic function of that coalition. Solution concepts such as the core, the nucleolus, and the Shapley value are the most popular ways to allocate the overall payoff or incentive among the individual players of a coalitional game.
### _Core of a Coalitional Game_
In game theory, the core is the set of feasible payoff allocations that cannot be improved upon by any alternative coalition. The core is a set of payoff allocations that ensures no player or group of players has an incentive to leave \(\mathcal{N}\) to establish a new coalition. Mathematically, the core is defined as follows [15].
\[\mathcal{C}=\left\{\alpha:\sum_{j\in\mathcal{N}}\alpha_{j}=V(\mathcal{N})\text { and }\sum_{j\in S}\alpha_{j}\geq V(S),\forall S\subset\mathcal{N}\right\} \tag{1}\]
There is no certainty that the cores of coalitional games will always exist. In many cases, the core is in fact empty, making it impossible to stabilize the grand coalition [16]. Moreover, the core doesn't always give a unique solution and in many cases, the payoff distribution based on the core can be unfair to some players [16]. Shapley value or some alternate solution concept may be applied in these circumstances.
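As a small illustration of Eq. (1), the sketch below checks whether a candidate payoff vector lies in the core of a three-player game; the characteristic-function values are made-up numbers used only for demonstration.

```python
from itertools import combinations

def in_core(players, v, payoff):
    """Check Eq. (1): payoffs are efficient and no sub-coalition S gets less than its worth V(S)."""
    if abs(sum(payoff[p] for p in players) - v[frozenset(players)]) > 1e-9:
        return False                      # efficiency: grand-coalition value must be fully distributed
    for r in range(1, len(players)):
        for S in combinations(players, r):
            if sum(payoff[p] for p in S) < v[frozenset(S)] - 1e-9:
                return False              # coalition S would deviate
    return True

# illustrative 3-player game: grand coalition worth 6, pairs 3, singletons 1
players = [1, 2, 3]
v = {frozenset(s): w for s, w in [((1,), 1), ((2,), 1), ((3,), 1), ((1, 2), 3),
                                  ((1, 3), 3), ((2, 3), 3), ((1, 2, 3), 6)]}
print(in_core(players, v, {1: 2, 2: 2, 3: 2}))   # True: the symmetric allocation is in the core
```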
### _Nucleolus_
The nucleolus is another important concept in coalitional game theory introduced by Schmeidler [17] in 1969. It is founded on the idea of reducing the dissatisfaction of coalition(s) starting with the most dissatisfied coalition(s) [17]. The excess of a coalition is the difference between the sum of actual payoffs received by players in the coalition and the worth or value of the coalition. Nucleolus is defined as a payoff distribution vector \(\mathbf{x}\) such that the excess (given by (2)) of any potential coalition cannot be lowered without raising any other higher excess.
\[e_{S}(\mathbf{x})=V(S)-\sum_{j\in S}x_{j}, \tag{2}\]
where \(\sum_{j\in S}x_{j}\) denotes the actual value of total payoff received by the players of coalition \(S\) and \(V(S)\) denotes the worth or value of coalition \(S\).
If the core is not empty, the nucleolus lies in the core as well, guaranteeing the grand coalition's stability [17]. However, obtaining the nucleolus might not be simple because of numerical issues [18]. Additionally, none of the monotonicity requirements are guaranteed [18]. Moreover, the payoff distribution based on the nucleolus can be unstable if the core is empty. Therefore, when adopting the nucleolus, it might still be essential to ensure that the core is not empty.
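To make Eq. (2) concrete, the following lines compute the coalition excesses of a candidate allocation, reusing `players` and `v` from the illustrative game in the core example above; sorting the excesses in decreasing order is the starting point of a nucleolus computation.

```python
from itertools import combinations

def excesses(players, v, payoff):
    """Eq. (2): e_S(x) = V(S) - sum of payoffs of players in S, for every proper non-empty coalition S."""
    out = {}
    for r in range(1, len(players)):
        for S in combinations(players, r):
            out[S] = v[frozenset(S)] - sum(payoff[p] for p in S)
    return sorted(out.items(), key=lambda kv: kv[1], reverse=True)

print(excesses(players, v, {1: 2, 2: 2, 3: 2}))   # every coalition has excess -1 under the symmetric allocation
```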
### _Shapley Value_
The Shapley value is another solution concept of coalitional game theory. In other words, the Shapley value is a method of distributing the total payoff among the players when all of them cooperate in the grand coalition. The Shapley value of player \(j\) is mathematically represented as follows [19].
\[\psi_{j}(V)=\sum_{S\subseteq\mathcal{N}\setminus\{j\}}\frac{|S|!\,\left(|\mathcal{N}|-|S|-1\right)!}{|\mathcal{N}|!}\left[V(S\cup\{j\})-V(S)\right] \tag{3}\]

The Shapley value satisfies the following properties [19]; a brute-force computational sketch based on Eq. (3) is given after this list.
1. _Efficiency:_ The grand coalition's value is equal to the total of all players' Shapley values, therefore all gains are allocated among the players. Mathematically, \[\sum_{j\in\mathcal{N}}\psi_{j}(V)=V(\mathcal{N})\] (4)
2. _Individual Rationality:_ When a player participates in a coalition, then its Shapley value should exceed its individual value. Mathematically, \[\psi_{j}(V)\geq V(\{j\}),\forall j\in\mathcal{N}\] (5)
3. _Symmetricity:_ When two players in a coalition make the same contribution, their Shapley values must be equal. Mathematically, for two players \(i\) and \(j\) satisfying \(V(S\cup\{i\})=V(S\cup\{j\})\) for each coalition \(S\) without \(i\) and \(j\), \[\psi_{i}(V)=\psi_{j}(V)\] (6)
4. _Dummy:_ When a player does not increase the value of any coalition it joins, its Shapley value should be zero. Mathematically, for a player \(i\) satisfying \(V(S)=V(S\cup\{i\})\) for each coalition \(S\) without \(i\), \[\psi_{i}(V)=0\] (7)
5. _Linearity:_ The Shapley value corresponding to the sum of characteristic functions equals the sum of Shapley values corresponding to the individual characteristic functions. Mathematically, for two characteristic functions \(V_{1}\) and \(V_{2}\) of a coalitional game, \[\psi(V_{1}+V_{2})=\psi(V_{1})+\psi(V_{2})\] (8)
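The brute-force sketch below evaluates Eq. (3) directly by enumerating all coalitions; the three-player characteristic function is the same illustrative game used above, extended with \(V(\emptyset)=0\), and the factorial-weighted sum makes the exponential cost discussed in Section 4 explicit.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of every player via Eq. (3); v maps frozensets of players to coalition worths."""
    n = len(players)
    phi = {}
    for j in players:
        others = [p for p in players if p != j]
        total = 0.0
        for r in range(n):                           # coalitions S not containing j, of every size
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v[S | {j}] - v[S])
        phi[j] = total
    return phi

# illustrative 3-player game: grand coalition worth 6, pairs 3, singletons 1, empty coalition 0
players = [1, 2, 3]
v = {frozenset(): 0}
v.update({frozenset(s): w for s, w in [((1,), 1), ((2,), 1), ((3,), 1),
                                       ((1, 2), 3), ((1, 3), 3), ((2, 3), 3), ((1, 2, 3), 6)]})
print(shapley_values(players, v))   # symmetric game, so each player receives 2.0
```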
## 3 Applications of Coalitional Game Theory in Power and Energy Systems
This section presents some of the major applications of coalitional game theory in power and energy systems including transmission and distribution systems operations and planning.
### _Loss Reduction Allocation of Distributed Generators_
A cooperative game theory-based approach has been implemented in [20] for loss reduction allocation of distributed generators using Shapley values. The paper has presented a new locational marginal pricing (LMP) strategy for distribution systems with substantial integration of distributed generation in a competitive energy market. The objective of the LMP technique presented in the paper was to reward distributed generators for their contribution to lowering power loss in distribution systems resulting from the participation of all distributed generators in serving demand. Additionally, an iteration-based algorithm has been integrated with the proposed approach. In contrast to the existing LMP-based methodologies, the method presented in the paper is intended to offer distribution companies an effective tool for estimating the system state.
### _Reliability-centered Maintenance_
The concept of Shapley value has been utilized in [21] to determine the critical components of the system for reliability-centered maintenance. The paper has stated that the prevention of potential disruptions and the provision of the necessary electricity for end-use customers of distribution networks were two major functions of the generation side of power systems. Since generator failures may have severe or minimal effects depending on the location of generators and network structure, the paper has considered both of these factors during reliability-centered maintenance. In addition to attempting a fair distribution of outage implications to the participating units in the case of \(N-k\) contingencies, this study has been primarily focused with the proposal of an index that could be used to sort out the generators based on their outage impacts on power system reliability. The proposed approach has employed Shapley Value to rank generators based on how their failure will affect the reliability of the system. The proposed approach can be used to identify the critical generators of the system, after which the reliability-centered maintenance can be successfully carried out.
### _Under-frequency Load Shedding_
A coalitional game theory-based approach has been proposed in [22] for under frequency load shedding control, where a real-time digital simulator has been utilized to compute rate of change of frequency (RoCoF). In order to efficiently and accurately calculate the locations and amounts of loads that need to be shed in order to regulate under frequency load shedding, the paper has provided a two-stage strategy based on cooperative game theory. Using the initial RoCoF referred to the equivalent inertial center, the total amount of loads to be shed, also known as the deficit in generation or the disturbance power, was calculated in the first step. In the second step, load shedding amounts and locations were determined using the Shapley value. The Western Electricity Coordinating Council (WECC) 9-bus 3-machine system has been used to implement the proposed approach, and real-time digital simulators were used for simulation. The findings demonstrated that the proposed under-frequency load shedding technique can successfully restore the system to its pre-disturbance state.
### _Formulation of Distributed Slack Buses_
A coalitional game theory-based method for calculating the participation factors of distributed slack bus generators has been proposed in [23]. The effectiveness of the proposed method has been shown by comparing it to a traditional method for computing the participation factors of distributed slack bus generators. The paper's proposed methodology was a two-stage strategy. The worth (or value) of each participant generator and the coalitions they were a member of was calculated in the first phase. The Shapley value was employed in the second phase to establish participation criteria for each participating generator. The mismatch power was then divided among the several generators using the participation factors.
Case studies on the IEEE 14-bus, IEEE 30-bus, and IEEE 57-bus systems were used to show the feasibility of the proposed methodology. The findings demonstrated that systems with distributed slack buses, as opposed to those with a single slack bus, have lower generation costs and power losses.
### _Sizing and Siting of Distributed Energy Resources_
For the purpose of sizing and siting distributed energy resources (DERs), a coalitional game-theoretic strategy has been developed in [24]. The k-means method has been utilized to perform scenario reduction before identifying potential locations for the deployment of DERs. The two-stage method for choosing the best DER sites and sizes was presented in the paper. Using the equivalent locational marginal prices (LMPs) per unit active power at each bus, a certain number of prospective locations for DERs were chosen in the first stage, and the worths of each prospective location and their coalitions were calculated. The weighted average of LMPs for reduced sets of load scenarios was used to calculate the equivalent LMPs. The Shapley value was employed in the second stage to obtain the optimum placements and sizes for DERs. Case studies performed on several IEEE systems demonstrated that employing the proposed approach as opposed to the existing approaches lowered the total cost of generation after DER deployment.
### _Integrated DER Energy Management_
An integrated DER energy management strategy built on nucleolus estimate has been proposed in [25]. The paper has presented a strategy to quickly estimate the nucleolus by including the k-means clustering method, which uses a distinctive marginal allocation pattern as the clustering features. The nucleolus, a method of distributing these economic rewards, has been shown to guarantee the DERs' motivation to participate in coalitional games. The increase in computational time of nucleolus with increase in number of players imposes a hard limit on the system's scalability. A proportional random sampling approach has been proposed for performance assessment. The estimation performances of different clustering algorithms were compared after pairing with different clustering characteristics.
### _Transmission Expansion Planning_
A mechanism for the distribution of transmission expansion expenses among energy market players based on the core and nucleolus paradigms of coalitional games has been proposed in [26]. Using a coalitional game for a certain number of participants, the cost of transmission line expansion was divided among the participants. Transmission line expansion has been expected to ease transmission congestion, which is believed to be the transmission impediment. The proposed cost allocation was demonstrated through an illustrative case study consisting of an energy market model.
### _Small-signal Stability of Power Systems_
In order to determine which factors are most important for assessing a small-signal stability of a power system, the paper [27] has examined several coalitional game theoretic models. Identifying the factors that have the greatest impact on system stability will make it easier to operate and manage a power system economically in general since network operators and relevant parties would have to put less effort into network management, regulation, and modeling. The scarce resources might be properly allocated and prioritized after determining which factors have the most influence on small-signal stability concerns. In contrast to prior methods, a priority ranking algorithm based on a coalitional game theoretic approach has the benefit of taking into account both the individual and all potential cumulative impacts of players. In this paper [27], the most significant players have been found using a multi-level strategy that accounted for the network power flow, small-signal stability characteristics, and individual and coalitional behaviors of players.
### _Pre-positioning of Movable Energy Resources_
To enhance the resilience of the power supply, a strategy based on the combination of graph theory and coalitional game theory to determine pre-positioning locations of movable energy resources (MERs) has been presented in [28]. The proposed method has been used to identify MERs' pre-positioning sites using weather forecast information to guarantee the quickest response feasible in the case of a natural disaster. Numerous line outage scenarios were created in the paper using the distribution lines' fragility curves, and a scenario reduction approach was then used to generate a collection of reduced line outage scenarios. For each reduced line outage scenario, the power distribution network was reconfigured using a graph theory-based approach. The amount of curtailed critical loads and the probability of each reduced line outage scenario have been used to determine the expected load curtailment (ELC) associated with each site. The Dijkstra shortest path method was used to calculate the best route to travel to each site. Using the ELC and the optimal route, the MER dispatch cost was calculated. The potential sites for MER pre-positioning were then selected using the MER dispatch costs. The size of the MER at each prospective location has been determined using the Shapley value. A 33-node distribution system test bed has been used to verify the proposed method for pre-positioning of MERs.
### _Microgrid Cooperative Energy Management_
For the cooperative energy trading management of microgrids, a hybrid Energy Management System (EMS) framework based on coalitional games has been developed in [6]. The paper has determined the energy trading plan that aims to minimize network power losses by using an efficient and robust non-linear model and a characteristic function that ensures the grand coalition is the right coalition structure. A collaborative plan for a moving horizon has been created using the scheduling procedure. The monitoring procedure
has employed performance thresholds to track plan implementation and detect disturbances in order to improve the effectiveness of microgrids. To protect the privacy of microgrids, each regional EMS shares its state summaries with the centralized EMS, which describe its power surplus or deficit for each time frame of the planning horizon. The incentive allocation mechanism distributed the power loss incurred as a result of the energy traded during the implementation of the plan using a core-based paradigm. After comparing the proposed framework with a coalition forming game-based methodology reported in the literature, it has been deduced that the task of managing the power exchange between microgrids must be represented as a canonical coalitional game rather than a coalition forming game.
### _Peer-to-Peer Energy Trading_
A framework based on coalitional game theory has been proposed in [29] to speed up the development of reliable trading strategies and to motivate participants. Depending on factors like region, peak energy consumption, peak energy generation, and price mechanism, the proposed trading methodology has been designed to provide different preferences at each timespan. The paper has formulated a grand coalition with a goal to enhance the total social welfare and make sure that all players of the game benefit from the policy. Due to the fact that neither peer wishes to initiate a merge or split with respect to its current state, the grand coalition formed by the coalitional game in the paper satisfied Nash equilibrium criteria. The results in [29] demonstrated that utilizing the optimal preference for each timespan was preferable than using a single preference for the whole day. The paper has also performed an economic analysis to determine how to fairly allocate the total payoff among all players of the game. When applying the proposed strategy, customers save money on their energy bills and prosumers earn high profit, according to the economic analysis performed in the paper.
## 4 Reserve Allocation in Active Distribution Systems: a Case Study
A case study on reserve allocation in active distribution systems, based on [30], is presented to explain the applicability, challenges, and limitations of coalitional game theory in power and energy systems. The layout of the coalitional game theory-based approach proposed in [30] is shown in Fig. 1. A 33-node distribution system with total active and reactive power loads of 3715 kW and 2300 kVAr, respectively, is considered for the case study.
To analyze the scalability of coalitional game theory-based approaches, two different types of characteristic functions (worthiness index and power loss reduction) are computed by varying the number of DERs (players) from 2 to 8. The equivalent Shapley value is computed for each scenario, and distribution factors are then determined for reserve allocation among DERs. Table I shows the number of DERs, their locations, and the execution time when the study is conducted on a 64-bit Intel i5 personal computer with a processor speed of 3.15 GHz, 8 GB of RAM, and the Windows operating system. Similarly, Fig. 2 plots the execution time as the number of DERs increases. The figure shows that the execution time increases exponentially with the number of DERs (i.e., the players of the coalitional game), demonstrating that coalitional game theory-based approaches scale poorly. Coalitional game theory-based approaches are, therefore, most applicable to power system planning problems, where execution time is not a major concern, rather than to power system operations and control problems.
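The exponential growth observed in the case study follows directly from the coalition enumeration in Eq. (3): with \(n\) players there are \(2^{n}\) coalitions whose characteristic functions must be evaluated. The short sketch below simply counts those evaluations for the player counts used in the case study.

```python
# number of characteristic-function evaluations needed by a brute-force Shapley computation
for n_der in range(2, 9):
    coalitions = 2 ** n_der          # every subset of the n_der players
    per_player = 2 ** (n_der - 1)    # coalitions enumerated per player in Eq. (3)
    print(n_der, coalitions, n_der * per_player)
```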
Fig. 1: Layout of the Coalitional Game Theoretic Model for Reserve Allocation [30]
Fig. 2: Plot of execution time versus number of DERs
## 5 Conclusion and Future Research Directions
This paper has presented a review of coalitional game theory applications in power and energy systems. The paper started with the basic introduction to coalitional game theory along with advantages and limitations of various solution concepts of coalitional games including the core, the nucleolus, and the Shapley value. The various applications of coalitional game theory were, then, presented. A case study of reserve allocation in active distribution systems was presented to point out applicability, challenges, and limitations of coalitional game theory in power and energy systems.
In the case study, the execution time was computed while increasing the number of players. The results showed that the execution time increases exponentially as the number of players in the coalitional game grows. It was, therefore, deduced that coalitional game theory is applicable to power system operations and control problems only when the number of players is small. On the other hand, coalitional game theory-based applications are still viable for power system planning problems, where execution time is not a prime concern. The consideration of the marginal contribution of each player while distributing the overall reward among all players keeps coalitional game theoretic techniques based on Shapley values attractive.
Since the existing approaches for computing the solutions of coalitional games are not scalable, further research is needed to develop approaches that are computationally efficient yet sufficiently accurate. Because most solution concepts are based on enumerating all possible coalitions and computing the corresponding characteristic functions, the number of coalitions grows exponentially with the number of players. Moreover, some coalitions might be more impactful and significant than others. Based on this, approximations can be made while determining the solutions, which can help reduce the computational time of solving a coalitional game.
|
2308.13457 | Super FiboCatalan Numbers and their Lucas Analogues | Catalan observed in 1874 that the numbers $S(m,n) = \frac{(2m)! (2n)!}{m! n!
(m+n)!}$, now called the super Catalan numbers, are integers but there is still
no known combinatorial interpretation for them in general, although
interpretations have been given for the case $m=2$ and for $S(m, m+s)$ for $0
\leq s \leq 4$. In this paper, we define the super FiboCatalan numbers
$S(m,n)_F = \frac{F_{2m}! F_{2n}!}{F_m! F_n! F_{m+n}!}$ and the generalized
FiboCatalan numbers $J_{r,F} \frac{F_{2n}!}{F_n! F_{n+r+1}!}$ where $J_{r,F} =
\frac{F_{2r+1}!}{F_r!}$. In addition, we give Lucas analogues for both of these
numbers and use a result of Sagan and Tirrell to prove that the Lucas analogues
are polynomials with non-negative integer coefficients which in turn proves
that the super FiboCatalan numbers and the generalized FiboCatalan numbers are
integers. | Kendra Killpatrick | 2023-08-25T16:04:27Z | http://arxiv.org/abs/2308.13457v2 | # Super FiboCatalan Numbers and Generalized FiboCatalan Numbers
###### Abstract
Catalan observed in 1874 that the numbers \(S(m,n)=\frac{(2m)!(2n)!}{m!n!(m+n)!}\), now called the super Catalan numbers, are integers but there is still no known combinatorial interpretation for them in general, although interpretations have been given for the case \(m=2\) and for \(S(m,m+s)\) for \(0\leq s\leq 3\). In this paper, we define the super FiboCatalan numbers \(S(m,n)_{F}=\frac{F_{2m}!F_{2n}!}{F_{m}!F_{n}!F_{m+n}!}\) and prove they are integers for \(m=1\) and \(m=2\). In addition, we prove that \(S_{(}m,m+s)_{F}\) is an integer for \(0\leq s\leq 4\).
## 1 Background and Definitions
The well-known Fibonacci sequence is defined recursively by \(F_{n}=F_{n-1}+F_{n-2}\) with initial conditions \(F_{0}=0\) and \(F_{1}=1\). The \(n\)th Fibonacci number, \(F_{n}\), counts the number of tilings of a strip of length \(n-1\) with squares of length \(1\) and dominoes of length \(2\).
The fibonomial coefficients, an analogue of the binomial coefficients, are defined as
\[\binom{n}{k}_{F}=\frac{F_{n}!}{F_{k}!F_{n-k}!}\]
where \(F_{n}!=F_{n}F_{n-1}\cdots F_{2}F_{1}\).
In 2008, Benjamin and Plott [2] gave a combinatorial proof that the fibonomial coefficients are integers, using a notion of staggered tilings. In 2010, Sagan and Savage [12] gave a combinatorial interpretation of the coefficients in terms of tilings associated to paths in a \(k\) x \((n-k)\) rectangle.
A second famous sequence, the Catalan sequence, is defined recursively by \(C_{n}=C_{0}C_{n-1}+C_{1}C_{n-2}+\cdots+C_{n-2}C_{1}+C_{n-1}C_{0}\) with initial conditions \(C_{0}=1\) and \(C_{1}=1\). The Catalan numbers also have an explicit formula given by
\[C_{n}=\frac{1}{n+1}\binom{2n}{n}.\]
The FiboCatalan number \(C_{n,F}\), first given by Lou Shapiro, is defined as
\[C_{n,F}=\frac{1}{F_{n+1}}\binom{2n}{n}_{F}.\]
Shapiro posed the question about whether these numbers are integers and, if so, whether there is a combinatorial interpretation for them. The numbers are known to be integers since
\[C_{n,F}=\binom{2n-1}{n-2}_{F}+\binom{2n-1}{n-1}_{F}\]
but there is still no known combinatorial interpretation for them.
Since the Catalan numbers,
\[\frac{(2n)!}{n!(n+1)!}\]
are integers, one might wonder if the numbers
\[\frac{(2n)!}{n!(n+2)!}\]
are integers. Interestingly, these numbers are not necessarily integers but the numbers given by
\[6\frac{(2n)!}{n!(n+2)!}\]
do form an integer sequence. In 1992, Gessel [7] showed that, in fact, the generalized Catalan numbers
\[J_{r}\frac{(2n)!}{n!(n+r+1)!}\]
are integers when \(J_{r}\) is chosen to be \((2r+1)!/r!\).
In 2005, Gessel and Xin [8] gave a combinatorial interpretation of these numbers for \(r=1\) and proved
\[6\frac{(2n)!}{n!(n+2)!}=4C_{n}-C_{n+1}.\]
E. Catalan [4] observed as far back as 1874 that the numbers
\[S(m,n)=\frac{(2m)!(2n)!}{m!n!(m+n)!}\]
are integers, but there is no known combinatorial interpretation for them in general. Gessel [7] called these numbers the _super Catalan numbers_ since \(S(1,n)/2\)
gives the Catalan number \(C_{n}\). Note that \(S(2,n)/2=6\frac{(2n)!}{n!(n+2)!}\). Allen and Gheorghiciuc [1] have given a combinatorial interpretation for \(S(m,n)\) in the case \(m=2\), and Gheorghiciuc and Orelowitz [9] have given a combinatorial interpretation for \(T(m,n)=\frac{1}{2}S(m,n)\) for \(m=3\) and \(m=4\). Chen and Wang [5] have given an interpretation for \(S(m,m+s)\) for \(0\leq s\leq 3\).
In this paper, we define the _super FibCatalan numbers_
\[S(m,n)_{F}=\frac{F_{2m}!F_{2n}!}{F_{m}!F_{n}!F_{m+n}!}\]
and the _generalized FiboCatalan numbers_ as
\[J_{r,F}\frac{F_{2n}!}{F_{n}!F_{n+r+1}!}\]
where \(J_{r,F}=F_{2r+1}!/F_{r}!\). Note the following relationship between super FiboCatalan numbers and the generalized FiboCatalan numbers:
\[J_{m-1,F}\frac{F_{2n}!}{F_{n}!F_{n+m}!}=\frac{F_{2m-1}!F_{2n}!}{F_{m-1}!F_{n}! F_{n+m}!}=\frac{F_{m}}{F_{2m}}S(n,m)_{F}.\]
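As a quick sanity check on these definitions, the following Python sketch computes the Fibonacci factorials and \(S(m,n)_{F}\) exactly with integer arithmetic and verifies that the values are integers for the small cases tested; this is a numerical illustration only, not a proof.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_factorial(n):
    """F_n! = F_n * F_{n-1} * ... * F_1 (empty product for n = 0)."""
    out = 1
    for k in range(1, n + 1):
        out *= fib(k)
    return out

def super_fibocatalan(m, n):
    """S(m, n)_F = F_{2m}! F_{2n}! / (F_m! F_n! F_{m+n}!)."""
    return Fraction(fib_factorial(2 * m) * fib_factorial(2 * n),
                    fib_factorial(m) * fib_factorial(n) * fib_factorial(m + n))

# every value below has denominator 1, i.e. is an integer for these small cases
print(all(super_fibocatalan(m, n).denominator == 1 for m in range(1, 7) for n in range(1, 7)))
```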
This paper explores the generalized FiboCatalan numbers and the super FiboCatalan numbers. In Section 2, we prove that the generalized FiboCatalan numbers are integers for \(r=1\) (and trivially show they are integers for \(r=0\)). In Section 3, we prove that the super FiboCatalan numbers are integers for \(m=1\) and \(m=2\) and prove that \(S(m,m+s)_{F}\) is an integer for \(0\leq s\leq 4\). In Section 4, we make several conjectures and state open problems in this area.
## 2 The generalized FiboCatalan numbers
The generalized FiboCatalan number for \(r=0\) is equal to \(S(1,n)_{F}\) is equal to \(C_{n,F}\):
\[J_{0,F}\frac{F_{2n}!}{F_{n}!F_{n+0+1}!}=\frac{F_{1}!}{F_{0}!}\frac{F_{2n}!}{F _{n}!F_{n+1}!}=C_{n,F}=S(1,n)_{F}.\]
The generalized FiboCatalan number for \(r=1\) is:
\[J_{1,F}\frac{F_{2n}!}{F_{n}!F_{n+1+1}!}=\frac{F_{3}!}{F_{1}!}\frac{F_{2n}!}{F _{n}!F_{n+2}!}=2\frac{F_{2n}!}{F_{n}!F_{n+2}!}=\frac{1}{3}S(2,n)_{F}.\]
We will prove that these numbers are always integers.
**Lemma 1**.: \[F_{2n}F_{n+2}-F_{2n+2}F_{n}=(-1)^{n}F_{n}.\]
Proof.: This is a fairly well-known result for the Fibonacci numbers and the proof is a straightforward tail-swapping argument similar to those found in Section 1.2, Chapter 1 of Proofs That Really Count by Benjamin and Quinn [3]. For a more algebraic argument, see Theorem 1.2 (with \(q=1\)) in a paper by Garrett [6].
In general, we have
**Lemma 2**.: \[F_{kn}F_{n+2}-F_{kn+2}F_{n}=(-1)^{n}F_{(k-1)n}.\]
**Theorem 1**.: \[F_{2n+1}F_{2n}C_{n,F}-F_{n+1}F_{n}C_{n+1,F}=(-1)^{n}F_{n}F_{2n+1}\frac{F_{2n}!}{F _{n+2}!F_{n}!}.\] (1)
Proof.: \[F_{2n+1}F_{2n}C_{n,F}-F_{n+1}F_{n}C_{n+1,F} =\frac{F_{2n+1}F_{2n}F_{2n}!}{F_{n+1}F_{n}!F_{n}!}-\frac{F_{n+1}F _{n}F_{2n+2}!}{F_{n+2}F_{n+1}!F_{n+1}!}\] \[=F_{2n+1}F_{2n}F_{n+2}\frac{F_{2n}!}{F_{n+2}!F_{n}!}-F_{2n+2}F_{2 n+1}F_{n}\frac{F_{2n}!}{F_{n+2}!F_{n}!}\] \[=F_{2n+1}[F_{2n}F_{n+2}-F_{2n+2}F_{n}]\frac{F_{2n}!}{F_{n+2}!F_{n }!}\] \[=F_{2n+1}(-1)^{n}F_{n}\frac{F_{2n}!}{F_{n+2}!F_{n}!}\]
**Corollary 1**.: _For \(n\geq 1\),_
\[F_{2n+1}\frac{F_{2n}!}{F_{n+2}!F_{n}!}=\frac{1}{F_{n+2}}{2n+1\choose n}_{F}\]
_is an integer._
Given that the FiboCatalan number is defined as
\[C_{n,F}=\frac{1}{F_{n+1}}{2n\choose n}_{F},\]
the Corollary states that the odd FiboCatalan number
\[\frac{1}{F_{n+2}}{2n+1\choose n}_{F}\]
is always an integer. This is not true for the usual binomial expression
\[\frac{1}{n+2}{2n+1\choose n}\]
since this number is a fraction when \(n=2\), for example.
Proof.: It is well known that \(F_{2n}=F_{n}F_{n+1}+F_{n}F_{n-1}\); thus the left side of Equation (1) from Theorem 1 is equal to
\[F_{2n+1}[F_{n}F_{n+1}+F_{n}F_{n-1}]C_{n,F}-F_{n+1}F_{n}C_{n+1,F}\]
and is therefore divisible by \(F_{n}\). Using this expression as the left side and dividing both sides of Equation (1) by \(F_{n}\) gives
\[F_{2n+1}F_{n+1}C_{n,F}+F_{2n+1}F_{n-1}C_{n,F}-F_{n+1}C_{n+1,F} =(-1)^{n}F_{2n+1}\frac{F_{2n}!}{F_{n+2}!F_{n}!}\] \[=(-1)^{n}\frac{1}{F_{n+2}}\binom{2n+1}{n}_{F}.\]
Since the left side of this equation is clearly an integer, we have the result.
We can also rewrite the expression on the right side of Equation (1) as:
\[(-1)^{n}F_{2n+1}\frac{F_{2n}!}{F_{n+2}!F_{n}!}=(-1)^{n}F_{2n+1}\frac{1}{F_{n+2 }}C_{n,F}.\]
A well-known fact about the Fibonacci numbers is that \(\gcd(F_{n},F_{m})=F_{\gcd(m,n)}\). Thus \(\gcd(F_{2n+1},F_{n+2})=F_{\gcd(2n+1,n+2)}\). Since any common divisor of \(2n+1\) and \(n+2\) divides \(2(n+2)-(2n+1)=3\), we have \(\gcd(2n+1,n+2)=1\) or \(3\). If \(\gcd(2n+1,n+2)=1\), then \(\gcd(F_{2n+1},F_{n+2})=F_{1}=1\) and so \(F_{n+2}\) divides \(C_{n,F}\). If \(\gcd(2n+1,n+2)=3\), then \(\gcd(F_{2n+1},F_{n+2})=F_{3}=2\) and so \(F_{n+2}\) divides \(2C_{n,F}\).
**Corollary 2**.: _For \(n\geq 1\), the generalized FiboCatalan number for \(r=1\),_
\[\frac{2F_{2n}!}{F_{n+2}!F_{n}!}=\frac{1}{F_{n+2}}2C_{n,F}\]
_is an integer._
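As a numerical illustration of Corollaries 1 and 2 (reusing `fib` and `fib_factorial` from the sketch in Section 1), the following lines check the two divisibility claims for small \(n\), and also confirm that the analogous classical expression \(\frac{1}{n+2}\binom{2n+1}{n}\) already fails to be an integer at \(n=2\):

```python
from math import comb
from fractions import Fraction

def fibonomial(n, k):
    """Fibonomial coefficient (n choose k)_F, built from fib_factorial in the Section 1 sketch."""
    return fib_factorial(n) // (fib_factorial(k) * fib_factorial(n - k))

for n in range(1, 10):
    cor1 = Fraction(fibonomial(2 * n + 1, n), fib(n + 2))                 # Corollary 1: odd FiboCatalan number
    cor2 = Fraction(2 * fibonomial(2 * n, n), fib(n + 1) * fib(n + 2))    # Corollary 2: (1/F_{n+2}) * 2 C_{n,F}
    classical = Fraction(comb(2 * n + 1, n), n + 2)
    print(n, cor1.denominator == 1, cor2.denominator == 1, classical.denominator == 1)
# the first two columns are always True; the last is already False at n = 2
```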
## 3 The super FiboCatalan numbers
When \(m=1\), the super FiboCatalan numbers reduce to the FiboCatalan numbers and are known to be integers.
\[S(1,n)_{F}=\frac{F_{2}!F_{(2n)}!}{F_{1}!F_{n}!F_{(n+1)}!}=\frac{1}{F_{n+1}} \frac{F_{2n}!}{F_{n}!F_{n}!}=C_{n,F}\]
When \(m=2\), we have
\[S(2,n)_{F}=\frac{F_{4}!F_{2n}!}{F_{2}!F_{n}!F_{(n+2)}!}=\frac{6F_{2n}!}{F_{n}!F_{(n+2)}!}.\]
**Theorem 2**.: _For \(n\geq 1\), the super FiboCatalan number is an integer for \(m=2\). I.e.,_
\[S(2,n)_{F}=\frac{6F_{2n}!}{F_{n}!F_{n+2}!}\]
_is an integer._
Proof.: From Corollary 2 in the preceding section, we have that
\[\frac{2F_{2n}!}{F_{n}!F_{n+2}!}\]
is an integer, thus we have the result.
**Theorem 3**.: \(S(m,m+s)_{F}\) _is an integer for \(0\leq s\leq 4\)._
Proof.: When \(s=0\) we have:
\[S(m,m)_{F}=\frac{F_{2m}!F_{2m}!}{F_{m}!F_{m}!F_{2m}!}=\binom{2m}{m}_{F}\]
which is an integer.
When \(s=1\),
\[S(m,m+1)_{F}=\frac{F_{2m}!F_{2m+2}!}{F_{m}!F_{m+1}!F_{2m+1}!}=\frac{F_{2m+2}F_{ 2m}!}{F_{m+1}!F_{m}!}=F_{2m+2}C_{m,F}\]
which is an integer.
When \(s=2\),
\[S(m,m+2)_{F} =\frac{F_{2m}!F_{2m+4}!}{F_{m}!F_{m+2}!F_{2m+2}!}\] \[=\frac{F_{2m}!F_{2m+4}F_{2m+3}F_{2m+2}!}{F_{m+2}F_{m+1}F_{m}!F_{m}!F_{2m+2}!}\] \[=\frac{1}{F_{m+1}}\frac{F_{2m}!}{F_{m}!F_{m}!}\frac{F_{2m+4}}{F_ {m+2}}F_{2m+3}\] \[=F_{2m+3}C_{m,F}\frac{F_{2(m+2)}}{F_{m+2}}.\]
Since \(F_{2n}=F_{n}F_{n-1}+F_{n}F_{n+1}\) then \(F_{n}\) divides \(F_{2n}\) so \(F_{m+2}\) divides \(F_{2(m+2)}\). Therefore,
\[F_{2m+3}C_{m,F}\frac{F_{2(m+2)}}{F_{m+2}}\]
is an integer.
When \(s=3\),
\[S(m,m+3)_{F} =\frac{F_{2m}!F_{2(m+3)}!}{F_{m}!F_{m+3}!F_{2m+3}!}\] \[=\frac{F_{2m}!F_{2m+6}F_{2m+5}F_{2m+4}F_{2m+3}!}{F_{m}!F_{m}!F_{m +1}F_{m+2}F_{m+3}F_{2m+3}!}\] \[=C_{m,F}\frac{F_{2m+6}}{F_{m+3}}\frac{F_{2m+4}}{F_{m+2}}F_{2m+5}\]
which again is an integer since \(F_{n}\) divides \(F_{2n}\).
When \(s=4\),
\[S(m,m+4)_{F} =\frac{F_{2m}!F_{2(m+4)}!}{F_{m}!F_{m+4}!F_{2m+4}!}\] \[=\frac{F_{2m}!F_{2m+8}F_{2m+7}F_{2m+6}F_{2m+5}F_{2m+4}!}{F_{m}!F_{m}!F_{m+1}F_{m+2}F_{m+3}F_{m+4}F_{2m+4}!}\] \[=\frac{1}{F_{m+2}}C_{m,F}\frac{F_{2m+8}}{F_{m+4}}\frac{F_{2m+6}}{ F_{m+3}}F_{2m+7}F_{2m+5}.\]
Since \(F_{n}\) divides \(F_{2n}\), we must show that
\[\frac{1}{F_{m+2}}C_{m,F}F_{2m+7}F_{2m+5}\]
is an integer. From Corollary 2, we know that
\[\frac{1}{F_{m+2}}2C_{m,F}\]
is an integer. If \(m\equiv 0\) mod \(3\), then \(F_{m+2}\) is odd, so \(F_{m+2}\) must divide \(C_{m,F}\). If \(m\equiv 1\) mod \(3\), then \(F_{2m+7}\) is even, thus \(F_{m+2}\) divides \(C_{m,F}F_{2m+7}\). If \(m\equiv 2\) mod \(3\), then \(F_{2m+5}\) is even, thus \(F_{m+2}\) divides \(C_{m,F}F_{2m+5}\).
## 4 Open Problems
It remains an open problem to determine if the super FiboCatalan numbers are integers for all values of \(m\) and \(n\) and if the generalized FiboCatalan numbers are integers for all values of \(n\) and \(r\). The problem of finding a combinatorial interpretation of the super FiboCatalan numbers remains an interesting open problem, yet will likely prove challenging given that there is a combinatorial interpretation for the super Catalan numbers in only a handful of cases.
In addition, the super Catalan numbers satisfy a number of interesting binomial identities, such as this identity of von Szily (1894), which can be found in [7], Eq. (29), p. 11:
\[S(m,n)=\sum_{k\in\mathbb{Z}}(-1)^{k}\binom{2m}{m+k}\binom{2n}{n+k}.\]
Mikic [10] recently proved the following alternating convolution formula for the super Catalan numbers:
\[\sum_{k=0}^{2n}(-1)^{k}\binom{2n}{k}S(k,l)S(2n-k,l)=S(n,l)S(n+l,n)\]
for all non-negative integers \(n\) and \(l\). Mikic [11] also proved a similar identity for the Catalan numbers:
\[\sum_{k=0}^{2n}(-1)^{k}\binom{2n}{k}C_{k}C_{2n-k}=C_{n}\binom{2n}{n}.\]
We conjecture that many of these identities have analogues for the super FiboCatalan numbers and are interested in exploring these analogues in further research.
|
2306.05941 | Rigidity of the free factor complex | We establish the following non-abelian analogue of the Fundamental Theorem of
Projective Geometry: the natural map from ${\rm{Aut}}(F_n)$ to the automorphism
group of the free-factor complex $\mathcal{AF}_n$ is an isomorphism. We also
prove the corresponding theorem for the action of ${\rm{Out}}(F_n)$ on the
complex of conjugacy classes of free factors. | Mladen Bestvina, Martin R Bridson | 2023-06-09T14:58:24Z | http://arxiv.org/abs/2306.05941v1 | # Rigidity of the free factor complex
###### Abstract.
We establish the following non-abelian analogue of the Fundamental Theorem of Projective Geometry: the natural map from \(\operatorname{Aut}(F_{n})\) to the automorphism group of the free-factor complex \(\mathcal{AF}_{n}\) is an isomorphism. We also prove the corresponding theorem for the action of \(\operatorname{Out}(F_{n})\) on the complex of conjugacy classes of free factors.
## 1. Introduction
Our purpose in this article is to describe the symmetries of the complex of free factors \(\mathcal{AF}_{n}\) associated to a finitely generated free group \(F_{n}\). We shall prove that the natural map from \(\operatorname{Aut}(F_{n})\) to the automorphism group of \(\mathcal{AF}_{n}\) is an isomorphism. We shall also prove the corresponding theorem for the action of \(\operatorname{Out}(F_{n})\) on the complex of conjugacy classes of free factors. These results can be viewed as non-abelian analogues of the Fundamental Theorem of Projective Geometry, as we shall now explain.
The Fundamental Theorem of Projective Geometry [10] establishes that, for any field \(K\), the only bijections of a projective space over \(K\) that preserve incidence relations are the natural ones, i.e. combinations of field automorphisms and projective-linear maps. This can be rephrased in terms of the _Tits building_\(\operatorname{Tits}_{n}^{<}(K)\), which is the poset of proper non-trivial subspaces of \(K^{n}\). If \(K=\mathbb{Q}\) then there are no field automorphisms and the theorem tells us that the natural map \(\operatorname{PGL}(n,\mathbb{Q})\to\operatorname{Aut}(\operatorname{Tits}_{n} ^{<}(\mathbb{Q}))\) is an isomorphism provided \(n\geq 3\). The geometric realisation \(\operatorname{Tits}_{n}(\mathbb{Q})\) of this poset has an additional symmetry: its group of simplicial automorphisms is \(\operatorname{PGL}(n,\mathbb{Q})\rtimes\mathbb{Z}/2\), with the generator of \(\mathbb{Z}/2\) swapping each vertex \(V\) with \(V^{\perp}\), where the orthogonal complement is taken with respect to an inner product on \(\mathbb{Q}^{n}\). (This is an anti-isomorphism of the poset \(\operatorname{Tits}_{n}^{<}(\mathbb{Q})\).)
The inclusion \(\mathbb{Z}^{n}\hookrightarrow\mathbb{Q}^{n}\) induces an isomorphism \(\mathcal{D}_{n}(\mathbb{Z})\to\operatorname{Tits}_{n}^{<}(\mathbb{Q})\), where \(\mathcal{D}_{n}(\mathbb{Z})\) is the poset of proper direct factors of \(\mathbb{Z}^{n}\), ordered by inclusion. Passing from the free abelian group \(\mathbb{Z}^{n}\) to the non-abelian free group \(F_{n}\), the natural analogue of \(\mathcal{D}_{n}(\mathbb{Z})\) is the poset of non-trivial proper _free factors_ of \(F_{n}\), ordered by inclusion. We shall work with the geometric realisation of this poset, which we denote by \(\mathcal{AF}_{n}\). This complex was introduced by Allen Hatcher and Karen Vogtmann [14, 15] who used it to study the cohomology of \(\operatorname{Aut}(F_{n})\); they proved, in analogy with the Solomon-Tits theorem for \(\mathrm{Tits}_{n}(\mathbb{Q})\), that \(\mathcal{AF}_{n}\) has the homotopy type of a wedge of spheres of dimension \(n-2\).
As in the classical case, one has to assume \(n\geq 3\) in order to obtain the desired rigidity for the automorphism group of this complex.
**Theorem 1.1**.: _For \(n\geq 3\) the natural homomorphism \(\mathrm{Aut}(F_{n})\to\mathrm{Aut}(\mathcal{AF}_{n})\) is an isomorphism._
Note in particular that every automorphism of \(\mathcal{AF}_{n}\) preserves the type of each vertex, i.e. the rank of each free factor; there is no equivalent of the involution \(V\leftrightarrow V^{\perp}\) of \(\mathrm{Tits}_{n}(\mathbb{Q})\).
We also prove a version of the above theorem for \(\mathrm{Out}(F_{n})\). In this case, the natural analogue of \(\mathrm{Tits}_{n}(\mathbb{Q})\) is the geometric realisation \(\mathcal{OF}_{n}\) of the poset of conjugacy classes of non-trivial proper free factors in \(F_{n}\), i.e. the quotient \(\mathcal{AF}_{n}/\mathrm{Inn}(F_{n})\). The large-scale geometry of \(\mathcal{OF}_{n}\) was elucidated by Bestvina and Feighn [1], who proved that it is a space of infinite diameter that is hyperbolic in the sense of Gromov.
**Theorem 1.2**.: _For \(n\geq 3\) the natural homomorphism \(\mathrm{Out}(F_{n})\to\mathrm{Aut}(\mathcal{OF}_{n})\) is an isomorphism._
A key similarity between \(\mathrm{Tits}_{n}(\mathbb{Q})\), on the one hand, and \(\mathcal{AF}_{n}\) and \(\mathcal{OF}_{n}\) on the other, is that each is composed of _standard apartments_. In the case of \(\mathrm{Tits}_{n}(\mathbb{Q})\), such an apartment is the full subcomplex whose vertices represent the subspaces spanned by the proper, non-empty subsets of a basis for \(\mathbb{Q}^{n}\). A standard apartment in \(\mathcal{AF}_{n}\) is defined in much the same way, taking the free factors spanned by the non-empty proper subsets of a basis. In each case, an apartment is simplicially isomorphic to the barycentric subdivision of the boundary of an \((n-1)\)-simplex.
There are also important differences between \(\mathrm{Tits}_{n}(\mathbb{Q})\) and \(\mathcal{AF}_{n}\). The former is a spherical building of diameter \(3\), while \(\mathcal{AF}_{n}\) has infinite diameter. From a technical point of view, a major difficulty in understanding the automorphisms of \(\mathcal{AF}_{n}\) comes from the fact that, in contrast to \(\mathrm{Tits}_{n}(\mathbb{Q})\), there are many "fake apartments" in \(\mathcal{AF}_{n}\), i.e. subcomplexes abstractly isomorphic to the barycentric subdivision of the boundary of an \((n-1)\)-simplex that are not standard apartments (Section 7).
The first stage in our proof of Theorem 1.1 involves establishing another difference, to which we have already alluded: every simplicial automorphism of \(\mathcal{AF}_{n}\) preserves the partial ordering on the vertex set, i.e. the rank of free factors; this is achieved in Section 3.
Our aim in the second stage of the proof (Section 4) is to show that standard apartments can be recognized intrinsically: they can be distinguished from fake apartments by metric properties of their neighbourhoods. From this it
follows that the set of standard apartments is preserved by all automorphisms of \(\mathcal{AF}_{n}\). The key technical result in this part of the proof is the _Antipode Lemma_ (Theorem 4.5), which provides an intrinsic (metric) characterisation of pairs of vertices \(A,L\) such that \(A*L=F_{n}\).
In the third stage of the proof, working outwards from a fixed standard apartment, we consider adjacent apartments that have large overlaps. A key role is played in this part of the argument by _sticks_ - certain rank \(1\) factors that, when gathered in appropriate families, provide rigid, highly-symmetric frames controlling large overlaps between apartments (see Section 5.1).
With these tools in hand, the final step in our proof is straightforward: \(\operatorname{Aut}(F_{n})\) acts transitively on the set of standard apartments, preserving the rank of vertices, so by composing an arbitrary automorphism \(\Phi\) of \(\mathcal{AF}_{n}\) with a suitable element of \(\operatorname{Aut}(F_{n})\), we may assume that \(\Phi\) fixes a standard apartment; we argue that one can compose with a further element of \(\operatorname{Aut}(F_{n})\) to ensure that \(\Phi\) fixes the apartment and all of the adjacent sticks pointwise; this forces \(\Phi\) to fix the neighbouring apartments and their sticks pointwise (Proposition 5.9), and by propagation \(\Phi\) is forced to be the identity everywhere.
Our proof of Theorem 1.2 follows the same outline but there are some additional difficulties to address, notably that it is harder to recognise standard apartments, which are no longer uniquely determined by their rank \(1\) vertices.
The parallel that we focussed on to motivate Theorem 1.1 compared \(\mathcal{AF}_{n}\) to \(|\mathcal{D}_{n}(\mathbb{Z})|\cong\operatorname{Tits}_{n}(\mathbb{Q})\). This is a facet of the powerful \(3\)-way analogy between automorphism groups of free groups, lattices such as \(\operatorname{SL}(n,\mathbb{Z})\), and mapping class groups of surfaces of finite type [1, 2]. In this grand analogy, the object corresponding to \(\mathcal{AF}_{n}\) and \(\mathcal{OF}_{n}\) in the setting of mapping class groups is the curve complex [1]. Ivanov [14] proved the analogue of Theorems 1.1 and 1.2 in this setting: the natural map from the extended mapping class group of a surface of finite type to the group of simplicial automorphisms of the corresponding curve complex is an isomorphism (with some exceptions for small surfaces - cf. [13], [15]).
Ivanov used his theorem to deduce that the extended mapping class group of a surface of finite type is equal to its own abstract commensurator (with the same exceptions for small surfaces). In connection with this, we should comment on the fact that \(\operatorname{Aut}(\mathcal{AF}_{n})\) is \(\operatorname{Aut}(F_{n})\), whereas \(\operatorname{Aut}(\mathcal{D}_{n}(\mathbb{Z}))\) is \(\operatorname{PGL}(n,\mathbb{Q})\) not \(\operatorname{PGL}(n,\mathbb{Z})\). This difference can be interpreted as a manifestation of the fact that \(\operatorname{GL}(n,\mathbb{Q})\) is the abstract commensurator of \(\operatorname{GL}(n,\mathbb{Z})\). In contrast, commensurations of \(\operatorname{Aut}(F_{n})\) (i.e. isomorphisms between subgroups of finite index) are as restricted as they are in the mapping class group case: Bridson and Wade [2] prove that the action of \(\operatorname{Aut}(F_{n})\) on \(\mathcal{AF}_{n}\) extends to a faithful action by \(\operatorname{Comm}(\operatorname{Aut}(F_{n}))\), and it then follows from Theorem 1.1 that \(\operatorname{Aut}(F_{n})=\operatorname{Comm}(\operatorname{Aut}(F_{n}))\). The corresponding result for \(\operatorname{Out}(F_{n})\) is due to
Farb and Handel [10] for \(n\geq 4\) and to Horbez and Wade [11] for \(n\geq 3\) (with proofs that do not follow the template we have described).
Theorems 1.1 and 1.2 also extend the range of faithful geometric models for \(\operatorname{Aut}(F_{n})\) and \(\operatorname{Out}(F_{n})\) - by which we mean spaces \(X\) where a natural action induces an isomorphism \(\operatorname{Aut}(F_{n})\to\operatorname{Aut}(X)\) or \(\operatorname{Out}(F_{n})\to\operatorname{Aut}(X)\). The first such rigidity result was proved by Bridson and Vogtmann, who showed that \(\operatorname{Out}(F_{n})\) is the group of simplicial automorphisms of the spine of Outer space [1]. Other such spaces \(X\) include the simplicial closure of Outer space [1], the free and cyclic splitting complexes [1, 11], and Outer space endowed with the Lipschitz metric [10]. This last result, due to Francaviglia and Martino, is the natural analogue of Royden's theorem on the isometries of Teichmuller space [12], which was reproved by Ivanov [13] using the rigidity of the curve complex (the analogue of Theorem 1.1), with an argument modelled on the proof of Mostow rigidity in higher rank [13], which in turn relies on understanding the automorphisms of spherical buildings such as \(\operatorname{Tits}_{n}(\mathbb{R})\), which is where we began.
**Acknowledgements.** The first author gratefully acknowledges the support by the National Science Foundation under grant number DMS-1905720.
## 2. Background and Preliminaries
We shall assume that the reader is familiar with basic algebraic facts about free groups and their subgroups. For example, if \(L<F_{n}\) is a free factor and \(H<F_{n}\) then \(H\cap L\) is a free factor of \(H\); in particular any intersection of free factors in \(F_{n}\) is a free factor.
Throughout this paper we shall explore subgroups of free groups by working with labeled graphs that represent them. In this section we gather a range of facts that we shall need concerning these graphical representations.
### Labeled graphs and Stallings folds
We fix a basis \(\{a_{1},\dots,a_{n}\}\) for \(F_{n}\) and identify \(F_{n}\) with the fundamental group of the rose \(R_{n}\), which is a graph with one vertex \(v\) and \(n\) edges, directed and labeled \(a_{1},\dots,a_{n}\). The length of a word \(w\) in the letters \(a_{i}^{\pm 1}\) (equivalently, an edge path in \(R_{n}\)) will be denoted by \(|w|\). A _morphism_ of graphs is a continuous map that sends vertices to vertices and edges to edges. Formally, a _labeled graph_ is a morphism of graphs \(\lambda:\Gamma\to R_{n}\); in practice, we regard \(\Gamma\) as a graph in which the edges have been oriented and labeled by letters \(a_{i}\) so that \(\lambda\) preserves the orientation and labeling. Given \(H<F_{n}\), the _pointed_ \(\operatorname{core}_{*}(H)\) is the labeled graph obtained by restricting the (based) covering map \((\widetilde{R}_{n},*)/H\to(R_{n},v)\) to the minimal connected subgraph containing all the embedded loops and the basepoint, while the (unpointed) \(\operatorname{core}(H)\subset\operatorname{core}_{*}(H)\) is the minimal connected
subgraph containing all the embedded loops. \(H_{1}\) is conjugate to \(H_{2}\) if and only if \(\operatorname{core}(H_{1})=\operatorname{core}(H_{2})\).
If a pair of directed edges \(e,e^{\prime}\) in a labeled graph \(\Gamma\) have the same label and the same initial (resp. terminal) vertex, then the morphism of labeled graphs \(\Gamma\to\Gamma^{\prime}\) that identifies these edges and their terminal (resp. initial) vertices is called a Stallings _fold_, [10]. Any morphism of finite graphs can be expressed as a finite sequence of folds followed by an immersion (locally injective map). There is a unique graph \(\operatorname{fold}(\Gamma)\) obtained from \(\Gamma\) by a maximal sequence of folds; such a graph is said to be _fully folded_; its labeling map \(\operatorname{fold}(\Gamma)\to R_{n}\) is an immersion.
We say that a labeled graph with basepoint \((\Gamma,*)\)_supports_ a subgroup \(K<F_{n}\) if \(K\) is contained in the \(\pi_{1}\)-image of the labeling map \(\Gamma\to R_{n}\).
For labeled graphs \(\Gamma_{1}\) and \(\Gamma_{2}\) with basepoints, \(\Gamma_{1}\vee\Gamma_{2}\) will denote the labeled graph obtained from \(\Gamma_{1}\sqcup\Gamma_{2}\) by identifying the basepoints. We refer to \(\Gamma_{1}\vee\Gamma_{2}\) as the _wedge_ of \(\Gamma_{1}\) and \(\Gamma_{2}\). If \(\Gamma_{1}=\operatorname{core}_{*}(H_{1})\) and \(\Gamma_{2}=\operatorname{core}_{*}(H_{2})\), then \(\operatorname{fold}(\Gamma_{1}\vee\Gamma_{2})=\operatorname{core}_{*}\langle H _{1},H_{2}\rangle\). The following special case of this observation will be useful.
**Lemma 2.1**.: _A subgroup \(H<F_{n}\) of rank \(k\) is a free factor if and only if there is a labeled graph \(\Gamma\) of rank \((n-k)\) such that \(\operatorname{core}_{*}(H)\vee\Gamma\) folds to \(R_{n}\)._
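To make the folding procedure concrete, here is a minimal Python sketch (ours, not from the paper) in which a labeled graph is presented as a set of directed, labeled edges; the example illustrates Lemma 2.1 in rank 2, where \(\operatorname{core}_{*}(\langle a\rangle)\) wedged with a loop reading \(ab\) folds to the rose \(R_{2}\).

```python
def fold(edges):
    # `edges` is a set of (initial vertex, label, terminal vertex) triples. We repeatedly
    # take two distinct edges that share a label and an initial (or terminal) vertex and
    # identify their other endpoints (a Stallings fold), until the labeling is an immersion.
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        for e1 in list(edges):
            for e2 in list(edges):
                if e1 == e2:
                    continue
                (u1, a1, v1), (u2, a2, v2) = e1, e2
                if a1 == a2 and (u1 == u2 or v1 == v2):
                    keep, drop = (v1, v2) if u1 == u2 else (u1, u2)
                    edges = {(keep if x == drop else x, a, keep if y == drop else y)
                             for (x, a, y) in edges}
                    changed = True
                    break
            if changed:
                break
    return edges

# core_*(<a>) is a loop at the basepoint '*'; wedge on a loop reading ab (via a vertex p).
graph = {("*", "a", "*"),
         ("*", "a", "p"), ("p", "b", "*")}
print(fold(graph))  # one vertex with loops labeled a and b, i.e. the rose R_2
```

The same routine, applied to \(\operatorname{core}_{*}(H_{1})\vee\operatorname{core}_{*}(H_{2})\), computes \(\operatorname{core}_{*}\langle H_{1},H_{2}\rangle\).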
The following well-known lemma is proved by observing how a graph of rank \(1\) can fold into \(\operatorname{core}(L_{n-1})\).
**Lemma 2.2**.: _Let \(L_{n-1}=\langle a_{1},\dots,a_{n-1}\rangle\). Then \(L_{n-1}*\langle u\rangle=F_{n}\) if and only if \(u=xa_{n}^{\pm 1}y\) for some \(x,y\in L_{n-1}\)._
The following criterion for recognising factors of corank \(1\) will also be useful.
**Proposition 2.3**.: _If \(H<F_{n}\) is a free factor of rank \(n-1\), then either \(\operatorname{core}_{*}(H)\) embeds in the rose \(R_{n}\) or else the labeled graph obtained by identifying two of its vertices folds to \(R_{n}\)._
Proof.: Choose \(u\in F_{n}\) such that \(H*\langle u\rangle=F_{n}\). We add a loop labeled \(u\) to \(\Gamma=\operatorname{core}_{*}(H)\) at \(*\) and start folding to obtain \(R_{n}\). Initially, at every step an edge of the \(u\)-loop folds with an edge of \(\Gamma\). If the process stops before the whole loop is folded in, \(\operatorname{core}_{*}(H)\) embeds in \(R_{n}\). Otherwise, when the last edge of the \(u\)-loop is folded in, two vertices of \(\operatorname{core}_{*}(H)\) will be identified before the folding to \(R_{n}\) continues.
### Concerning visible factors and powers
The following standard facts will be used without further comment throughout the paper; the second is used in the proof of the lemma that follows.
* If \(H_{1}<H_{2}\) then there is a unique label-preserving immersion \(\operatorname{core}_{*}(H_{1})\to\operatorname{core}_{*}(H_{2})\) restricting to an immersion \(\operatorname{core}(H_{1})\to\operatorname{core}(H_{2})\).
* \(u\in F_{n}\) is conjugate into \(H<F_{n}\) if and only if there is an oriented loop in \(\operatorname{core}(H)\) whose label is a cyclically reduced word representing the conjugacy class of \(u\); if \(H\) is malnormal (e.g. a free factor) then there is a unique such loop (up to rotation).
We need an elaboration on the second point. To explain this, recall that the set \(L_{H}\) of reduced words representing the elements of a finitely generated subgroup \(H<F_{n}\) consists of the labels on the reduced edge paths in \(\operatorname{core}_{*}(H)\) that begin and end at the basepoint. This sits inside \(\operatorname{sub}(H)\), the set of labels on all reduced paths in \(\operatorname{core}_{*}(H)\), i.e. words \(v\) such that some \(uvw\) is a reduced word in \(L_{H}\). Define
\[\operatorname{E}_{a_{i}}(H)=\{n\mid a_{i}^{n}\in\operatorname{sub}(H)\}.\]
**Lemma 2.4**.: _If \(H\) is a free factor, then \(\operatorname{E}_{a_{i}}(H)\) is infinite if and only if \(\operatorname{core}_{*}(H)\) has a loop labeled by the basis element \(a_{i}\)._
Proof.: Any edge path in \(\operatorname{core}_{*}(H)\) whose length exceeds the number of vertices will contain a loop, and a shortest such loop along the path will be embedded. So if \(\operatorname{E}_{a_{i}}\) is infinite then \(\operatorname{core}_{*}(H)\) contains an embedded loop labeled \(a_{i}^{m}\) for some \(m\neq 0\). The label on an embedded loop is conjugate in \(F_{n}\) to an element of some basis of \(H\): choose a spanning tree of \(\operatorname{core}_{*}(H)\) containing all but one edge of the loop. Since \(H\) is a free factor, such an element is primitive in \(F_{n}\), so \(a_{i}^{m}\) is primitive and \(|m|=1\). The converse is obvious.
**Corollary 2.5**.: _Let \(A<F_{n}\) be a free factor of rank \(n-1\). Then, either \(\operatorname{E}_{a_{i}}(A)\) is finite for some \(i\geq 2\), or else \(\operatorname{core}_{*}(A)\) is a tree with \(n-1\) loops attached, labeled \(a_{2},\dots,a_{n}\)._
Proof.: If \(\operatorname{E}_{a_{i}}(A)\) is infinite for each \(i\geq 2\), then the lemma provides loops labeled \(a_{i}\), and since the rank of \(\operatorname{core}_{*}(A)\) is \(n-1\), the remainder of the graph is a tree.
**Lemma 2.6**.: _Let \(V<F_{n}\) be a free factor of rank \(n-1\) and assume that both \(\langle a_{3},\cdots,a_{n}\rangle\) and \(\langle a_{2},\cdots,a_{n-1}\rangle\) can be conjugated into \(V\)._
1. _If_ \(n\geq 4\) _then_ \(V\) _is conjugate to_ \(\langle a_{2},\cdots,a_{n}\rangle\)_._
2. _If_ \(n=3\) _then_ \(V\) _is conjugate to_ \(\langle a_{2}^{\gamma},a_{3}\rangle\) _for some_ \(\gamma\in F_{3}\)_._
Proof.: In this proof factors are considered up to conjugacy so we ignore basepoints and work with \(\operatorname{core}(V)\).
The assumptions imply that the inclusions \(\operatorname{core}(\langle a_{3},\cdots,a_{n}\rangle)\hookrightarrow R_{n}\) and \(\operatorname{core}(\langle a_{2},\cdots,a_{n-1}\rangle)\hookrightarrow R_{n}\) both lift to \(\operatorname{core}(V)\to R_{n}\). If \(n\geq 4\) these lifts both contain the unique loop of \(\operatorname{core}(V)\) labeled \(a_{3}\), so they overlap and their union is a wedge of \(n\) loops labeled \(a_{2},\cdots,a_{n}\), thus proving (1).
If \(n=3\) we know only that \(\operatorname{core}(V)\) contains embedded loops labeled \(a_{2}\) and \(a_{3}\). As \(\operatorname{core}(V)\) has no vertices of valence \(1\) and \(\operatorname{rank}(V)=2\), it must be the
graph obtained from these two loops by connecting them with an arc, labeled \(\gamma\) say. This proves (2).
The case \(n\geq 4\) in the preceding lemma can also be deduced from the following consequence of the second bullet point above.
**Lemma 2.7**.: _If \(V<F_{n}\) is a free factor that contains conjugates of \(a_{1},a_{2}\) and \(a_{1}a_{2}\), then the loops labeled \(a_{1}\) and \(a_{2}\) are based at the same vertex of \(\operatorname{core}(V)\)._
Proof.: The union of the loops in \(\operatorname{core}(V)\) labeled \(a_{1},a_{2}\) and \(a_{1}a_{2}\) is equal in homology to the union of the loops labeled \(a_{1}\) and \(a_{2}\), because \(H_{1}(V)\) injects into \(H_{1}(F)\). It follows that these subgraphs coincide, and hence the loop labeled \(a_{1}a_{2}\) is based at the same vertex as either the \(a_{1}\)-loop or the \(a_{2}\)-loop, forcing all three loops to be based at the same vertex.
### Intersections and pullbacks
Given finitely generated \(H_{1},H_{2}<F_{n}\) one can compute the intersection \(H_{1}\cap H_{2}\) by constructing the pullback of the labeling maps \(\operatorname{core}_{*}(H_{i})\to R_{n}\): the vertex set of the pullback graph \(P\) consists of pairs of vertices \((v,v^{\prime})\in\operatorname{core}_{*}(H_{1})\times\operatorname{core}_{*}( H_{2})\) with the same image in \(R_{n}\), and the directed edges of \(P\) are pairs of directed edges with the same image in \(R_{n}\). The component of \(P\) that contains the basepoint \((*,*)\) is \(\operatorname{core}_{*}(H_{1}\cap H_{2})\), possibly with trees attached. Some of the components of \(P\) may be trees, while those with non-trivial fundamental group correspond to the non-trivial intersections of \(H_{1}\) with the conjugates of \(H_{2}\).
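A minimal sketch of the pullback computation (ours; it reuses the edge-set representation from the folding sketch above and assumes both input graphs are already folded):

```python
def pullback(edges1, edges2):
    # Vertices of the pullback are pairs of vertices; each pair of edges carrying the
    # same label contributes an edge. Components with non-trivial fundamental group
    # record the intersections of the first subgroup with conjugates of the second.
    return {((u1, u2), a, (v1, v2))
            for (u1, a, v1) in edges1
            for (u2, b, v2) in edges2
            if a == b}

# core_*(<a>) and core_*(<ab>) over the rose R_2
H1 = {("*", "a", "*")}
H2 = {("*", "a", "q"), ("q", "b", "*")}
print(pullback(H1, H2))  # a single edge, hence a tree: <a> meets every conjugate of <ab> trivially
```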
### Free factor graphs, distance in \(\mathcal{AF}_{n}\) and \(\mathcal{OF}_{n}\), and links
\(\mathcal{AF}_{n}\) is the geometric realisation of the poset of non-trivial proper free factors of \(F_{n}\) ordered by inclusion. For \(n\geq 3\) it is a flag complex, so every automorphism of its \(1\)-skeleton \(\mathcal{AF}_{n}^{(1)}\) extends uniquely to a simplicial automorphism of \(\mathcal{AF}_{n}\). Thus, studying the group of simplicial automorphisms of \(\mathcal{AF}_{n}\) is equivalent to studying the group of isometries of the graph \(\mathcal{AF}_{n}^{(1)}\), metrized so that each edge has length \(1\). To lighten the notation, we sometimes write \(\mathcal{AF}_{n}\) in place of \(\mathcal{AF}_{n}^{(1)}\), when concentrating on the _free factor graph_, which has vertices the non-trivial free factors \(A<F_{n}\) and has an edge joining \(A\) to \(B\) if \(A<B\). Similarly, rather than studying \(\mathcal{OF}_{n}\) as a simplicial complex we shall sometimes concentrate on its \(1\)-skeleton, i.e. the quotient of the free factor graph by the action of \(\operatorname{Inn}(F_{n})\) - so vertices are conjugacy classes of proper free factors and there is an edge from \([A]\) to \([B]\) if there are representatives of these conjugacy classes with \(A<B\).
When \(n\geq 3\), we write \(d_{\mathcal{A}}(A,B)\) for the combinatorial distance between vertices in (the \(1\)-skeleton of) \(\mathcal{AF}_{n}\) and \(d_{\mathcal{O}}([A],[B])\) for the distance in \(\mathcal{OF}_{n}\). When there is no danger of ambiguity, we will simply write \(d\). We shall use the terms "automorphism" and "isometry" interchangeably and supress mention of the restriction isomorphism from the group of simplicial automorphisms of
the full complex \(\mathcal{AF}_{n}\) to the isometry group of its 1-skeleton, writing both groups as \(\operatorname{Isom}(\mathcal{AF}_{n})\) or \(\operatorname{Aut}(\mathcal{AF}_{n})\) (and similarly for \(\mathcal{OF}_{n}\)).
We shall not have to bother much with the case \(n=2\), but when we do we must modify the above definition because \(\mathcal{AF}_{2}\) is just a discrete set: to account for this we regard \(\mathcal{AF}_{2}\) as the vertex set of the graph that has an edge joining \(\langle a\rangle\) to \(\langle b\rangle\) whenever \(\langle a,b\rangle=F_{2}\) and metrize it and \(\mathcal{OF}_{2}\) accordingly. (This makes \(\mathcal{OF}_{2}\) isometric to the vertex set of the Farey graph.)
Estimating distances and understanding neighbourhoods in \(\mathcal{AF}_{n}\) and \(\mathcal{OF}_{n}\) is difficult in general, as we shall see in the proof of Theorems 1.1 and 1.2, but there are some simple facts relating distance to the algebra of free factors. For example:
**Lemma 2.8**.: _Let \(V_{1}\) and \(V_{2}\) be vertices in \(\mathcal{AF}_{n}\) with \(\operatorname{rank}(V_{1})=n-1\). Then \(d(V_{1},V_{2})\leq 2\) if and only if \(V_{1}\cap V_{2}\neq 1\)._
Proof.: \(d(V_{1},V_{2})=1\) if and only if \(V_{2}<V_{1}\). If \(d(V_{1},V_{2})=2\), then there is a free factor \(U\) with \(d(V_{1},U)=1=d(U,V_{2})\), whence \(U<V_{1}\) and either \(V_{2}<U\) or \(U<V_{2}\). In the first case \(V_{2}<V_{1}\) and in the second case \(U\subset V_{1}\cap V_{2}\). Conversely, if \(V_{1}\cap V_{2}\neq 1\) then it is a free factor contained in both \(V_{1}\) and \(V_{2}\), and it is adjacent (or equal) to each of them, so \(d(V_{1},V_{2})\leq 2\).
A similar argument establishes:
**Lemma 2.9**.: _Let \([V_{1}]\) and \([V_{2}]\) be vertices in \(\mathcal{OF}_{n}\) with \(\operatorname{rank}(V_{1})=n-1\). Then \(d([V_{1}],[V_{2}])\leq 2\) if and only if \(V_{1}^{w}\cap V_{2}\neq 1\) for some \(w\in F_{n}\)._
There will be certain points in our argument where it is convenient to work with the whole complex \(\mathcal{AF}_{n}\) rather than just its 1-skeleton. This is particularly true of arguments that involve links \(\operatorname{Lk}(V)\). The following observations are useful in induction arguments.
**Lemma 2.10**.: _If \(V\in\mathcal{AF}_{n}\) has rank \(n-1\), then there is a rank-preserving isomorphism \(\operatorname{Lk}(V)\cong\mathcal{AF}_{n-1}\). More generally, if \(\operatorname{rank}(V)=k\) then the subcomplex \(\operatorname{Lk}_{-}(V)\subset\operatorname{Lk}(V)\) spanned by vertices of rank less than \(k\) is isomorphic to \(\mathcal{AF}_{k}\). Similarly, if \([V]\in\mathcal{OF}_{n}\) has rank \(k\), then \(\operatorname{Lk}_{-}[V]\subset\operatorname{Lk}[V]\) is isomorphic to \(\mathcal{OF}_{k}\)._
Proof.: The assertions about \(\mathcal{AF}_{n}\) are immediate from the definitions. For the assertion about \(\mathcal{OF}_{n}\) one needs to note that because \(V\) is malnormal in \(F_{n}\), free factors \(A,A^{\prime}<V\) are conjugate in \(V\) if they are conjugate in \(F_{n}\).
We shall also need the following observation concerning links.
**Lemma 2.11**.: _If \([C]\in\mathcal{OF}_{n}\) is a vertex of rank 1, then \(\operatorname{Lk}_{\mathcal{OF}_{n}}([C])\cong\operatorname{Lk}_{\mathcal{AF} _{n}}(C)\)._
Proof.: Without loss of generality we may assume \(C=\langle a_{1}\rangle\). If \(d([C],[L])=1\), then \(L\) contains a conjugate of \(a_{1}\) and \(\operatorname{core}(L)\) contains a unique loop labeled
\(a_{1}\). We select a conjugate \(L_{a_{1}}\in[L]\) by decreeing the vertex at which this loop is based to be the basepoint. This choice \([L]\mapsto L_{a_{1}}\) provides an inverse to the canonical projection \(\operatorname{Lk}_{\mathcal{AF}_{n}}(C)\to\operatorname{Lk}_{\mathcal{O} \mathcal{F}_{n}}([C])\).
### Fully irreducible automorphisms and injectivity radius
Recall that an automorphism \(f:F_{n}\to F_{n}\) is called _fully irreducible_ if for every proper free factor \(A<F_{n}\) and every \(k>1\), the free factor \(f^{k}(A)\) is _not_ conjugate to \(A\). Fully irreducible automorphisms exist in every rank \(n\geq 2\), [10]. The results in this section are valid for an **arbitrary fully irreducible automorphism**\(f\) but when we come to use them in Section 4.1 we will be free to fix a choice, so it would be enough, for example, to prove these results for \(a_{1}\mapsto a_{2}\mapsto a_{1}a_{2}\) in the case \(n=2\), or
\[f_{0}:a_{1}\mapsto a_{2}\mapsto a_{3}\mapsto\cdots\mapsto a_{n-1}\mapsto a_{n} \mapsto a_{1}a_{3}a_{4}\ldots a_{n}a_{2}, \tag{1}\]
in the general case.
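To give a feel for the growth that drives Proposition 2.12 and Corollary 2.13 below (though it is no substitute for the proofs), the following sketch (ours) iterates the positive substitution \(f_{0}\) of (1) on the letter \(a_{1}\) and records word lengths:

```python
def f0_image(word, n):
    # apply a_i -> a_{i+1} for i < n and a_n -> a_1 a_3 a_4 ... a_n a_2 to a positive word,
    # encoded as a tuple of integers in {1, ..., n}; f_0 is positive, so no inverses arise
    image = {i: (i + 1,) for i in range(1, n)}
    image[n] = (1,) + tuple(range(3, n + 1)) + (2,)
    return tuple(x for letter in word for x in image[letter])

n = 4
w = (1,)
lengths = []
for _ in range(15):
    w = f0_image(w, n)
    lengths.append(len(w))
print(lengths)  # the lengths of f_0^k(a_1) grow rapidly with k
```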
The following proposition can be proved using standard facts about stable laminations [1]. We give an alternative proof suited to the study of free factor complexes; the general theory is hidden in our appeal to [1].
**Proposition 2.12**.: _Let \(f\in\operatorname{Aut}(F_{n})\) be a fully irreducible automorphism. For all \(\ell>0,R>0\) and every free factor \(A<F_{n}\), there is an integer \(K=K(f,A,\ell,R)\) such that, for all \(k\geq K\),_
\[d_{\mathcal{A}}(f^{k}(A),C)\geq d_{\mathcal{O}}([f^{k}(A)],[C])\geq R\]
_for all rank-1 free factors \(C=\langle c\rangle\) with \(|c|\leq\ell\)._
Proof.: The first inequality is obvious. As there are only finitely many rank-1 free factors with \(|c|\leq\ell\), the second inequality is an immediate consequence of the fact (Theorem 9.3 of [1]) that fully irreducible elements act on \(\mathcal{O}\mathcal{F}_{n}\) as isometries with positive translation length, so \(\inf\{d_{\mathcal{O}}(f^{k}(V),V)\mid V\in\mathcal{O}\mathcal{F}_{n}\}>k \lambda_{f}\) with \(\lambda_{f}>0\), hence
\[d_{\mathcal{O}}([f^{k}(A)],[C])\geq k\lambda_{f}-d_{\mathcal{O}}([A],[C]).\]
In the above proof it was overkill to use the fact that orbits of \(f\) grow linearly: we only needed the orbits to be unbounded.
**Corollary 2.13**.: \(\operatorname{injrad}(\operatorname{core}(f^{k}(A)))\to\infty\) _as \(k\to\infty\)._
Proof.: If \(c\) is a word of length \(\ell\) labeling an embedded loop in \(\operatorname{core}(f^{k}(A))\), then \(C=\langle c\rangle\) is a rank-1 free factor conjugate into \(f^{k}(A)\), so \(d_{\mathcal{O}}([f^{k}(A)],[C])=1\), which contradicts the proposition unless \(k<K(f,A,\ell,2)\).
**Corollary 2.14**.: _Let \(A<F_{n}\) be a factor of rank \(n-1\) and fix \(\ell>0\). If \(k\) is sufficiently large, then \(\langle f^{k}(A),w\rangle=F_{n}\) implies that every word conjugate to \(w\) has length at least \(\ell\)._
Proof.: Let \(C=\langle w\rangle\); it is a free factor, since adjoining \(w\) to a basis of \(f^{k}(A)\) gives a generating set of \(F_{n}\) with \(n\) elements, which is necessarily a basis because \(F_{n}\) is Hopfian. Then \(\langle f^{k}(A),w\rangle=F_{n}\) implies \(d(f^{k}(A),C)=3\) when \(n\geq 3\) or \(d(f^{k}(A),C)=1\) when \(n=2\). Proposition 2.12 tells us that this cannot happen if \(k\) is sufficiently large and \(w\) is conjugate to a word of length less than \(\ell\).
### Subfactor Projections
Subfactor projections were introduced by Bestvina and Feighn in [1]. The definition and use of subfactor projections is motivated by the theory of subsurface projections introduced by Masur and Minsky [14]. For \(n\geq 3\), if \(A<F_{n}\) is a free factor of rank \(n-1\), then the subfactor projection \(\pi_{A}\) assigns to suitable vertices \([B]\in\mathcal{OF}_{n}\) a subcomplex \(\pi_{A}([B])\) of uniformly bounded diameter in the free factor complex \(\mathrm{Lk}[A]\cong\mathcal{OF}_{n-1}\).
In more detail (see [13]), \(\pi_{A}\) is defined on \([B]\neq[A]\) provided that \([B]\) does not contain a conjugate \(B^{w}\) antipodal to \(A\); it has the following properties:
* the diameter of \(\pi_{A}([B])\) is uniformly bounded
* if \(B\) is conjugate into \(A\) then \(\pi_{A}([B])=[B]\)
* \(\pi_{A}\) is coarsely Lipschitz, i.e. there is a constant \(\delta\) such that if \(d_{\mathcal{O}}([B],[C])=1\) and \(\pi_{[A]}\) is defined for both \([B]\) and \([C]\), then the Hausdorff distance between \(\pi_{A}([B])\) and \(\pi_{A}([C])\) is at most \(\delta\).
## 3. Automorphisms of \(\mathcal{AF}_{n}\) preserve the rank of vertices
Let \(\mathcal{AF}_{n}(i)\subset\mathcal{AF}_{n}\) be the set of vertices of rank \(i\). The following proposition is the first step in the proof of Theorem 1.1.
**Proposition 3.1**.: _Every automorphism of \(\mathcal{AF}_{n}\) preserves \(\mathcal{AF}_{n}(i)\) for \(i=1,\dots,n-1\)._
The proof is broken into several preliminary results.
**Lemma 3.2**.: _Every automorphism of \(\mathcal{AF}_{n}\) preserves \(\mathcal{AF}_{n}(1)\cup\mathcal{AF}_{n}(n-1)\)._
Proof.: When \(n=3\) there is nothing to prove, so assume that \(n>3\). In this proof it is convenient to work with the whole complex \(\mathcal{AF}_{n}\) rather than just the \(1\)-skeleton. If \(A\) is a factor of rank \(i\) with \(1<i<n-1\) then \(\mathrm{Lk}(A)\) can be written as the join \(\mathrm{Lk}_{-}(A)*\mathrm{Lk}_{+}(A)\), where \(\mathrm{Lk}_{-}(A)\cong\mathcal{AF}_{i}\) is the full subcomplex spanned by factors contained in \(A\) and \(\mathrm{Lk}_{+}(A)\) is the full subcomplex spanned by factors containing \(A\). To finish the proof we need to
argue that links of vertices of rank \(1\) and \(n-1\) are not joins. We will argue that they have diameter greater than \(2\).
In the case of a rank \(n-1\) factor \(A\), we have \(\operatorname{Lk}(A)\cong\mathcal{AF}_{n-1}\). As \(\operatorname{Aut}(F_{n})\) acts transitively on the set of factors of each rank, we may assume \(A=\langle a_{1},\ldots,a_{n-1}\rangle\). We could appeal to the non-trivial fact that \(\mathcal{AF}_{n-1}\) has infinite diameter, but it is easy to see that it has diameter at least \(3\), which suffices here: by Lemma 2.8, it is enough to exhibit a rank \(1\) free factor \(C<A\) and a rank \(n-2\) free factor \(B<A\) such that \(C\cap B=1\); let \(C=\langle a_{1}\rangle\) and let \(B=\langle a_{2},\ldots,a_{n-1}\rangle\).
For the rank \(1\) case we examine the link of \(\langle a_{1}\rangle<F_{n}\), focusing on \(\langle a_{1},a_{2}\rangle\) and \(\langle a_{1},a_{3},\ldots,a_{n}\rangle\). The intersection of these factors is \(\langle a_{1}\rangle\) so, arguing as in the proof of Lemma 2.8, we see that their distance in the link is greater than \(2\).
To distinguish rank \(1\) vertices from rank \(n-1\) vertices, we examine the geometry of their neighbourhoods in \(\mathcal{AF}_{n}\).
**Lemma 3.3**.: _Let \(A<F_{n}\) be a free factor of rank \(n-1\), let \(C=\langle u\rangle\) be a free factor of rank \(1\), and suppose \(F_{n}=A*C\). For any vertex \(L\), if \(d(A,L)=1\) then \(d(L,C)=2\)._
Proof.: If \(L<A\) then \(C\) is not contained in \(L\) and \(V=\langle L,C\rangle=L*C\) is a free factor with \(d(L,V)=d(V,C)=1\).
This lemma says that a geodesic from \(C\) to \(A\) (which has length \(3\)) cannot be extended to a geodesic of length \(4\); indeed any extension will necessarily backtrack towards the initial vertex \(C\). We shall prove that this metric property fails if we reverse the roles of rank \(1\) and rank \(n-1\) vertices, that is, we find extensions that don't backtrack.
**Proposition 3.4**.: _For every rank \(1\) vertex \(C\) and every rank \(n-1\) vertex \(A\), if \(d(A,C)>1\) then there exists a vertex \(L\) with \(d(C,L)=1\) and \(d(L,A)>2\)._
This proposition is an immediate consequence of Lemma 2.8 and the following result.
**Lemma 3.5**.: _If \(A<F_{n}\) is a free factor of rank \(n-1\) and \(C\) is a free factor of rank \(1\) that is not contained in \(A\), then there exists a free factor \(L\) of rank \(2\) with \(C<L\) and \(L\cap A=1\)._
Proof.: We may assume that \(C=\langle a_{1}\rangle\). We analyse \(A\) according to the two cases in Corollary 2.5. Suppose first that \(\operatorname{E}_{a_{2}}(A)\) is finite, fix \(M>\max\operatorname{E}_{a_{2}}(A)\) and let \(L=\langle a_{1},\ a_{2}^{M}a_{1}a_{3}\rangle\). Note that \(L<F_{n}\) is a free factor, since \(\langle L,a_{2}\rangle=\langle a_{1},a_{2},a_{3}\rangle\). A reduced word in the generators of \(L\) either belongs to \(C=\langle a_{1}\rangle\) or else contains \(a_{2}^{M}\) as a subword. The intersection of \(C\) with \(A\) is trivial, by
hypothesis, and reduced words of the latter form do not belong to \(A\), by the definition of \(M\), so \(L\cap A=1\).
It remains to consider the second case in Corollary 2.5. Thus we assume now that \(\operatorname{core}_{*}(A)\) is a tree with \(n-1\) loops attached, labeled \(a_{2},\ldots,a_{n}\). Observe that if \(p\) is greater than the diameter of \(\operatorname{core}_{*}(A)\), then \(a_{1}^{p}\not\in\operatorname{sub}(A)\). It follows that no reduced word in the generators of the rank \(2\) free factor \(L=\langle a_{1},\ a_{2}a_{1}^{p}a_{3}\rangle\) belongs to \(A\). (Again, \(L\) is a free factor because \(\langle L,a_{2}\rangle=\langle a_{1},a_{2},a_{3}\rangle\).)
Proof of Proposition 3.1.: With Lemma 3.2 in hand, we compare Lemma 3.3 with Proposition 3.4 to deduce that both \(\operatorname{\mathcal{AF}}_{n}(1)\) and \(\operatorname{\mathcal{AF}}_{n}(n-1)\) are preserved by every isometry of \(\operatorname{\mathcal{AF}}_{n}\). For \(n=3\) there is nothing more to prove, so we assume \(n\geq 4\). Let \(A\) be a vertex of rank \(i<n-1\) and let \(V\) be a rank \((n-1)\) vertex with \(A<V\). The action of \(\operatorname{Aut}(F_{n})\) preserves the rank of vertices and acts transitively on vertices of each rank, so by composing an arbitrary automorphism \(\psi\in\operatorname{Isom}(\operatorname{\mathcal{AF}}_{n})\) with a suitable element of \(\operatorname{Aut}(F_{n})\) we may assume that \(\psi(V)=V\). Then \(\psi\) restricts to an isometry of \(\operatorname{Lk}(V)\cong\operatorname{\mathcal{AF}}_{n-1}\), and by induction on \(n\) this restriction preserves the rank of vertices.
## 4. Recognising Standard Apartments
In the introduction we discussed the significance of _standard apartments_.
**Definition 4.1**.: A _standard apartment_ in \(\operatorname{\mathcal{AF}}_{n}\) is the full subcomplex spanned by the free factors generated by the non-empty proper subsets of a basis for \(F_{n}\).
For the second step in our proof of Theorem 1.1, we must prove that every isometry of \(\operatorname{\mathcal{AF}}_{n}\) sends standard apartments to standard apartments, i.e. the set of standard apartments is characteristic in the following sense.
**Definition 4.2**.: We say that a collection of subcomplexes of a simplicial complex \(X\) is _characteristic_ (or metrically distinguished) if it is preserved by the simplicial automorphism group of \(X\).
For example, for each \(k\) the collection of \(k\)-simplices of \(X\) will be characteristic. In the previous section we proved that \(\mathcal{AF}_{n}(i)\) is characteristic in \(\mathcal{AF}_{n}\) for \(i=1,\ldots,n-1\). Our purpose in this section is to prove that the set of standard apartments is characteristic, and a key step in the proof will be to show that the set of pairs of vertices \(\{A,C\}\) with \(\operatorname{rank}(A)=n-1,\ \operatorname{rank}(C)=1\) and \(A*C=F_{n}\) is characteristic (the _Antipode Lemma_). Along the way, we shall prove that various other types of subcomplexes are characteristic.
The Antipode Lemma is needed to distinguish standard apartments from _fake apartments_ (as defined in Definition 4.6). Figure 1 illustrates two of the concerns that have to be overcome in the case \(n=3\), and more elaborate fakes are discussed in Section 7.
### The Antipode Lemma
**Definition 4.3**.: A rank \(n-1\) factor \(\Lambda\) and a rank \(1\) factor \(\langle u\rangle\) are _algebraically antipodal_ if \(\Lambda\ast\langle u\rangle=F_{n}\). We write \(\Lambda\perp\langle u\rangle\).
\(\Lambda\) and \(\langle u\rangle\) are _metrically antipodal_ in \(\mathcal{AF}_{n}\) if \(d(\langle u\rangle,L)=2\) for all free factors \(L\) with \(d(\Lambda,L)=1\).
**Remark 4.4**.: The condition that \(\Lambda\) and \(\langle u\rangle\) are metrically antipodal is equivalent to the following algebraic statement: \(u\not\in\Lambda\) and for all free factors \(L\subsetneq\Lambda\) there is a proper free factor of \(F_{n}\) that contains both \(L\) and \(u\). We chose the more concise formulation in the definition because it makes clear that this property is invariant under isometries of \(\mathcal{AF}_{n}\).
**Theorem 4.5** (The Antipode Lemma).: _Let \(\Lambda<F_{n}\) be a free factor of rank \(n-1\) and \(\langle u\rangle\) a free factor of rank 1. Then \(\Lambda\) and \(\langle u\rangle\) are algebraically antipodal if and only if they are metrically antipodal._
Proof.: It follows easily from the definitions that algebraically antipodal implies metrically antipodal (Lemma 3.3), so we will assume that \(\langle u\rangle\not\perp\Lambda\) and argue that \(\Lambda\) and \(\langle u\rangle\) are not metrically antipodal. The case \(u\in\Lambda\) is trivial, so suppose \(u\not\in\Lambda\). By applying a suitable element of \(\operatorname{Aut}(F_{n})\) we may assume \(\Lambda=\langle a_{1},\dots,a_{n-1}\rangle\). To complete the proof, it suffices to exhibit a free factor \(L\subset\Lambda\) of rank \(n-2\) such that \(d(L,\langle u\rangle)>2\). Our proof will show that if \(f:\Lambda\to\Lambda\) is a fully irreducible automorphism and \(L_{0}<\Lambda\) is any free factor of rank \(n-2\), then \(L=f^{k}(L_{0})\) has the desired property, provided \(k>0\) is sufficiently large.
First we consider the case where no conjugate of \(u\) is algebraically antipodal to \(\Lambda\). In this case, we argue using the subfactor projection \(\pi_{\Lambda}\) described in section 2.6. Consider \(\pi_{\Lambda}([u])\). Choose \(L\) (as above or otherwise) so that the distance between \([L]\) and \(\pi_{\Lambda}([u])\) is large; this is possible because \(Lk([\Lambda])\cong\mathcal{OF}_{n-1}\) has infinite diameter, using the modified definition of \(\mathcal{OF}_{2}\) if \(n=3\) (see Proposition 2.12). The coarse Lipschitz property of \(\pi_{\Lambda}\) (section 2.6) tells us any short path between \([L]\) and \([u]\) in \(\mathcal{OF}_{n}\) must pass through a conjugacy class of factors where \(\pi_{\Lambda}\) is not defined. It follows that there does not exist a free factor \(B\) that contains both \(L\) and \(\langle u\rangle\), because \(\pi_{\Lambda}[B]\) would be well-defined in that case, and \(\pi_{\Lambda}[B]\) would be a distance at most \(\delta\) (the constant of section 2.6) from both \(\pi_{\Lambda}([u])\) and \(\pi_{\Lambda}([L])\). (\(B\) is not conjugate to \(\Lambda\) because \(L<B\) and \(u\in B\smallsetminus\Lambda\), whereas distinct conjugates of \(\Lambda\) intersect trivially.)
It remains to consider the case where \(\langle u\rangle\not\perp\Lambda\) but some conjugate of \(\langle u\rangle\) is antipodal to \(\Lambda\). By applying an automorphism that fixes \(\Lambda\) we may assume that, in reduced form, \(u=wa_{n}w^{-1}\) where \(w\) is a word whose first letter is
\(a_{n}^{\pm 1}\). Let \(L=f^{k}(L_{0})\) be as above and assume that \(k\) is large enough to ensure that the injectivity radius of \(\operatorname{core}(L)\) is at least \(2|u|\) and Corollary 2.14 holds for \(\ell=2|u|\) with \(L\) in the role of \(A\) and \(\Lambda\) in place of \(F_{n}\). We will obtain a contradiction from the assumption that there is a free factor \(B\) of rank \(n-1\) that contains both \(L\) and \(u\).
First we observe that if there were such a factor, then \(B=\langle L,u\rangle\) and \(\operatorname{core}_{*}(B)=\operatorname{core}_{*}(L)\vee\operatorname{core} _{*}\langle u\rangle\). To see this, note that if the canonical map \(\operatorname{core}_{*}(L)\to\operatorname{core}_{*}(B)\) were not injective, then the fundamental group of the image would be a free factor \(V\subseteq B\) that strictly contained \(L\). As \(\operatorname{rank}(B)=\operatorname{rank}(L)+1\), this would imply \(V=B\). But the edges of the graph defining \(V\) are labeled by letters from \(\Lambda\), whereas \(B\) is not contained in \(\Lambda\). Thus \(\operatorname{core}_{*}(L)\to\operatorname{core}_{*}(B)\) is injective. As \(B\) contains a conjugate of \(a_{n}\) but not \(a_{n}\) itself, \(\operatorname{core}_{*}(B)\) has a loop labeled \(a_{n}\) based at a vertex \(v\neq*\). And since \(wa_{n}w^{-1}\in B\), there is path from \(*\) to \(v\) labeled \(w\), which begins with an \(a_{n}\)-edge. As \(B\) has \(\operatorname{rank}(L)+1\), this path is disjoint from \(\operatorname{core}_{*}(L)\). Thus \(\operatorname{core}_{*}(B)=\operatorname{core}_{*}(L)\vee\operatorname{core} _{*}\langle u\rangle\).
Proposition 2.3 tells us that if \(B=\langle L,u\rangle\) were a free factor, then by identifying two vertices in \(\operatorname{core}_{*}(B)\) we could obtain a graph \(\Gamma\) that folded to the standard rose \(R_{n}\). We consider three cases, depending on the location of the two vertices being identified, and reach a contradiction in each case.
We shall refer to \(\operatorname{core}_{*}\langle u\rangle\) as a lollipop, with stalk labeled \(w\) and loop \(a_{n}\).
_Case 1: Suppose \(v_{0},v_{1}\in\operatorname{core}_{*}(L)\)._ In this case, the image of \(\operatorname{core}_{*}(L)\) in \(\Gamma\) defines a free factor of rank \(n-1\) that contains \(L\) and is contained in \(\Lambda\), hence is equal to \(\Lambda\). And \(R_{n}\) is obtained by folding this image with \(\operatorname{core}_{*}\langle u\rangle\), so \(F_{n}=\Lambda*\langle u\rangle\), contrary to the assumption that \(\Lambda\not\perp\langle u\rangle\).
_Case 2: Suppose \(v_{0}\in\operatorname{core}_{*}(L)\) and \(v_{1}\in\operatorname{core}_{*}\langle u\rangle\smallsetminus\{*\}\)._ In this case, an arc of the stalk of \(\operatorname{core}_{*}\langle u\rangle\) that contains \(v_{1}\) but has no edges labeled \(a_{n}\) might fold into \(\operatorname{core}_{*}(L)\) which, by construction, has injectivity radius greater than \(2|u|\). If \(v_{0}\) were a distance at least \(|u|\) from the basepoint, then after this folding we would have a fully folded graph that still contained \(\operatorname{core}_{*}(L)\). If \(v_{0}\) is a distance less than \(|u|\) from the basepoint, let \(\alpha\) be the label on the arc from \(*\) to \(v_{0}\), let \(\tilde{\beta}\) be the prefix of \(w\) labeling the arc from \(*\) to \(v_{1}\), and let \(\beta\in\Lambda\) be the word obtained from \(\tilde{\beta}\) by deleting all occurrences of \(a_{n}\). Then \(\langle L,\alpha\beta^{-1}\rangle=\Lambda\) if \(\Gamma\) folds to \(R_{n}\), because \(\langle L,\alpha\beta^{-1}\rangle\) is the fundamental group of the graph obtained by collapsing the edges of \(\Gamma\) labeled \(a_{n}\). But this contradicts Corollary 2.14, because \(|\alpha\beta^{-1}|<2|u|\).
_Case 3: Suppose \(v_{0},v_{1}\in\operatorname{core}_{*}\langle u\rangle\)._ We fold \(\Gamma_{1}:=\operatorname{core}_{*}\langle u\rangle/v_{0}\sim v_{1}\). If the initial edge on the stalk of the lollipop \(\operatorname{core}_{*}\langle u\rangle\) is not identified with the loop of the lollipop during this folding, then \(\operatorname{core}_{*}(L)\vee\operatorname{fold}(\Gamma_{1})\) is fully folded and we are done. Otherwise, fold\((\Gamma_{1})\) is the wedge of two loops, one labeled
\(a_{n}\) and the other either labeled by a word \(c\) in the letters \(a_{i}\) with \(i\neq n\), or else labeled \(c_{1}c_{2}c_{3}\), where \(c_{1}\) and \(c_{3}\) are non-empty words of this form and \(c_{2}\) is a non-empty word that begins and ends with \(a_{n}^{\pm 1}\). In the former case, we have a contradiction from Corollary 2.14, because \(\langle L,c\rangle\subsetneq\Lambda\). In the latter case, the arcs labeled \(c_{1}\) and \(c_{3}\) fold into \(\operatorname{core}_{*}(L)\) and the folding stops with \(\operatorname{core}_{*}(L)\) still embedded.
### Apartments, fake and standard
The barycentric subdivision of the boundary of the standard \(k\)-simplex, \(\partial\Delta_{k}\), is the geometric realisation of the poset of nonempty proper subsets of \(\mathbf{k}=\{0,1,\ldots,k\}\) ordered by inclusion. The barycentre of the face opposite \(i\in\mathbf{k}\) is \(\mathbf{k}\smallsetminus\{i\}\).
**Definition 4.6**.: An _apartment_ in \(\mathcal{AF}_{n}\) is the image of a simplicial embedding \(\sigma:\partial\Delta_{n-1}\hookrightarrow\mathcal{AF}_{n}\) such that \(\operatorname{rank}(\sigma(S))=|S|\) for all \(S\subset\mathbf{n-1}\). The apartment is _fake_ if it is not standard.
Note that an apartment is standard if and only if its rank-1 vertices form a basis for \(F_{n}\). Figure 1 illustrates two of the ways in which fake apartments can arise. There are more examples in Section 7.
**Lemma 4.7**.: _An apartment in \(\mathcal{AF}_{3}\) is standard if and only if each vertex is antipodal to the barycentre of the opposite face._
Proof.: The "if" assertion is the non-trivial one. Suppose that the rank 1 vertices are \(\langle a\rangle\), \(\langle b\rangle\), \(\langle c\rangle\) and let \(V\) be the barycentre of the face opposite \(\langle a\rangle\). Then \(V=\langle b,v\rangle\) for some \(v\in V\), and by hypothesis \(F_{3}=V*\langle a\rangle\). Thus \(\langle a,b\rangle\) is the unique free factor containing \(a\) and \(b\), and it is therefore the barycentre of the face with vertices \(\langle a\rangle\) and \(\langle b\rangle\). This is antipodal to \(\langle c\rangle\), so \(\{a,b,c\}\) is a basis for \(F_{3}\).
**Proposition 4.8**.: _For \(n\geq 3\), every automorphism of \(\mathcal{AF}_{n}\) takes standard apartments to standard apartments._
Figure 1. Two fake apartments in rank 3. In the first, the rank 2 factor \(\langle a,b\rangle\) is not generated by the adjacent rank 1 factors. In the second, the rank 1 factors are not antipodal to the opposite rank 2 factors.
Proof.: We proceed by induction on \(n\). In the light of Lemma 4.7, the Antipode Lemma (Theorem 4.5) covers the case \(n=3\).
Assume now that \(n\geq 4\) and consider a rank \(n-1\) vertex \(V\) of a standard apartment \(\sigma\) and let \(\psi\) be an automorphism of \(\mathcal{AF}_{n}\). By composing \(\psi\) with an element of \(\operatorname{Aut}(F_{n})\), we may assume that \(\psi\) fixes \(V\). Then \(\psi\) restricts to an automorphism of \(\operatorname{Lk}(V)\cong\mathcal{AF}_{n-1}\), where by induction we know that it takes standard apartments to standard apartments. The intersection \(\sigma\cap\operatorname{Lk}(V)\) is such an apartment, so the image under \(\psi\) of its rank \(1\) vertices form a basis for \(V\). The Antipode Lemma tells us that the image under \(\psi\) of the remaining rank \(1\) vertex of \(\sigma\) is antipodal to \(V\). Thus the image under \(\psi\) of the vertex set of \(\sigma\) is a basis for \(F_{n}\).
### Notation
\(\Delta(b_{1},\dots,b_{n})\) will denote the standard apartment associated to a basis \(\{b_{1},\dots,b_{n}\}\) of \(F_{n}\). A _face of rank \(k\)_ is the subcomplex \(\Delta[T]\) spanned by a \(k\)-element subset \(T\subset\{b_{1},\dots,b_{n}\}\). The face opposite \(\Delta[T]\) is \(\Delta[T^{c}]\), where \(T^{c}=\{b_{1},\dots,b_{n}\}\smallsetminus T\).
## 5. Sticks and propagation: the proof of Theorem 1.1
In this section we complete the proof of Theorem 1.1.
### Summary of the proof
Given an automorphism \(\Phi\) of \(\mathcal{AF}_{n}\), with \(n\geq 3\), we now know that \(\Phi\) sends standard apartments to standard apartments. As \(\operatorname{Aut}(F_{n})\) acts transitively on the set of standard apartments, we can compose \(\Phi\) with an element of \(\operatorname{Aut}(F_{n})\) so as to assume that \(\Phi\) leaves a standard apartment \(\Delta=\Delta(a_{1},\dots,a_{n})\) invariant. The stabilizer of \(\Delta\) in \(\operatorname{Aut}(F_{n})\) is the group of signed permutations \(W_{n}\cong(\mathbb{Z}/2)^{n}\rtimes\operatorname{sym}(n)\) of the corresponding basis; its action on \(\Delta\) is the full group of rank-preserving symmetries of \(\Delta\). By composing \(\Phi\) with an element of \(W_{n}<\operatorname{Aut}(F_{n})\) we may assume that \(\Phi\) fixes \(\Delta\) pointwise. We would be done if this modification forced \(\Phi\) to be the identity on the whole of \(\mathcal{AF}_{n}\), but it does not. For example, automorphisms of the form \(a_{i}\mapsto a_{i}^{\pm 1}\) fix \(\Delta\) but not \(\mathcal{AF}_{n}\).
Let \(\lambda\) be a Nielsen transformation for the basis \(\{a_{1},\dots,a_{n}\}\), that is \([a_{i}\mapsto a_{i}a_{j},\ a_{k}\mapsto a_{k}\ (k\neq i)]\) or \([a_{i}\mapsto a_{j}a_{i},\ a_{k}\mapsto a_{k}\ (k\neq i)]\). We say that \(\lambda(\Delta)\) is _Nielsen adjacent_ to \(\Delta\); it has a large overlap with \(\Delta\).
\(\mathcal{AF}_{n}\) is the union of its standard apartments and the index-\(2\) subgroup of \(\operatorname{Aut}(F_{n})\) generated by Nielsen transformations acts transitively on the set of standard apartments. Thus, by propagating to neighbours throughout \(\mathcal{AF}_{n}\), we would be done if any isometry of \(\mathcal{AF}_{n}\) that fixed a standard apartment pointwise had to fix the Nielsen adjacent apartments pointwise. Although this is not the case, we shall see that standard apartments have _canonical enlargements_ that make this argument work: by composing \(\Phi\) with a further element of \(W_{n}<\operatorname{Aut}(F_{n})\) we can assume that it fixes the canonical enlargement of
\(\Delta\) and this forces \(\Phi\) to fix the canonical enlargement of each Nielsen adjacent apartment.
The vertices of these canonical enlargements are rank-1 vertices adjacent to \(\Delta\) that we call _sticks_ and _supersticks_.
### Sticks and snops
**Definition 5.1**.: The _sticks_ at a face \(\Delta[b_{i},b_{j}]\) of rank 2 in a standard apartment \(\Delta(b_{1},\dots,b_{n})\) are the rank 1 factors of the form \(\langle b_{i}^{\epsilon}b_{j}^{\delta}\rangle\), with \(\epsilon,\delta\in\{\pm 1\}\).
Note that this definition depends only on \(\langle b_{i},b_{j}\rangle\) and \(\langle b_{i}\rangle\), \(\langle b_{j}\rangle\), not on the rest of \(\Delta(b_{1},\dots,b_{n})\). There are 4 sticks at each rank 2 face, so \(\Delta(b_{1},\dots,b_{n})\) has \(4\binom{n}{2}\) sticks in total. See Figure 3.
**Lemma 5.2**.: _A rank 1 free factor \(C<F_{n}\) is a stick of the standard apartment \(\Delta(b_{1},\dots,b_{n})\) if and only if, for some \(b_{i}\neq b_{j}\), \(d(C,\,\langle b_{i},b_{j}\rangle)=1\) and \(C\) is antipodal to the barycentres of the rank \(n-1\) faces opposite \(\langle b_{i}\rangle\) and \(\langle b_{j}\rangle\)._
Proof.: This follows immediately from Lemma 2.2.
**Corollary 5.3**.: _The sets of sticks associated to standard apartments and their faces are characteristic in \(\mathcal{AF}_{n}\)._
Proof.: Immediate from Proposition 4.8 and the lemma.
**Remark 5.4**.: As an indication of the way in which sticks determine the geometry of \(\mathcal{AF}_{n}\) in a neighbourhood of an apartment, note that in \(\mathcal{AF}_{3}\) each of the 12 sticks of a standard apartment \(\Delta(a,b,c)\) gives rise to a 2-sphere (after gluing in disks to each apartment) made from three apartments: for example \(bc\) determines the 2-sphere
\[\Delta(a,b,c)\cup\Delta(a,bc,b)\cup\Delta(a,bc,c).\]
The intersection of each pair of these spheres is \(\Delta(a,b,c)\).
**Remark 5.5** (Sticks and Cubes).: Our formal proofs for \(n>3\) do not rely on the following description of sticks in terms of cubes, but nevertheless we include the general case in our discussion because it provides useful insight into the local geometry of \(\mathcal{AF}_{n}\).
The \(4\binom{n}{2}\) sticks associated to a standard apartment parametrize the codimension-2 faces of an \(n\)-cube \(I^{n}\). The signed permutations of the basis associated to the apartment form a subgroup \(W_{n}=(\mathbb{Z}/2)^{n}\rtimes\operatorname{sym}(n)<\operatorname{Aut}(F_{n})\) and the action of this on the sticks is the restriction of the standard representation of \(W_{n}\) as the isometry group of the cube. Figure 2 illustrates the case \(n=3\).
There are 12 sticks associated to a standard apartment (if \(n=3\)) or rank-3 face (if \(n>3\)). When three of these sticks lie in a common free factor of rank 2
in such a way that any two of them form a basis of the subgroup generated by all three, we say that these sticks form a _bonded triple_. We also say that two sticks are _bonded_ to each other if they lie in a common bonded triple. There are \(8\) bonded triples associated to each standard apartment (if \(n=3\)) or rank-\(3\) face (if \(n>3\)); they parametrize the vertices of the cube in Figure 2.
The \(12\) sticks also divide into \(3\) classes of _parallel sticks_, such that no pair of sticks in a given class belong to the same bonded triple; these correspond to the \(3\) classes of parallel edges in Figure 2. Each parallelism class divides into \(2\) pairs: the _opposite_ of a given stick is the one that labels the edge that is parallel but has no bonds in common.
With Corollary 5.3 in hand, the following observation is immediate from these definitions.
**Lemma 5.6**.: _Isometries of \(\mathcal{AF}_{n}\) preserve bonded triples and parallelism classes of sticks, as well as pairs of opposite sticks._
We have already noted that the sticks associated to a standard apartment \(\Delta\) parametrize the codimension-\(2\) faces of a cube, and that in the case \(n=3\) the vertices of the cube correspond to bonded triples. In the general case, the vertices of this cube \(I^{n}(\Delta)\) correspond to _snops_, which are defined as follows. (We shall not rely on this geometric description in our proofs.)
**Definition 5.7**.: A _snop_ is a collection \(\mathcal{B}\) of sticks associated to a standard apartment \(\Delta(b_{1},\cdots,b_{n})\) with the following properties:
Figure 2. The sticks associated to the standard apartment \(\Delta=\Delta(a,b,c)\) parametrize the edges of the \(3\)-cube. Three sticks form a bonded triple (snop) when the three edges are adjacent to the same vertex. The stabilizer of \(\{a,b,c\}\) in \(\operatorname{Aut}(F_{3})\) is the full isometry group of the cube.
1. Exactly one of the sticks associated to each rank-2 face \(\Delta[b_{i},b_{j}]\) belongs to \(\mathcal{B}\).
2. For every rank-3 face \(\Delta[b_{i},b_{j},b_{k}]\), the 3 sticks in \(\mathcal{B}\) form a _bonded triple_.
The following lemma is an immediate consequence of our previous results.
**Lemma 5.8**.: _Snops are characteristic, i.e. every isometry of \(\mathcal{AF}_{n}\) takes snops to snops._
There are \(2^{n}\) snops associated to a standard apartment \(\Delta\). The 1-skeleton of the cube \(I^{n}(\Delta)\) can be constructed by joining two snops with an edge if they share all but \((n-1)\) of their sticks. (Distinct snops differ by at least \((n-1)\) sticks.)
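The numerology here can be checked directly in the cube picture (which, again, the proofs do not use): a snop selects one stick for each of the \(\binom{n}{2}\) rank-2 faces, and two adjacent vertices of \(I^{n}(\Delta)\) lie on exactly the codimension-2 faces avoiding the coordinate in which they differ, so the corresponding snops share \(\binom{n-1}{2}\) sticks and differ in
\[\binom{n}{2}-\binom{n-1}{2}=n-1\]
of them. For \(n=3\) this says that each of the \(2^{3}=8\) bonded triples consists of \(3\) sticks and that two triples at adjacent vertices of the cube have exactly one stick in common.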
The following proposition can be proved by analysing the faithful action of the stabiliser of \(\Delta(b_{1},\ldots,b_{n})\) on the cube \(I^{n}(\Delta)\), arguing that if an isometry of the cube fixes sufficiently many codimension-2 faces then it must be the identity. We leave the details of this proof to the reader and give a different proof that adapts better to the case of \(\mathcal{OF}_{n}\) considered in the next section.
**Notation.** The pointwise stabilizer in \(\operatorname{Aut}(F_{n})\) of the standard apartment \(\Delta(a_{1},\ldots,a_{n})\) is \((\mathbb{Z}/2)^{n}=\langle\varepsilon_{1},\ldots,\varepsilon_{n}\rangle\) where \(\varepsilon_{i}\) is the involution that sends \(a_{i}\) to \(a_{i}^{-1}\) and fixes \(a_{j}\) if \(j\neq i\).
To be clear, when we say that an isometry _fixes_ a subcomplex, we mean that it does so pointwise.
**Proposition 5.9**.: _If \(\Phi\in\operatorname{Isom}(\mathcal{AF}_{n})\) fixes \(\Delta=\Delta(a_{1},\ldots,a_{n})\), then there exists \(\theta\in\langle\varepsilon_{1},\ldots,\varepsilon_{n}\rangle\) such that \(\theta\circ\Phi\) fixes \(\Delta\) and all of its sticks._
We require a lemma.
Figure 3. The apartment \(\Delta(a,b,c)\) in \(\mathcal{AF}_{3}\) with its 12 sticks. The three sticks that are labeled form a bonded triple (snop).
**Lemma 5.10**.: _If an isometry \(\Phi\) of \(\mathcal{AF}_{3}\) fixes the standard apartment \(\Delta=\Delta(a,b,c)\) and a stick at \(\Delta[a,b]\) then exactly one of \(\{\Phi,\ \varepsilon_{c}\circ\Phi\}\) fixes \(\Delta\) and all of its sticks._
Proof.: The sticks at \(\Delta[b,c]\) bonded to \(\langle ab\rangle\) are \(\langle b^{-1}c\rangle\) and \(\langle cb\rangle\), so if \(\Phi\) fixes \(\langle ab\rangle\) then it must either exchange or fix these sticks. Composing with \(\varepsilon_{c}\) if necessary, we may assume that it fixes them. The action of \(\Phi\) as an isometry of the cube in Figure 2 then fixes three edges of the top face. The only such isometry is the identity.
Proof of Proposition 5.9.: We shall proceed by induction. Suppose \(n=3\) and consider a standard apartment \(\Delta=\Delta(a,b,c)\) fixed by \(\Phi\). If \(\Phi\) does not fix the stick \(\langle ab\rangle\) then we can compose \(\Phi\) with an element of \(\langle\varepsilon_{a},\varepsilon_{b}\rangle\) to arrange that it does. Then Lemma 5.10 tells us that, composing with \(\varepsilon_{c}\) if necessary, we may assume that \(\Phi\) fixes all of the sticks of \(\Delta\).
We now assume \(n>3\) and consider a standard apartment \(\Delta=\Delta(a_{1},\ldots,a_{n})\) fixed by \(\Phi\). Let \(\operatorname{Aut}(F_{n-1})\hookrightarrow\operatorname{Aut}(F_{n})\) be the subgroup fixing \(a_{n}\) and acting in the standard way on \(\{a_{1},\ldots,a_{n-1}\}\). Consider the barycentre \(V=\langle a_{1},\ldots,a_{n-1}\rangle\) of the face opposite \(\langle a_{n}\rangle\). We have \(\operatorname{Lk}(V)\cong\mathcal{AF}_{n-1}\), where the isomorphism is \(\operatorname{Aut}(F_{n-1})\)-equivariant. By induction, there exists \(\theta\in\langle\varepsilon_{1},\ldots,\varepsilon_{n-1}\rangle\) such that \(\theta\circ\Phi\) fixes \(\Delta\) and all of the sticks of \(\Delta[a_{1},\ldots,a_{n-1}]\). Applying Lemma 5.10 to \(\Delta[a_{1},a_{2},a_{n}]\), we deduce that by further composing with \(\varepsilon_{n}\) if necessary, we may assume that \(\Phi\) fixes \(\Delta\), the sticks of \(\Delta[a_{1},\ldots,a_{n-1}]\) and the sticks of \(\Delta[a_{1},a_{2},a_{n}]\). The remaining sticks are based at \(\Delta[a_{i},a_{n}]\subset\Delta[a_{1},a_{i},a_{n}]\) with \(i>2\). The sticks at \(\Delta[a_{1},a_{i}]\subset\Delta[a_{1},a_{i},a_{n}]\) are fixed by \(\Phi\), as is \(\langle a_{1}a_{n}\rangle\). Moreover the latter is not fixed by \(\varepsilon_{n}\circ\Phi\). So Lemma 5.10 tells us that \(\Phi\) must fix all the sticks of \(\Delta[a_{1},a_{i},a_{n}]\). This completes the induction.
### Supersticks and the end of the proof
We obtain a more rigid framework of rank-1 vertices in the neighbourhood of an apartment by adding _supersticks_ to sticks. In rank 3, the supersticks associated to an apartment are at distance 2 from the apartment, but from \(n=4\) onwards they are adjacent to the barycentres of the rank-3 faces of the apartment.
**Definition 5.11**.: The _supersticks_ associated to a standard apartment \(\Delta(a_{1},a_{2},a_{3})\) (if \(n=3\)) or a rank 3 face \(\Delta[a_{1},a_{2},a_{3}]\) (if \(n>3\)) are the 24 rank 1 factors \(\langle a_{i}^{\delta_{i}}a_{j}^{\delta_{j}}a_{k}^{\delta_{k}}\rangle\) with \(\{i,j,k\}=\{1,2,3\}\) and \(\delta_{i}=\pm 1\).
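The count of \(24\) can be checked directly: there are \(3!\) orderings of \(\{i,j,k\}\) and \(2^{3}\) choices of signs, and \(\langle w\rangle=\langle w^{-1}\rangle\) identifies these \(48\) words in pairs (the inverse of such a word is again of this form), so the number of distinct factors is
\[\frac{3!\cdot 2^{3}}{2}=24.\]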
**Lemma 5.12**.: _A rank 1 free factor of \(F_{3}\) is a superstick of the standard apartment \(\Delta(a,b,c)\) if and only if it is antipodal to each of the rank 2 vertices of \(\Delta(a,b,c)\)._
_For \(n>3\), a rank 1 free factor \(V<F_{n}\) is a superstick of the rank 3 face \(\Delta[a,b,c]\) if and only if \(d(V,\langle a,b,c\rangle)=1\) and \(V\) is antipodal in \(\operatorname{Lk}_{-}\langle a,b,c\rangle\cong\mathcal{AF}_{3}\) to each of the rank 2 vertices of \(\Delta(a,b,c)\)._
Proof.: This follows immediately from Lemma 2.2.
**Corollary 5.13**.: _The sets of supersticks associated to standard apartments and their rank-3 faces are characteristic in \(\mathcal{AF}_{n}\)._
We need one last lemma.
**Lemma 5.14**.: _If an isometry \(\Phi\) of \(\mathcal{AF}_{n}\) fixes a standard apartment \(\Delta=\Delta(a,b,c)\) (if \(n=3\)) or rank-3 face \(\Delta=\Delta[a,b,c]\) (if \(n>3\)) and it fixes the sticks of \(\Delta\), then it also fixes all of the supersticks of \(\Delta\)._
Proof.: Consider first the superstick \(\langle abc\rangle\). As \(M_{1}=\langle a,bc\rangle<F_{n}\) is the unique factor of rank 2 adjacent to both \(\langle a\rangle\) and \(\langle bc\rangle\), it must be fixed by \(\Phi\). Likewise \(M_{2}=\langle c,ab\rangle\) must be fixed. The unique rank-1 factor adjacent to \(M_{1}\) and \(M_{2}\) is \(\langle abc\rangle=M_{1}\cap M_{2}\), so it too must be fixed by \(\Phi\). The general case is similar.
### End of the Proof of Theorem 1.1
We refer the reader to the summary of the proof given at the beginning of this section. Given an automorphism \(\Phi\) of \(\mathcal{AF}_{n}\), with \(n\geq 3\), we compose it with an element of \(\operatorname{Aut}(F_{n})\) so as to assume that it leaves a standard apartment \(\Delta=\Delta(a_{1},\dots,a_{n})\) invariant. We use Proposition 5.9 to compose \(\Phi\) with a further element of \(\operatorname{Aut}(F_{n})\) so that it fixes \(\Delta\) and all of its sticks. Lemma 5.14 then tells us that \(\Phi\) fixes the supersticks of \(\Delta\). We will be done if we can argue that this adjusted \(\Phi\) fixes every standard apartment that is Nielsen adjacent to \(\Delta\) and fixes all the sticks (and hence supersticks) of such an apartment.
Without loss of generality we may assume that the Nielsen transformation is \(\lambda:a_{1}\mapsto a_{1}a_{2}\). Consider \(\Delta_{\lambda}=\Delta(a_{1}a_{2},a_{2},\dots,a_{n})\). The first point to observe is that every rank 1 vertex of \(\Delta_{\lambda}\) is a vertex or stick of \(\Delta\), and hence is fixed by \(\Phi\). Each vertex of \(\Delta_{\lambda}\) is uniquely determined by its adjacent rank 1 vertices, so \(\Phi\) must fix the whole of \(\Delta_{\lambda}\). The second point to observe is that every stick of \(\Delta_{\lambda}\) is a vertex, stick or superstick of \(\Delta\), with the exception of the sticks at \(\Delta[a_{1},\,a_{1}a_{2}]\). And since these last sticks are distinguished from one another by the sticks of \(\Delta[a_{1},\,a_{1}a_{2},a_{3}]\) with which they form bonded triples, they too must be fixed.
## 6. \(\mathcal{OF}_{n}\) is rigid: Proof of Theorem 1.2
Our proof of Theorem 1.2 follows the same outline of proof as Theorem 1.1 but there are some additional difficulties to be overcome in the case of \(\mathcal{OF}_{n}\), particularly with regard to the recognition of standard apartments.
We will typically write \([A]\) for the conjugacy class of a free factor \(A<F_{n}\) but for rank-1 factors abbreviate \([\langle u\rangle]\) to \([u]\), and often write \([a,b]\) for rank-2 factors.
### Step 1: Distinguishing the ranks of vertices
At various stages in the proof of Theorem 1.1 we used the isomorphism \(\operatorname{Lk}(A)\cong\mathcal{AF}_{n-1}\) for vertices of rank \(n-1\) to facilitate induction arguments. Lemma 2.10 assures us that such arguments remain valid in \(\mathcal{OF}_{n}\).
The following lemma can be established by choosing \(L\) exactly as in the proof of Lemma 3.5.
**Lemma 6.1**.: _If \(A<F_{n}\) is a free factor of rank \(n-1\) and \(C\) is a free factor of rank \(1\), no conjugate of which is contained in \(A\), then there exists a free factor \(L\) of rank \(2\) with \(C<L\) such that no conjugate of \(L\) intersects \(A\) non-trivially._
**Proposition 6.2**.: _For \(n\geq 3\), every isometry of \(\mathcal{OF}_{n}\) preserves the set of vertices of rank \(i\), for \(i=1,2,\cdots,n-1\)._
Proof.: The proof is a straightforward adaptation of the proof of Proposition 3.1. To distinguish vertices of rank \(1\) or \(n-1\) from those of rank \(i\) with \(1<i<n-1\), we prove that the former are not joins, and we do this by showing that they have diameter greater than \(2\). For \(n=3\) there is nothing to prove, so we assume \(n\geq 4\) and proceed by induction. The link of a vertex of rank \(n-1\) is isomorphic to \(\mathcal{OF}_{n-1}\), which has infinite diameter (alternatively, as in Lemma 3.2, one can see easily that it has diameter at least \(3\)). For vertices of rank \(1\), Lemma 2.11 tells us that \(\operatorname{Lk}_{\mathcal{OF}_{n}}([C])\cong\operatorname{Lk}_{\mathcal{AF}_ {n}}(C)\), so the proof for \(\mathcal{AF}_{n}\) applies directly.
The argument for distinguishing vertices of rank \(n-1\) from vertices of rank \(1\) also follows the case of \(\mathcal{AF}_{n}\): the proof of Lemma 3.3 shows that for every vertex \([A]\in\mathcal{OF}_{n}\) of rank \(n-1\) there exist vertices \([C]\) of rank \(1\) such that \(d([A],[L])=1\) implies \(d([C],[L])=2\), and Lemmas 6.1 and 2.9 tell us this statement becomes false if we reverse the roles of \(A\) and \(C\).
The inductive argument in the final paragraph of Section 3 remains valid in the setting of \(\mathcal{OF}_{n}\).
### The Antipode Lemma
**Definition 6.3**.: A rank \(n-1\) vertex \([\Lambda]\in\mathcal{OF}_{n}\) and a rank \(1\) vertex \([u]\in\mathcal{OF}_{n}\) are _algebraically antipodal_ if there are factors \(\Lambda_{0}\in[\Lambda]\) and \(\langle u^{\gamma}\rangle\in[u]\) such that \(\Lambda_{0}*\langle u^{\gamma}\rangle=F_{n}\). We write \([\Lambda]\perp[u]\).
\([\Lambda]\) and \([u]\) are _metrically antipodal_ in \(\mathcal{OF}_{n}\) if \(d([u],[L])=2\) for all free factors \(L\) with \(d([\Lambda],[L])=1\).
**Theorem 6.4** (The Antipode Lemma).: \([\Lambda]\perp[u]\) _if and only if \([\Lambda]\) and \([u]\) are metrically antipodal._
Proof.: As was the case for \(\mathcal{AF}_{n}\), it is easy to see that if \([\Lambda]\perp[u]\) then \([\Lambda]\) and \([u]\) are metrically antipodal, and it is obvious that if \(u\) is conjugate into \(\Lambda\)
then \([\Lambda]\) and \([u]\) are not metrically antipodal. So what we must argue is that if no conjugate of \(u\) is contained in \(\Lambda\) and no conjugate of \(u\) is antipodal to \(\Lambda\), then there is a free factor \(L<\Lambda\) such that \(d([\langle u\rangle],[L])>2\). This is what we proved in the second paragraph of the proof of Theorem 4.5.
### Step 2: Recognising Standard Apartments
The reader should compare the following definition to Definition 4.6. The more cumbersome definition here reflects the fact that in \(\mathcal{OF}_{n}\)_apartments are not uniquely determined by their rank 1 vertices_. This will cause us considerable difficulty, as will the fact that standard apartments are difficult to characterise using the Antipode Lemma alone; see Example 6.9 and Section 7.
**Definition 6.5**.: An _apartment_ in \(\mathcal{OF}_{n}\) is the image of a simplicial embedding \(\sigma:\partial\Delta_{n-1}\hookrightarrow\mathcal{OF}_{n}\) such that \(\operatorname{rank}(\sigma(S))=|S|\) for all \(S\subset\mathbf{n-1}\). The apartment is _standard_ if it is the image under \(\mathcal{AF}_{n}\to\mathcal{OF}_{n}\) of a standard apartment in \(\mathcal{AF}_{n}\). We shall maintain the notation \(\Delta(a_{1},\dots,a_{n})\) for the standard apartment associated to the basis \(\{a_{1},\dots,a_{n}\}\) and the notation \(\Delta[T]\) for its faces; if \(|T|=k+1\) then \(\Delta[T]\) is a _standard rank-\(k\) face_.
**Definition 6.6** (Sticks, supersticks, bonded triples).: We define the sticks, supersticks and bonded triples for standard faces in \(\mathcal{OF}_{n}\) to be the images of the sticks, supersticks and bonded triples in \(\mathcal{AF}_{n}\). For a standard apartment \(\Delta=\Delta(b_{1},\dots,b_{n})\), the _sticks of \(\Delta\) at the rank-2 face_\(\Delta[b_{i},b_{j}]\) are the rank 1 vertices of the form \([b_{i}^{\epsilon}b_{j}^{\delta}]\) (of which there are only two, because \(b_{i}^{\epsilon}b_{j}^{\delta}\) and \(b_{j}^{\delta}b_{i}^{\epsilon}\) are conjugate and \([x]=[x^{-1}]\)).
**Remark 6.7**.: (1) Considerable care is needed with this definition: the _"sticks of \(\Delta\) at the face \(\Delta[b_{i},b_{j}]\)"_ depend on \(\Delta\) and not just \(\Delta[b_{i},b_{j}]\) and its neighbours \([b_{i}],[b_{j}]\). Indeed, if one drops the reference to \(\Delta\) then there are infinitely many sticks at \(\Delta[b_{i},b_{j}]\). To see this note, for example, that for any \(u\in\langle a_{1},a_{2}\rangle\), the triple \([ua_{1}u^{-1},a_{2}]\), \([ua_{1}u^{-1}]\), \([a_{2}]\) is identical to \([a_{1},a_{2}]\), \([a_{1}]\), \([a_{2}]\), but the sticks of \(\Delta(ua_{1}u^{-1},a_{2},\dots,a_{n})\) at \([ua_{1}u^{-1},a_{2}]=[a_{1},a_{2}]\) are \([ua_{1}u^{-1}a_{2}]\) and \([ua_{1}^{-1}u^{-1}a_{2}]\), whereas the sticks of \(\Delta(a_{1},a_{2},\dots,a_{n})\) at \([ua_{1}u^{-1},a_{2}]=[a_{1},a_{2}]\) are \([a_{1}a_{2}]\) and \([a_{1}^{-1}a_{2}]\).
(2) As \(\Delta=\Delta(b_{1},\dots,b_{n})\) has only two sticks at \(\Delta[b_{i},b_{j}]\), it has \(2\binom{n}{2}\) sticks in total. There are 8 supersticks associated to each standard apartment (if \(n=3\)) or rank 3 face (if \(n>3\)).
(3) It is no longer useful to discuss which pairs of sticks are bonded, because any pair of sticks associated to a rank 3 face will be bonded, but it remains true and useful that any two sticks in a bonded triple uniquely define the third.
Passing to conjugacy classes \(A\mapsto[A]\) preserves the relation of being algebraically antipodal, so sticks of an apartment remain antipodal to the barycentres of opposite faces. But at this stage we do not have a metric characterisation of sticks (as in Lemma 5.2) because we do not yet know that isometries of \(\mathcal{O}\mathcal{F}_{n}\) take standard apartments to standard apartments.
**Example 6.8**.: \(\Delta(a,b,c)\) has four bonded triples (snops) in \(\mathcal{O}\mathcal{F}_{3}\)
* \(ab,b^{-1}c,ac,\)
* \(ab,bc,a^{-1}c,\)
* \(a^{-1}b,bc,ac,\)
* \(a^{-1}b,b^{-1}c,a^{-1}c.\)
For any pair of sticks chosen from two rank 2 faces, there is a unique stick at the third face that forms a bonded triple (snop) with that pair.
The eight supersticks of \(\Delta(a,b,c)\) in \(\mathcal{O}\mathcal{F}_{3}\) are
* \(abc,\ abc^{-1},\ ab^{-1}c,\ ab^{-1}c^{-1},\ acb,\ acb^{-1},\ ac^{-1}b,\ ac^{-1}b^{-1}.\)
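This is consistent with Remark 6.7(2): the \(24\) supersticks of \(\Delta(a,b,c)\) in \(\mathcal{AF}_{3}\) fall into conjugacy classes each containing three of them (a cyclically reduced word of length \(3\) has three rotations), so in \(\mathcal{O}\mathcal{F}_{3}\) there are
\[24/3=8\]
superstick classes, as listed.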
**Example 6.9**.: We describe an example of a fake (i.e. non-standard) apartment of \(\mathcal{O}\mathcal{F}_{3}\) in which all pairs of opposite vertices are antipodal.
Starting with the standard apartment \(\Delta(a,b,c)\), we replace \([a,b]\) by \([a,\gamma b\gamma^{-1}]\) with \(\gamma=baca^{-1}\) to obtain the apartment \(\Delta^{\prime}\). The graph \(\operatorname{core}\langle a,\gamma b\gamma^{-1}\rangle\) consists of two loops labeled \(a,b\) joined by an arc labeled \(baca^{-1}\). To see that \([a,\gamma b\gamma^{-1}]\) is antipodal to \([c]\), we glue a loop labeled \(c\) to one of the endpoints of the edge of \(\operatorname{core}\langle a,\gamma b\gamma^{-1}\rangle\) labeled \(c\) and fold to obtain the rose \(R_{3}\). To see that the apartment \(\Delta^{\prime}\) is fake, observe that it has no sticks at \([a,\gamma b\gamma^{-1}]\): more precisely, there are no rank 1 factors \([u]\) adjacent to \([a,\gamma b\gamma^{-1}]\) that are antipodal to both \([a,c]\) and \([b,c]\). Indeed, any cyclically reduced word in the conjugacy class \([u]\) must label a loop in \(\operatorname{core}\langle a,\gamma b\gamma^{-1}\rangle\) that is not \(a^{\pm 1}\) or \(b^{\pm 1}\), and the label on any such loop contains more than one occurrence of both \(a\) and \(b\), so is not antipodal to \([a,c]\) or \([b,c]\).
Figure 4. The apartment \(\Delta(a,b,c)\) in \(\mathcal{O}\mathcal{F}_{3}\) with its 6 sticks.
Fortunately, the problem identified in this example is the only new obstruction to recognising standard apartments in rank 3.
**Proposition 6.10**.: _Let \(\Delta\) be an apartment in \(\mathcal{O}\mathcal{F}_{3}\) and assume_
1. _opposite vertices of_ \(\Delta\) _are antipodal, and_
2. \(\Delta\) _has "a potential stick" at each rank 2 vertex, i.e. there is an adjacent rank 1 vertex that is antipodal to the other two rank 2 vertices of_ \(\Delta\)_._
_Then \(\Delta\) is a standard apartment._
Proof.: Let \(\{a,b,c\}\) be a basis for \(F_{3}\). We may assume that \(\Delta\) has opposing vertices \([a]\) and \([b,c]\). By applying an automorphism of \(F_{3}\) that fixes \(a\) and leaves \(\langle b,c\rangle\) invariant, we can assume that one of the rank 1 vertices adjacent to \([b,c]\) is \([b]\). The rank 2 vertex \(V\) between \([a]\) and \([b]\) is then \([b,\gamma a\gamma^{-1}]\) for some \(\gamma\in F_{3}\). If \([u]\) is a potential stick of \(\Delta\) at \([b,\gamma a\gamma^{-1}]\), then it is antipodal to \([b,c]\) and hence the cyclically reduced form of \(u\) contains exactly one occurrence of \(a\), by Lemma 2.2. This word labels a tight (i.e. locally-injective) loop in \(\operatorname{core}\langle b,\gamma a\gamma^{-1}\rangle\). The only tight loops with a single occurrence of \(a\) in their label, besides \(a^{\pm 1}\), are the loops labeled \(a^{\pm 1}\gamma^{-1}b^{p}\gamma\) with \(p\neq 0\), and these only qualify if there is no occurrence of \(a\) in \(\gamma\). The loops \(a^{\pm 1}\) can be excluded as potential sticks because they are not antipodal to the rank-2 vertex opposite \([b]\), since that already contains a conjugate of \(a\). Thus the existence of a potential stick at \(V\) forces \(\gamma\in\langle b,c\rangle\), and after applying the automorphism that fixes \(b\) and \(c\) and sends \(a\mapsto\gamma^{-1}a\gamma\) we may assume \(V=[a,b]\).
Consider now the rank 1 vertex of \(\Delta\) opposite \(V\); call it \([x]\). Since \([x]\) is antipodal to \([a,b]\), the cyclically reduced word conjugate to \(x\) contains exactly one \(c^{\pm 1}\), and since \(x\) is conjugate into \(\langle b,c\rangle\) we may assume (by conjugating and replacing \(x\) with \(x^{-1}\)) that \(x=b^{m}c\) for some \(m\). After applying the automorphism that fixes \(a,b\) and sends \(c\mapsto b^{-m}c\), we have \(x=c\). Then \(\Delta\) has 5 of its vertices in common with the standard apartment \(\Delta(a,b,c)\), and the last one is the conjugacy class of a factor of the form \(H=\langle a,\delta c\delta^{-1}\rangle\). The labeled graph \(\operatorname{core}(H)\) has loops \(a\) and \(c\) connected by an arc \(\delta\). Repeating the argument used to analyse \(V\), we see that \(H\) can only contain a rank 1 factor antipodal to \([b,c]\) if \(\delta\) contains no \(c\), and it can only contain a rank 1 factor antipodal to \([a,b]\) if \(\delta\) contains no \(a\). Thus \(\delta=b^{q}\), and the automorphism of \(F_{3}\) that fixes \(a\) and \(b\) and sends \(c\mapsto b^{-q}cb^{q}\) will map \(\Delta\) to the standard apartment \(\Delta(a,b,c)\).
**Corollary 6.11**.: _Isometries of \(\mathcal{O}\mathcal{F}_{3}\) take standard apartments to standard apartments._
Proof.: In Step 1 (section 6.1) we proved that isometries of \(\mathcal{O}\mathcal{F}_{3}\) preserve rank, in the Antipode Lemma we proved that they send antipodal pairs to antipodal pairs, and in Proposition 6.10 we characterised standard apartments in terms of these invariants.
The last lemma we need before concluding that isometries preserve standard apartments is the following. The fake apartments described in Section 7 illustrate the need for condition (3) in this lemma.
**Lemma 6.12**.: _Let \(n\geq 3\). An apartment \(\Delta\) in \(\mathcal{O}\mathcal{F}_{n}\) is standard if and only if it satisfies the following conditions:_
1. _Each rank_ \((n-1)\) _face of_ \(\Delta\) _is standard._
2. _Every rank 1 vertex of_ \(\Delta\) _is antipodal to the barycentre of the opposite face._
3. _Adjacent to each rank_ \((n-1)\) _vertex_ \(V\) _of_ \(\Delta\)_, there is a rank_ \(1\) _vertex that is antipodal to every rank_ \((n-1)\) _vertex of_ \(\Delta\) _other than_ \(V\)_._
Proof.: First note that standard apartments satisfy these conditions: for (3), a suitable rank 1 factor adjacent to \([a_{1},\ldots,a_{n-1}]\in\Delta(a_{1},\ldots,a_{n})\) is \([a_{1}\ldots a_{n-1}]\).
For the converse, Proposition 6.10 covers the case \(n=3\), so we suppose \(n>3\). Condition (1) lets us assume that there is a basis \(\{a_{1},\ldots,a_{n}\}\) of \(F\) such that one of the codimension-1 faces of \(\Delta\) is the standard \(\Delta[a_{1},\ldots,a_{n-1}]\). Condition (2) says that the rank 1 vertex opposite this face is \([x]\) where \(x\) is antipodal to \(\langle a_{1},\ldots,a_{n-1}\rangle\). The action of \(\operatorname{Aut}(F_{n})\) (through \(\operatorname{Out}(F_{n})\)) preserves conditions (1), (2) and (3), so we are free to apply an automorphism to ensure that \(x=a_{n}\).
Consider the codimension 1 face \(Y_{1}\) of \(\Delta\) opposite \([a_{1}]\). By condition (1), this is standard, so the barycentre of the face is \([V_{1}]\) where \(V_{1}\) is generated by \(\langle a_{2},\ldots,a_{n-1}\rangle\) and a conjugate of \(a_{n}\), say \(a_{n}^{\gamma_{1}}\). We can assume that \(\gamma_{1}\in\langle a_{1},\ldots,a_{n}\rangle\) is a word that does not end in \(a_{n}^{\pm 1}\) and (if nontrivial) starts with \(a_{n}^{\pm 1}\) - it is the label on the bridge of \(\operatorname{core}(V_{1})\) connecting the rose with petals \(a_{2},\ldots,a_{n-1}\) to the loop labeled \(a_{n}\). For \(j=2,\ldots,n-1\), the barycentre of the edge of \(Y_{1}\) joining \([a_{j}]\) to \([a_{n}]\) is \([a_{j},a_{n}^{\gamma_{1}}]\). Because of our assumptions on \(\gamma_{1}\), the core graph of \([a_{j},a_{n}^{\gamma_{1}}]\) consists of loops labeled \(a_{j}\) and \(a_{n}\) with the bridge connecting them with the label precisely \(\gamma_{1}\).
Similar considerations apply to the face \(Y_{i}\) opposite \([a_{i}]\) for \(i=2,\ldots,n-1\) and we define \(V_{i}\) and \(\gamma_{i}\) accordingly. For example, \(V_{2}=\langle a_{1},a_{3},\ldots,a_{n-1},a_{n}^{\gamma_{2}}\rangle\) and for \(j=1,3,\ldots,n-1\), the barycentre of the edge of \(Y_{2}\) joining \([a_{j}]\) to \([a_{n}]\) is \([a_{j},a_{n}^{\gamma_{2}}]\).
The edge joining \([a_{3}]\) to \([a_{n}]\) in \(Y_{1}\) is, of course, the same as the edge joining them in \(Y_{2}\), so \([a_{3},a_{n}^{\gamma_{1}}]=[a_{3},a_{n}^{\gamma_{2}}]\). Comparing core graphs, we conclude that \(\gamma_{1}=\gamma_{2}\) since both are the label on the bridge. Proceeding in this manner, we conclude that \(\gamma_{i}=\gamma_{j}\) for all \(i,j\in\{1,\ldots,n-1\}\). If this common conjugator
\(\gamma\) lies in \(\langle a_{1},\ldots,a_{n-1}\rangle\), then the automorphism that fixes \(a_{i}\) for \(i<n\) and conjugates \(a_{n}\) by \(\gamma^{-1}\) will map \(\Delta\) to the standard apartment \(\Delta(a_{1},\ldots,a_{n})\), so \(\Delta\) is standard.
To complete the proof, we argue that if \(\gamma\not\in\langle a_{1},\ldots,a_{n-1}\rangle\) then \(\Delta\) would not satisfy condition (3). If a rank 1 vertex \([u]\) is adjacent to \([V_{1}]\), there is a reduced loop in \(\operatorname{core}(V_{1})\) labeled \(u\). The key point to note is that each reduced loop in \(\operatorname{core}(V_{1})\) either lies in the rose with labels \(a_{2},\ldots,a_{n-1}\), or runs only around the loop labeled \(a_{n}\), or else traverses the bridge labeled \(\gamma\) twice. In the first case \([u]\) is not antipodal to \([a_{1},\ldots,a_{n-1}]\in\Delta\), in the second case it is not antipodal to \([V_{2}]\), and in the last case every conjugate of \(u\) contains at least 3 occurrences of the letter \(a_{n}\), so \([u]\) is not antipodal to \([a_{1},\ldots,a_{n-1}]\), by Lemma 2.2.
**Proposition 6.13**.: _For \(n\geq 3\), every isometry of \(\mathcal{O}\mathcal{F}_{n}\) takes standard apartments to standard apartments._
Proof.: Same as Corollary 6.11.
**Corollary 6.14**.: _For \(n\geq 3\), every isometry \(\Psi\) of \(\mathcal{O}\mathcal{F}_{n}\) takes the sticks of a standard apartment \(\Delta\) to the sticks of \(\Psi(\Delta)\)._
Proof.: It follows from Lemma 2.2 that the sticks of \(\Delta\) at a rank 2 face \(\Delta[a,b]\) are the unique rank 1 vertices \(V\) adjacent to \([a,b]\) with the property that for every rank 3 face \(\Delta[a,b,c]\), the vertex \(V\) is antipodal to \([a,c]\) and \([b,c]\) in \(\operatorname{Lk}_{-}([a,b,c])\cong\mathcal{O}\mathcal{F}_{3}\). And \(\Psi\) transports this condition to the sticks of the standard apartment \(\Psi(\Delta)\).
Similarly, following Lemma 5.12 we have:
**Corollary 6.15**.: _For \(n\geq 3\), every isometry \(\Psi\) of \(\mathcal{O}\mathcal{F}_{n}\) takes the supersticks of a standard apartment \(\Delta\) to the supersticks of \(\Psi(\Delta)\)._
### The endgame
The sum of our previous results tells us that for \(n\geq 3\), every isometry of \(\mathcal{O}\mathcal{F}_{n}\) maps standard apartments to standard apartments, respecting their sets of sticks, supersticks and bonded triples (triples of sticks contained in a common factor of rank 2). We shall deduce Theorem 1.2 by following the final steps in the proof of Theorem 1.1; only minor adjustments are needed, except for the issue resolved in Lemma 6.19.
It will be convenient to consider the action of \(\operatorname{Aut}(F_{n})\) on \(\mathcal{O}\mathcal{F}_{n}\) (with the inner automorphisms acting trivially), as the subgroups \(\operatorname{Aut}(F_{n-1})\hookrightarrow\operatorname{Aut}(F_{n})\) fixing basis elements appear in the proof. The pointwise stabilizer in \(\operatorname{Out}(F_{n})\) of the standard apartment \(\Delta(a_{1},\ldots,a_{n})\) is \((\mathbb{Z}/2)^{n}=\langle\varepsilon_{1},\ldots,\varepsilon_{n}\rangle\) where \(\varepsilon_{i}\) is the involution that sends \(a_{i}\) to \(a_{i}^{-1}\) and fixes \(a_{j\neq i}\). The diagonal element \(\iota=\varepsilon_{1}\ldots\varepsilon_{n}\) will play a special role, related to the following observation.
**Lemma 6.16**.: \(\iota\) _acts trivially on the set of sticks associated to the standard apartment \(\Delta(a_{1},\dots,a_{n})\) in \(\mathcal{O}\mathcal{F}_{n}\), but it acts without fixed points on the set of supersticks._
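One way to see this is to compute on representatives, using \([x]=[x^{-1}]\) and the fact that cyclic rotations of a word are conjugate:
\[\iota(a_{i}a_{j})=a_{i}^{-1}a_{j}^{-1}=(a_{j}a_{i})^{-1}\quad\text{and}\quad\iota(a_{i}^{-1}a_{j})=a_{i}a_{j}^{-1}=(a_{j}a_{i}^{-1})^{-1},\]
and \(a_{j}a_{i}\) is conjugate to \(a_{i}a_{j}\) (likewise \(a_{j}a_{i}^{-1}\) to \(a_{i}^{-1}a_{j}\)), so \([\iota(x)]=[x^{-1}]=[x]\) for every stick \([x]\). On the other hand \(\iota(a_{i}^{\delta_{i}}a_{j}^{\delta_{j}}a_{k}^{\delta_{k}})=(a_{k}^{\delta_{k}}a_{j}^{\delta_{j}}a_{i}^{\delta_{i}})^{-1}\), and reversing the cyclic order of three distinct letters is never a cyclic rotation, so no superstick class is fixed.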
**Proposition 6.17**.: _If \(\Phi\in\operatorname{Isom}(\mathcal{O}\mathcal{F}_{n})\) fixes \(\Delta=\Delta(a_{1},\dots,a_{n})\), then there exists \(\theta\in\langle\varepsilon_{1},\dots,\varepsilon_{n}\rangle\) such that \(\theta\circ\Phi\) fixes \(\Delta\) and all of its sticks._
The inductive proof of Proposition 5.9 applies verbatim to this proposition (replacing \(\mathcal{A}\mathcal{F}_{n-1}\) with \(\mathcal{O}\mathcal{F}_{n-1}\)) once we have the following analogue of Lemma 5.10 in hand.
**Lemma 6.18**.: _If an isometry \(\Phi\) of \(\mathcal{O}\mathcal{F}_{3}\) fixes the standard apartment \(\Delta=\Delta(a,b,c)\) and the sticks at \(\Delta[a,b]\), then exactly one of \(\{\Phi,\varepsilon_{c}\circ\Phi\}\) fixes \(\Delta\) and all of its sticks._
Proof.: If \(\Phi\) exchanges the two sticks at \([b,c]\) then we compose with \(\varepsilon_{c}\) so that it fixes them. It must then fix the sticks at \([a,c]\), because they are contained in bonded triples where the other two sticks are fixed, and each pair of sticks in a triple uniquely determines the third stick (see Example 6.8).
At this stage in the proof of Theorem 1.1 we argued (Lemma 5.14) that if an isometry \(\Phi\) of \(\mathcal{A}\mathcal{F}_{n}\) fixes a standard apartment \(\Delta\) and its sticks, then it also fixes all of the supersticks of that apartment. This is not true in the case of \(\mathcal{O}\mathcal{F}_{n}\); it has to be adjusted as follows.
Note that since \(\iota\) acts freely on the supersticks of \(\Delta(a_{1},\dots,a_{n})\), the word "one" in the following statement means "exactly one".
**Lemma 6.19**.: _For \(n\geq 3\), if an isometry \(\Phi\) of \(\mathcal{O}\mathcal{F}_{n}\) fixes the standard apartment \(\Delta(a_{1},\dots,a_{n})\) and its sticks, then one of \(\Phi\) and \(\iota\circ\Phi\) fixes the apartment, its sticks and its supersticks._
Proof.: The vertices \([M_{i}]\) and \([M^{\prime}_{i}]\) appearing in this proof should be regarded as _midpoints_ between the rank 1 vertices of \(\Delta=\Delta(a_{1},\dots,a_{n})\) and the sticks of \(\Delta\); these midpoints come in pairs.
Consider first the superstick \([a_{1}a_{2}a_{3}]\). The rank 2 vertices \([M]\) adjacent to both \([a_{1}a_{2}]\) and \([a_{3}]\) in \(\mathcal{O}\mathcal{F}_{n}\) are of the form \([a_{1}a_{2},a_{3}^{\gamma}]\) or \([a_{2}a_{1},a_{3}^{\gamma}]\), where \(\gamma\) is the label on the arc in \(\operatorname{core}(M)\) connecting the loop labeled \(a_{1}a_{2}\) or \(a_{2}a_{1}\) to the loop labeled \(a_{3}\). The key point to observe is that if \(\gamma\neq 1\) then \(\operatorname{core}(M)\) does not contain a loop labeled by a superstick \(a_{i}^{\delta_{1}}a_{j}^{\delta_{2}}a_{k}^{\delta_{3}}\) with \(|\delta_{1}|=|\delta_{2}|=|\delta_{3}|=1\). Thus the only rank 2 vertices \([M]\in\mathcal{O}\mathcal{F}_{n}\) at distance 1 from \([a_{1}a_{2}]\) and \([a_{3}]\) and a superstick of \(\Delta\) are \([M_{1}]:=[a_{1}a_{2},a_{3}]\) and \(M^{\prime}_{1}:=[a_{2}a_{1},a_{3}]\). Likewise, the only rank 2 vertices \([M]\in\mathcal{O}\mathcal{F}_{n}\) at distance 1 from \([a_{2}a_{3}]\) and \([a_{1}]\) and a superstick of \(\Delta\) are \([M_{2}]:=[a_{2}a_{3},a_{1}]\) and \(M^{\prime}_{2}:=[a_{3}a_{2},a_{1}]\).
The two supersticks carried by \(M_{1}\) are \([a_{1}a_{2}a_{3}]\) and \([a_{1}a_{2}a_{3}^{-1}]\), while \(M^{\prime}_{1}\) carries \([a_{1}a_{3}^{\pm 1}a_{2}]\) and \(M_{2}\) carries \([a_{1}(a_{2}a_{3})^{\pm 1}]\) and \(M^{\prime}_{2}\) carries \([a_{1}(a_{3}a_{2})^{\pm 1}]\). Thus
\(M_{1}\) and \(M_{2}\) have a single superstick in common, as do \(M_{1}^{\prime}\) and \(M_{2}^{\prime}\), and no other combination does.
As \(\Phi\) fixes \([a_{1}a_{2}]\) and \([a_{3}]\) and takes supersticks to supersticks, it must fix both of \(M_{1}\) and \(M_{1}^{\prime}\) or interchange them. Likewise it must fix both of \(M_{2}\) and \(M_{2}^{\prime}\) or interchange them. And if it interchanges \(M_{1}\) and \(M_{1}^{\prime}\) then it must also interchange \(M_{2}\) and \(M_{2}^{\prime}\), since \(M_{1}\) and \(M_{2}\) have a superstick in common, whereas \(M_{1}\) and \(M_{2}^{\prime}\) do not. The action of \(\iota\) fixes \(\Delta\) and its sticks while interchanging \(M_{1}\) and \(M_{1}^{\prime}\) and interchanging \(M_{2}\) and \(M_{2}^{\prime}\). So by composing with \(\iota\) if necessary, we may assume that \(\Phi\) fixes each of \(M_{1},M_{1}^{\prime},M_{2},M_{2}^{\prime}\). It must then also fix the common supersticks that pairs of these factors support, and the remaining supersticks that they carry must then also be fixed. Thus \(\Phi\) (possibly adjusted by \(\iota\)) must fix all six of the supersticks listed above. The remaining supersticks of \(\Delta[a_{1},a_{2},a_{3}]\) are \([a_{1}a_{2}^{-1}a_{3}]\) and \([a_{1}a_{2}^{-1}a_{3}^{-1}]\). These too must be fixed because the latter is supported in common with \([a_{1}a_{3}a_{2}]\) on a midpoint graph between \([a_{1}]\) and \([a_{3}a_{2}]\), whereas the former is not.
At this point we are done in the case \(n=3\), but to complete the proof of the lemma in the general case we must argue that because \(\Phi\) fixes the supersticks associated to one rank-3 face, it fixes the supersticks on all rank 3 faces. The argument given above shows that \(\Phi\) either fixes all or none of the supersticks at a rank 3 face, so it will be enough to prove that \(\Phi\) fixes one of the supersticks at an adjacent face; we focus on \([a_{1}a_{2}a_{4}]\).
Observe that \(V=[a_{1}a_{2},a_{3},a_{4}]\) is the unique rank-3 vertex adjacent to \(M_{1}\), \([a_{3}]\), \([a_{4}]\), \([a_{3}a_{4}]\), all of which we know to be fixed by \(\Phi\), and \(\operatorname{core}(V)\) is the wedge of loops labeled \(a_{3},a_{4},a_{1}a_{2}\). The only superstick of \(\Delta[a_{1},a_{2},a_{4}]\) carried by this graph is \([a_{1}a_{2}a_{4}]\); in other words this is the only such superstick that is a distance 1 from \(V\). Thus the isometry \(\Phi\) must fix \([a_{1}a_{2}a_{4}]\).
The proof of the following observation is contained in the preceding proof.
**Addendum 6.20**.: _If \(\Phi\) fixes \(\Delta(a_{1},\dots,a_{n})\), all of its sticks, and all of its supersticks, then, for all distinct triples \(i,j,k\in\{1,\dots,n\}\), it also fixes each of the rank 2 vertices \([M]\) adjacent to both \([a_{i}]\) and \([a_{j}a_{k}]\)._
We need one last lemma.
**Lemma 6.21**.: _Let \(\Delta_{1}=\Delta(b_{1},b_{2},\cdots,b_{n})\) be a standard apartment that contains all the vertices of \(\Delta_{0}=\Delta(a_{1},a_{2},\cdots,a_{n})\) except for \(\langle a_{2},a_{3},\cdots,a_{n}\rangle\)._
1. _If_ \(n>3\) _then_ \(\Delta_{1}=\Delta_{0}\)_._
2. _If_ \(n=3\) _then_ \(\Delta_{1}=\Delta(a_{1},a_{2}^{a_{1}^{k}},a_{3})\) _for some_ \(k\in\mathbb{Z}\)_._
Proof.: The rank \(n-1\) factor \(V\) that \(\Delta_{1}\) has in place of \(\langle a_{2},\cdots,a_{n}\rangle\) contains, up to conjugation, both \(\langle a_{2},\cdots,a_{n-1}\rangle\) and \(\langle a_{3},\cdots,a_{n}\rangle\). For \(n>3\) this implies that \(V\) is conjugate to \(\langle a_{2},\cdots,a_{n}\rangle\), by Lemma 2.6.
If \(n=3\) then \(V\) must have the form \(V=\langle a_{2}^{\gamma},a_{3}\rangle\), whose core graph has two loops, labeled \(a_{2}\) and \(a_{3}\) connected by an arc labeled \(\gamma\). Arguing with the existence of sticks (as in Example 6.9) we see that \(\gamma\) must be a power of \(a_{1}\).
End of the Proof of Theorem 1.2.: Given an automorphism \(\Phi\) of \(\mathcal{O}\mathcal{F}_{n}\), with \(n\geq 3\), we compose it with an element of \(\operatorname{Aut}(F_{n})\) so as to assume that it leaves a standard apartment \(\Delta=\Delta(a_{1},\dots,a_{n})\) invariant. We use Proposition 6.17 to compose \(\Phi\) with a further element of \(\operatorname{Aut}(F_{n})\) so that it fixes \(\Delta\) and all of its sticks. Lemma 6.19 then tells us that, after composing with \(\iota\) if necessary, \(\Phi\) fixes the supersticks of \(\Delta\). We will be done if we can argue that this adjusted \(\Phi\) fixes every standard apartment that is Nielsen adjacent to \(\Delta\) and fixes all the sticks and supersticks of such an apartment.
Without loss of generality we may assume that the Nielsen transformation is \(a_{1}\mapsto a_{1}a_{2}\). Consider \(\Delta_{\lambda}=\Delta(a_{1}a_{2},a_{2},\dots,a_{n})\). Every rank 1 vertex of \(\Delta_{\lambda}\) is a vertex or stick of \(\Delta\), and all of the faces that do not include the vertex \([a_{1}a_{2}]\) are fixed as they lie in \(\Delta\). Proceeding by induction on the rank we may assume that every vertex except \(V=\langle a_{1}a_{2},a_{3},\cdots,a_{n}\rangle\) is fixed. It then follows from Lemma 6.21 that \(V\) is also fixed. All the sticks of this apartment except for one (namely \([a_{1}a_{2}^{2}]\)) are either vertices, sticks or supersticks of \(\Delta\), so all of the sticks are fixed. It follows from Lemma 6.19 that \(\Phi\) fixes all of the supersticks of \(\Delta_{\lambda}\) or none of them (because \(\iota\) acts without fixed points on the set of supersticks). But there is one that we know it does fix, namely \([a_{1}a_{2}^{2}a_{3}]\), because this is the only superstick of \(\Delta_{\lambda}\) that is carried by the rank 2 vertex \([a_{2},a_{1}a_{3}]\), and this is one of the midpoint vertices \([M]\) that Addendum 6.20 tells us is fixed by \(\Phi\). This completes the proof.
## 7. Fakery in every rank
In this section we underscore the subtlety of recognising standard apartments by describing a family of fake apartments in \(\mathcal{A}\mathcal{F}_{n}\) and \(\mathcal{O}\mathcal{F}_{n}\). This family shows that there exist fake apartments in \(\mathcal{O}\mathcal{F}_{n}\) with the property that each of their rank \((n-1)\) faces is standard and each of their rank 1 vertices is antipodal to the barycentre of the opposite face.
We consider a family of rank \(n\) subgroups \(H<F_{n}\) for which \(\operatorname{core}_{*}(H)\) is obtained from the rose for \(\langle a_{1},\dots,a_{n-1}\rangle\) by connecting it to a loop labelled \(a_{n}\) with a bridge labelled by a word of a particular form. The words that we want are defined recursively:
\[W_{0}:=a_{n}\text{ and }\ W_{k+1}:=W_{k}a_{k+1}W_{k}^{-1}.\]
For example, \(W_{2}=(a_{n}a_{1}a_{n}^{-1})\,a_{2}(a_{n}a_{1}^{-1}a_{n}^{-1})\). Define
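Unwinding the recursion one step further (each step doubles the length and adds one letter, so \(|W_{k}|=2^{k+1}-1\) as a reduced word):
\[W_{3}=W_{2}\,a_{3}\,W_{2}^{-1}=(a_{n}a_{1}a_{n}^{-1}a_{2}a_{n}a_{1}^{-1}a_{n}^{-1})\,a_{3}\,(a_{n}a_{1}a_{n}^{-1}a_{2}^{-1}a_{n}a_{1}^{-1}a_{n}^{-1}).\]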
\[H:=\langle a_{1},\dots,a_{n-1},W_{n-1}a_{n}W_{n-1}^{-1}\rangle.\]
**Lemma 7.1**.: _For \(j\leq n-1\), the subgroup \(V_{j}<F_{n}\) generated by \(W_{n-1}a_{n}W_{n-1}^{-1}\) and \(A_{j}=\{a_{i}\mid i\leq n-1,\,i\neq j\}\) is a rank \((n-1)\) free factor antipodal to \(\langle W_{j-1}a_{j}W_{j-1}^{-1}\rangle\)._
Proof.: We shall refer to the arc of \(\operatorname{core}_{*}(V_{j})\) joining the basepoint \(*\) to the loop labeled \(a_{n}\) as the _bridge_; it is labeled \(W_{n-1}\). Let \(p\) be the vertex on the bridge that is the terminus of the path from \(*\) labeled by the prefix \(W_{j-1}\prec W_{n-1}\). We attach the lollipop \(\operatorname{core}_{*}(W_{j-1}a_{j}W_{j-1}^{-1})\) to \(\operatorname{core}_{*}(V_{j})\) at \(*\) and start folding. The stalk of the lollipop folds entirely into \(\operatorname{core}_{*}(V_{j})\), at which point we have the graph obtained from \(\operatorname{core}_{*}(V_{j})\) by attaching a loop labeled \(a_{j}\) at \(p\). The edge \(e\) immediately beyond \(p\) then folds around this loop and the folding continues as the arc beyond \(e\) that is labeled \(W_{j-1}^{-1}\) folds into the section of the bridge joining \(p\) to \(*\) - at this point the folded graph is the wedge of the rose for \(A_{j}\) and two lollipops, one with stalk labeled \(W_{j-1}\) and petal \(a_{j}\), and one with stalk \(a_{j+1}W_{j}^{-1}a_{j+2}W_{j}\dots\) and petal \(a_{n}\). The initial edge on the stalk of the second lollipop folds into the rose, then the arc labeled \(W_{j}^{-1}\) traces around the first lollipop, then the edge labeled \(a_{j+2}\) folds into the rose, _etc._
The folding continues until the entire stalk of the second lollipop has folded into the wedge of the rose and the first lollipop. At this stage, the loop labeled \(a_{n}\) shares its vertex with the rose for \(A_{j}\), and the stalk of the first lollipop folds into the rank-\((n-1)\) rose that they form. Thus we obtain the rose \(R_{n}\).
**Proposition 7.2**.: _The proper subsets of \(\{a_{1},\dots,a_{n-1},\,W_{n-1}a_{n}W_{n-1}^{-1}\}\) generate free factors of \(F_{n}\), and the subcomplex \(\Delta<\mathcal{AF}_{n}\) that they span is an apartment with the following properties:_
1. _every codimension-1 face is standard;_
2. _the apartment is fake._
_The image of \(\Delta\) in \(\mathcal{OF}_{n}\) is also fake, and_
3. _the barycentre of each codimension-1 face is antipodal to the rank 1 factor opposite it._
Proof.: Lemma 7.1 assures us that each subset of cardinality \(k<n\) generates a free factor of rank \(k\), so \(\Delta\) is indeed an apartment and (1) holds. The lemma also tells us that, in \(\mathcal{OF}_{n}\), the codimension-1 face opposite the vertex \([a_{j}]\) is antipodal to \([a_{j}]=[W_{j-1}a_{j}W_{j-1}^{-1}]\), so (3) holds.
In a standard apartment of \(\mathcal{OF}_{n}\), if \([A],[B]\) are the barycentres of distinct codimension-1 faces and \(A,B\) are representatives with \(A\cap B\neq 1\), then \(A\cup B\) will generate \(F_{n}\). But in the image of \(\Delta\), such representatives will only generate \(H\neq F_{n}\), so the apartment is fake. |
2310.18990 | Studying the production mechanisms of light meson resonances in two-pion
photoproduction: a Regge approach | A calculation of the angular moments of two-pion photoproduction is
presented. The underlying theoretical model encodes the prominent $\rho(770)$
resonance and the expected leading background contribution coming from the Deck
mechanism. The model contains a number of free parameters which are fit to
experimental data. A good description of the angular moments is obtained. | Łukasz Bibrzycki, Nadine Hammoud, Vincent Mathieu, Robert J. Perry, Adam P. Szczepaniak | 2023-10-29T12:17:29Z | http://arxiv.org/abs/2310.18990v1 | Studying the production mechanisms of light meson resonances in two-pion photoproduction: a Regge approach
###### Abstract
A calculation of the angular moments of two-pion photoproduction is presented. The underlying theoretical model encodes the prominent \(\rho(770)\) resonance and the expected leading background contribution coming from the Deck mechanism. The model contains a number of free parameters which are fit to experimental data. A good description of the angular moments is obtained.
## 1 Introduction
This proceedings focuses on a theoretical determination of the angular moments of two (charged) pion photoproduction at low invariant mass of the \(\pi^{+}\pi^{-}\) system. The angular moments are well-defined experimental observables which are bilinear in the partial waves of the two-pion subsystem. They are thus a key spectroscopic observable of interest. After presenting the theoretical model for two-pion photoproduction, the angular moments are computed, and compared with experimental results.
## 2 Kinematics and Preliminaries
In this paper a theoretical model for the process \(\gamma(q,\lambda_{\gamma})+p(p_{1},\lambda_{1})\to\pi^{+}(k_{1})+\pi^{-}(k_{2})+p(p_{2},\lambda_{2})\) is described. The calculation is performed in the helicity frame. In this frame, the \(\pi^{+}\pi^{-}\) system is at rest (\({\bf k}_{1}=-{\bf k}_{2}\)) and the recoiling proton (\({\bf p}_{2}\)) defines the negative \(z\)-axis. The \(\pi^{+}\) recoils at an angle \(\Omega^{\rm H}=(\theta^{\rm H},\phi^{\rm H})\) to the \(z\) axis. The scalar amplitudes in such a \(2\to 3\) process are fully described by five independent kinematical variables. Thus in addition to the two angles for the \(\pi^{+}\), the kinematic invariants \(s=(p_{1}+q)^{2}\), \(t=(p_{1}-p_{2})^{2}\), and \(s_{12}=(k_{1}+k_{2})^{2}=m_{12}^{2}\) are used. This set of kinematic variables, \((s,t,s_{12},\Omega^{\rm H})\), is complete, in the sense that all other kinematic invariants may be computed by knowing these five. The notation for the phase-space and intensity is taken from Ref. [1]. The differential cross section is defined as
\[\frac{d\sigma}{dtdm_{12}d\Omega^{\rm H}}=\kappa\sum_{\lambda_{1}\lambda_{2} \lambda_{\gamma}}|{\mathcal{M}}_{\lambda_{1}\lambda_{2}\lambda_{\gamma}}(s,t, s_{12},\Omega^{\rm H})|^{2}\,, \tag{1}\]
where \(\kappa\) contains the kinematical factors and \({\mathcal{M}}_{\lambda_{1}\lambda_{2}\lambda\gamma}(s,t,s_{12},\Omega^{\rm H})\) is the invariant matrix element. This paper will focus on a description of the unpolarized angular moments. These may be obtained from Eq. (1) via
\[\langle Y_{LM}\rangle\,(s,t,s_{12})=\sqrt{4\pi}\int d\Omega^{\rm H}\,\frac{d \sigma}{dtdm_{12}d\Omega^{\rm H}}{\rm Re}\,Y_{LM}(\Omega^{\rm H})\,. \tag{2}\]
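Since \(\sqrt{4\pi}\,Y_{00}=1\), the lowest moment in Eq. (2) is simply the intensity integrated over the pion angles,
\[\langle Y_{00}\rangle\,(s,t,s_{12})=\int d\Omega^{\rm H}\,\frac{d\sigma}{dtdm_{12}d\Omega^{\rm H}}=\frac{d\sigma}{dtdm_{12}}\,,\]
so the higher moments describe the shape of the \(\pi^{+}\pi^{-}\) angular distribution relative to this overall rate.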
## 3 Description of Model
The global features of the angular moments, \(\langle Y_{LM}\rangle\), measured in two-pion photoproduction can be understood as the result of two competing mechanisms which may be separated based on their effect on the two-pion invariant mass distribution. In particular, the model incorporates both explicit low-energy two-pion resonances and the leading continuum contribution from the "Deck" or "Drell-Söding" mechanism [2, 3, 4]. These contributions are represented diagrammatically in Fig. 1. In the following subsections, these two mechanisms are described in more detail.
### Resonant Model
The resonant model is constructed by assuming that two-pion resonances are produced via Reggeon-photon fusion, which subsequently decay into two
Figure 1: Dominant contributions to two-pion photoproduction at small momentum transfer. With this approximation, it is possible to relate this \(2\to 3\) process to \(2\to 2\) processes.
pions. Symbolically, one may write
\[\mathcal{M}_{\lambda_{1}\lambda_{2}\lambda_{\gamma}}(s,t,s_{12},\Omega^{\rm H})= \sum_{E}\Gamma^{N\to EN}_{\lambda_{1}\lambda_{2}}(t)R_{E}(s,t)i\mathcal{M}^{ \gamma E\rightarrow\pi^{+}\pi^{-}}_{\lambda_{\gamma}}(s_{12},\Omega^{\rm H}) \tag{3}\]
where \(\Gamma^{N\to EN}_{\lambda_{1}\lambda_{2}}\) describes the nucleon-reggeon vertex, \(R_{E}(s,t)\) is a Regge propagator and \(i\mathcal{M}^{\gamma E\rightarrow\pi^{+}\pi^{-}}_{\lambda_{\gamma}}\) is the \(\gamma E\rightarrow\pi^{+}\pi^{-}\) scattering amplitude. The Regge propagator is
\[R_{E}(s,t)=\frac{\alpha^{E}(t)}{\alpha^{E}(0)}\frac{1+e^{-i\pi\alpha^{E}(t)}}{ \sin\pi\alpha^{E}(t)}\biggl{(}\frac{s}{s_{0}}\biggr{)}^{\alpha^{E}(t)} \tag{4}\]
where \(\alpha^{E}(t)=\alpha_{0}^{E}+\alpha_{1}^{E}t\) is the Regge trajectory, and \(s_{0}=1~{}{\rm GeV}^{2}\) is a mass scale introduced to fix the dimensions. In this study only \(\mathbb{P}\), \(f_{2}\) and \(\rho/\omega\) exchanges are considered. The \(t\)-dependence of the vertices is not predicted from Regge theory and here is obtained from an effective Lagrangian approach. Residual \(t\)-dependence can be absorbed into the fitted parameters. Resonances are described by energy-dependent Breit-Wigner distributions following the formalism in Ref. [5].
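To make the role of Eq. (4) concrete, here is a minimal numerical sketch (the function name, sample kinematics and trajectory values are ours; the soft-Pomeron-like \(\alpha(t)\approx 1.08+0.25\,t\) is only a placeholder and is not the fitted trajectory of this analysis):

```python
import numpy as np

def regge_propagator(s, t, alpha0, alpha1, s0=1.0):
    """Sketch of the Regge propagator in Eq. (4) for a linear trajectory
    alpha(t) = alpha0 + alpha1 * t.  s, t and s0 are in GeV^2; the result is complex.
    Note that sin(pi*alpha(t)) vanishes at integer alpha(t), where the expression has poles."""
    alpha_t = alpha0 + alpha1 * t
    signature_factor = (1.0 + np.exp(-1j * np.pi * alpha_t)) / np.sin(np.pi * alpha_t)
    return (alpha_t / alpha0) * signature_factor * (s / s0) ** alpha_t

# Purely illustrative trajectory values (NOT the fitted parameters of this work):
R = regge_propagator(s=20.0, t=-0.45, alpha0=1.08, alpha1=0.25)
print(abs(R), np.angle(R))
```

The factor \((1+e^{-i\pi\alpha(t)})/\sin\pi\alpha(t)\) makes the propagator complex, so each exchange contributes both a magnitude and a phase to the amplitude.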
The CLAS database [6] contains experimental measurements of the two-pion angular moments from near threshold, \(m_{12}\sim 0.4~{}{\rm GeV}\), to \(m_{12}\sim 1.4~{}{\rm GeV}\). Clear evidence for the presence of meson resonances may be observed by studying the structures of the angular moments. In this proceedings, only resonances with a mass of less than \(1~{}{\rm GeV}\) will be considered. In this mass region, the PDG lists two \(S\)-wave resonances, the \(\sigma(500)\) and \(f_{0}(980)\) and one \(P\)-wave resonance, the \(\rho(770)\).
### Continuum Model
The Deck mechanism forms the continuum contribution to the model. For the reaction considered, it is given by
\[\begin{split}\mathcal{M}^{\rm Deck,GI}_{\lambda_{1}\lambda_{2} \lambda_{\gamma}}&=\sqrt{4\pi\alpha}\biggl{[}\biggl{(}\frac{ \epsilon(q,\lambda_{\gamma})\cdot k_{1}}{q\cdot k_{1}}-\frac{\epsilon(q, \lambda_{\gamma})\cdot(p_{1}+p_{2})}{q\cdot(p_{1}+p_{2})}\biggr{)}\beta(u_{1} )M^{-}_{\lambda_{1}\lambda_{2}}(s_{2},t;u_{1})\\ &-\biggl{(}\frac{\epsilon(q,\lambda_{\gamma})\cdot k_{2}}{q \cdot k_{2}}-\frac{\epsilon(q,\lambda_{\gamma})\cdot(p_{1}+p_{2})}{q\cdot(p_ {1}+p_{2})}\biggr{)}\beta(u_{2})M^{+}_{\lambda_{1}\lambda_{2}}(s_{1},t;u_{2}) \biggr{]}\end{split} \tag{5}\]
where \(\beta(u_{i})=\exp\bigl{(}(u_{i}-u_{i}^{\rm min})/\Lambda_{\pi}^{2}\bigr{)}\) are hadronic form factors introduced to suppress the Born term pion propagator at large \(u_{i}=(q-k_{i})^{2}\), \(\Lambda_{\pi}=0.9~{}{\rm GeV}\) and \(M^{\pm}_{\lambda_{1}\lambda_{2}}\) is the scattering amplitude for the process \(p+\pi^{*\pm}\to p+\pi^{\pm}\). Note that although this is a binary amplitude, the virtuality of the initial pion, \(u_{i}\), implies that this amplitude is dependent on three kinematic invariants. In the limit that \(u_{i}\to m_{\pi}^{2}\), these amplitudes may be related to elastic \(\pi^{\pm}p\) scattering, for which there is a wealth of experimental and theoretical information. The form of the \(\pi N\) amplitudes is taken from Ref. [7]. At low energy, this amplitude is described in terms of the SAID parameterization [8, 9, 10] of \(\pi N\) partial wave amplitudes, which are extracted from elastic \(\pi N\) scattering data, while at high energies, the model interpolates to a Regge description of the data.
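A useful consistency check on Eq. (5) (the superscript GI presumably stands for gauge invariant): replacing \(\epsilon(q,\lambda_{\gamma})\to q\) in either bracket gives
\[\frac{q\cdot k_{i}}{q\cdot k_{i}}-\frac{q\cdot(p_{1}+p_{2})}{q\cdot(p_{1}+p_{2})}=0\,,\qquad i=1,2,\]
so each bracket, and hence the full Deck amplitude, vanishes under this substitution, as required by current conservation.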
## 4 Results and Discussion
Angular moments of two-pion photoproduction were computed using the model defined above and compared to the CLAS experimental data [11]. In order to improve the
agreement with experimental data, the relative weights of the partial waves were refit to the experimental data. The resulting fit to experimental data is shown in Fig. 2. A good fit to the angular moments is obtained, except possibly for \(m_{12}>1.2\ \mathrm{GeV}\). Note however, that in this region, there are several resonances which are expected to contribute, but which are not currently incorporated in the model.
The resulting absolute values of the partial-waves are shown in Fig. 3. The notation for the partial waves is taken from Ref. [11]. In particular, \(P_{+}\) denotes the \(P\)-wave partial wave amplitude with the same helicity as the incident photon, \(P_{0}\) denotes the amplitude with one unit of helicity flip and \(P_{-}\) denotes the amplitude with two units of helicity flip. It was observed in Ref. [12] that at small \(-t\), the process \(\gamma p\to\rho p\) proceeded primarily via the helicity conserving amplitude. This observation was called \(s\)-channel helicity
conservation (SCHC). In the notation given here, this implies that \(P_{+}\) dominates over \(P_{0}\) and \(P_{-}\). Regge theory implies that for small \(-t/s\), the top and bottom vertices factorize, and the resulting \(-t\)-dependence of the top vertex is \(\sim(-t)^{|\lambda_{\gamma}-\lambda_{\rho}|}\). Thus in the forward limit, only the helicity amplitude corresponding to \(\lambda_{\gamma}=\lambda_{\rho}\) is expected to contribute. However, compared with Ref. [12], the \(-t\) measured at CLAS is larger. Due to the \(-t\) dependence of the \(P_{0}\) and \(P_{-}\) amplitudes, one expects that at larger \(-t\) SCHC should be increasingly violated, as these amplitudes are \(-t\)-enhanced. This behaviour is observed in the \(t\)-dependence in Fig. 3. Note that at \(-t=0.45\) GeV\({}^{2}\), the hierarchy of partial waves is as one would predict from SCHC, with \(|P_{+}|>|P_{0}|>|P_{-}|\). However, at larger \(-t\), this hierarchy is no longer obeyed.
Figure 3: Preliminary prediction of the absolute value of \(S\)- and \(P\)-wave amplitudes. Note that at small \(-t\), the hierarchy of \(P\)-wave amplitudes is in accordance with SCHC, but at larger \(-t\), the spin-flip amplitude \(|P_{0}|\) appears to dominate. This behaviour can be understood from Regge theory.
Figure 2: Comparing the preliminary predictions for angular moments of two-pion photoproduction with experimental data from Ref. [11]. After fitting a number of free parameters, a good description of the experimental data is obtained.
## 5 Conclusions
This proceedings presented a theoretical model of two-pion photoproduction which incorporated the prominent \(\rho(770)\) resonance and leading background contribution from the Deck mechanism. A good agreement between model and data was obtained after the relative weights of partial waves were fit to experimental data, suggesting that the model correctly identifies the production mechanisms and partial waves which are instrumental for the description of the angular moments. This model can be applied to make predictions at photon energies relevant for the CLAS12 and GlueX experiments.
###### Acknowledgements.
This work contributes to the aims of the USDOE ExoHad Topical Collaboration, contract DE-SC0023598. In addition, the support of project PID2020-118758GB-I00, financed by the Spanish MCIN/ AEI/10.13039/501100011033/ is acknowledged. The support of project No. 2018/29/B/ST2/ 02576 (National Science Center), partly financed by a Polish research project is also acknowledged. The authors thank their JPAC colleagues for useful conversations.
|
2308.14338 | Fair Few-shot Learning with Auxiliary Sets | Recently, there has been a growing interest in developing machine learning
(ML) models that can promote fairness, i.e., eliminating biased predictions
towards certain populations (e.g., individuals from a specific demographic
group). Most existing works learn such models based on well-designed fairness
constraints in optimization. Nevertheless, in many practical ML tasks, only
very few labeled data samples can be collected, which can lead to inferior
fairness performance. This is because existing fairness constraints are
designed to restrict the prediction disparity among different sensitive groups,
but with few samples, it becomes difficult to accurately measure the disparity,
thus rendering ineffective fairness optimization. In this paper, we define the
fairness-aware learning task with limited training samples as the \emph{fair
few-shot learning} problem. To deal with this problem, we devise a novel
framework that accumulates fairness-aware knowledge across different
meta-training tasks and then generalizes the learned knowledge to meta-test
tasks. To compensate for insufficient training samples, we propose an essential
strategy to select and leverage an auxiliary set for each meta-test task. These
auxiliary sets contain several labeled training samples that can enhance the
model performance regarding fairness in meta-test tasks, thereby allowing for
the transfer of learned useful fairness-oriented knowledge to meta-test tasks.
Furthermore, we conduct extensive experiments on three real-world datasets to
validate the superiority of our framework against the state-of-the-art
baselines. | Song Wang, Jing Ma, Lu Cheng, Jundong Li | 2023-08-28T06:31:37Z | http://arxiv.org/abs/2308.14338v1 | # Fair Few-shot Learning with Auxiliary Sets
###### Abstract
Recently, there has been a growing interest in developing machine learning (ML) models that can promote fairness, i.e., eliminating biased predictions towards certain populations (e.g., individuals from a specific demographic group). Most existing works learn such models based on well-designed fairness constraints in optimization. Nevertheless, in many practical ML tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance. This is because existing fairness constraints are designed to restrict the prediction disparity among different sensitive groups, but with few samples, it becomes difficult to accurately measure the disparity, thus rendering ineffective fairness optimization. In this paper, we define the fairness-aware learning task with limited training samples as the _fair few-shot learning_ problem. To deal with this problem, we devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks. To compensate for insufficient training samples, we propose an essential strategy to select and leverage an _auxiliary set_ for each meta-test task. These auxiliary sets contain several labeled training samples that can enhance the model performance regarding fairness in meta-test tasks, thereby allowing for the transfer of learned useful fairness-oriented knowledge to meta-test tasks. Furthermore, we conduct extensive experiments on three real-world datasets to validate the superiority of our framework against the state-of-the-art baselines.
## 1 Introduction
Machine learning (ML) tools have been increasingly utilized in high-stake tasks such as credit assessments [26] and crime predictions [22]. Despite their success, the data-driven nature of existing machine learning methods makes them easily inherit the biases buried in the training data and thus results in predictions with discrimination against some sensitive groups [33]. Here, sensitive groups are typically defined by certain sensitive attributes such as race and gender [35, 3, 4, 19, 45]. For example, a criminal risk assessment model can unfavorably assign a higher crime probability for specific racial groups [33]. In fact, such undesirable biases commonly exist in various real-world applications such as toxicity detection [6], recommendation systems [21], loan approval predictions [29], and recruitment [11].
In response, a surge of research efforts in both academia and industry has been made toward developing fair machine learning models [9, 7]. These models have demonstrated their ability to effectively mitigate unwanted bias in various applications [1, 47]. Many fair ML methods [8, 10] incorporate fairness constraints to penalize predictions with statistical discrepancies among different sensitive groups. These methods often rely on sufficient training data from each sensitive group (e.g., collecting data from a specific region with an imbalanced population composition [49]). However, in many scenarios, only very few data samples can be collected, especially for those from the minority group. This could render existing fair ML methods ineffective or even further amplify discrimination against the minority group. To enhance the applicability of fair ML in practice [49], this work aims to address the crucial and urgent problem of _fair few-shot learning_: promoting fairness in few-shot learning tasks with a limited number of samples.
One feasible solution to address fair few-shot learning is to incorporate fairness techniques into few-shot learning methods. Particularly, we first learn from _meta-training tasks_ with adequate samples [32, 18, 39], and then leverage the learned knowledge and fine-tune the model on other disjoint _meta-test tasks_ with few samples based on fairness constraints. We define such a step of fine-tuning as _fairness adaptation_. However, there still remain two primary challenges for our problem. First, the insufficiency of samples in meta-test tasks can result in unsatisfactory fairness adaptation performance. Although the model can adapt to meta-test tasks with limited samples via fine-tuning for classification, these samples may not be sufficient to ensure fairness performance. Many fairness constraints are designed to restrict the prediction disparity among different sensitive groups. However, in fair few-shot learning, the lack of samples in each sensitive group inevitably increases the difficulties in measuring the prediction disparity. Moreover, in meta-test sets, the sensitive attributes of data samples can often be extremely imbalanced (e.g., a majority of individuals belonging to the same race, while other sensitive groups have very few, or even no samples). In these cases, the conventional fairness constraints are often ineffective, or completely inapplicable. Second, the generalization gap between meta-training tasks and meta-test tasks hinders the efficacy of fairness adaptation. Similar to other few-shot learning studies, the key point of fair few-shot learning is to leverage the learned knowledge from meta-training tasks to facilitate the model performance on meta-test tasks with few samples. In our problem, it is essential to leverage the learned knowledge for fairness adaptation. However, models that manage to reduce disparities on meta-training tasks do not necessarily achieve the same performance in fairness on meta-test tasks [10], due to the fact that fairness constraints are data-dependent and thus lack generalizability [8]. As a result, it remains challenging to extract and leverage the learned knowledge that is beneficial for fairness adaptation.
To tackle these challenges, we devise a novel framework for fair few-shot learning, named FEAST (**F**air **f**E**w-shot learning with **A**uxiliary **S**e**Ts). Specifically, we propose to leverage an _auxiliary set_ for each meta-test task to promote fair adaptation with few samples while addressing the issues caused by insufficient samples. The auxiliary set is comprised of several samples from meta-training data and is specific to each meta-test task. By incorporating these auxiliary sets via a novel _fairness-aware mutual information loss_, the model can be effectively adapted to a meta-task with few samples while preserving the fairness knowledge learned during training. Furthermore, to effectively leverage the learned knowledge from meta-training tasks for fairness adaptation, our proposed framework selects the auxiliary sets based on the _fairness adaptation direction_. This ensures that the selected auxiliary sets share similar fairness adaptation directions and thus can provide beneficial learned knowledge. We summarize our main contributions as follows:
* **Problem.** We study the crucial problem of fair few-shot learning. We introduce the importance of this problem, analyze the challenges, and point out the limitations of existing studies. To the best of our knowledge, this is the first work that addresses these unique challenges in fair few-shot learning.
* **Method.** We develop a novel fair few-shot learning framework that (1) can leverage auxiliary sets to aid fairness adaptation with limited samples, and (2) can select auxiliary sets with similar optimization directions to promote fairness adaptation.
* **Experiments.** We conduct extensive experiments on three real-world fairness datasets under the few-shot scenario and demonstrate the superiority of our proposed framework in terms of fairness compared with a couple of state-of-the-art baselines.
## 2 Problem Statement
In this section, we provide a formal definition for the problem of fair few-shot learning that we study in this paper. Denote \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) as the sample space, where \(\mathcal{X}\subset\mathbb{R}^{n}\) is the input space with \(n\) different features and \(\mathcal{Y}=\{1,2,\dots,N\}\) is the label space with \(N\) discrete classes. We consider inputs \(X\in\mathcal{X}\), labels \(Y\in\mathcal{Y}\), and sensitive attribute \(A\in\{0,1\}\). In the few-shot setting, the dataset \(\mathcal{D}\) is comprised of two different smaller datasets: meta-training data \(\mathcal{D}_{tr}\) and meta-test data \(\mathcal{D}_{te}\). Moreover, \(\mathcal{D}=\mathcal{D}_{tr}\cup\mathcal{D}_{te}\) and \(\mathcal{D}_{tr}\cap\mathcal{D}_{te}=\emptyset\), i.e., \(|\mathcal{D}_{tr}|+|\mathcal{D}_{te}|=|\mathcal{D}|\). In general, few-shot settings assume that there exist sufficient samples in \(\mathcal{D}_{tr}\), while samples in \(\mathcal{D}_{te}\) are generally scarce [18, 34].
The proposed framework is built upon the prevalent paradigm of episodic meta-learning [34, 32], which has demonstrated superior performance in the field of few-shot learning [18, 39]. The process of episodic meta-learning consists of meta-training on \(\mathcal{D}_{tr}\) and meta-test on \(\mathcal{D}_{te}\). During meta-training, the model is trained on a series of _meta-training tasks_\(\{\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{T}\}\), where each meta-training task contains support set \(\mathcal{S}\) as the reference and a query set \(\mathcal{Q}\) to be classified. \(T\) is the number of meta-training tasks. More specifically, \(\mathcal{S}=\{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{N\times K},y_{N\times K})\}\) contains \(N\) classes and \(K\) samples for each of these \(N\) classes (i.e., the \(N\)-way \(K\)-shot setting). Meanwhile, the query set \(\mathcal{Q}=\{(x_{1}^{q},y_{1}^{q}),(x_{2}^{q},y_{2}^{q}),\)\(\dots,(x_{|\mathcal{Q}|}^{q},y_{|\mathcal{Q}|}^{q})\}\) consists of \(|\mathcal{Q}|\) different samples to be classified from these \(N\) classes. Subsequently, our goal is to develop a machine learning model that can accurately and fairly predict labels for samples in \(\mathcal{D}_{te}\) with limited labeled samples after training on \(\mathcal{D}_{tr}\). Formally, the studied problem of fair few-shot learning can be formulated as follows.
**Definition 1**.: _Fair few-shot learning: Given meta-training data \(\mathcal{D}_{tr}\) and a meta-test task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\) sampled from meta-test data \(\mathcal{D}_{te}\), our goal is to develop a fair learning model such that after meta-training on samples in \(\mathcal{D}_{tr}\), the model can accurately and fairly predict labels for samples in the query set \(\mathcal{Q}\) when the only available reference is the limited samples in the support set \(\mathcal{S}\)._
Note that the support sets and the query sets are sampled from meta-training data \(\mathcal{D}_{tr}\). That is, for any sample \((x_{i},y_{i})\) in a meta-training task, \((x_{i},y_{i})\sim P_{tr}(X,Y)\), where \(P_{tr}(X,Y)\) is the meta-training task distribution from meta-training data \(\mathcal{D}_{tr}\). We then evaluate the model on a series of meta-test tasks, which share the same structure as meta-training tasks, except that the samples are now from meta-test data \(\mathcal{D}_{te}\). In other words, for any sample \((x_{i},y_{i})\) during meta-test, we have \((x_{i},y_{i})\sim P_{te}(X,Y)\), where \(P_{te}(X,Y)\) is the meta-test task distribution from meta-test data \(\mathcal{D}_{te}\). Under the meta-learning framework [18, 51, 20], the model needs to be first fine-tuned for several steps (i.e., fairness adaptation) using the support set, and then performs fair classification for samples in the query set.
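As an illustration of this episodic setup, the following sketch shows one way an \(N\)-way \(K\)-shot task with sensitive attributes could be sampled from a data subset. The array names (`X`, `y`, `a`) and the split sizes are illustrative and are not part of the released code.

```python
import numpy as np

def sample_task(X, y, a, n_way=2, k_shot=5, n_query=15, rng=None):
    """Draw one N-way K-shot task (support + query) from a data subset.

    X : (num_samples, num_features) feature matrix
    y : (num_samples,) integer class labels
    a : (num_samples,) binary sensitive attributes
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    support, query = [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(y == c)[0])
        support.append((idx[:k_shot], new_label))
        query.append((idx[k_shot:k_shot + n_query], new_label))

    def pack(parts):
        # Concatenate the per-class index lists and relabel classes 0..N-1.
        idx = np.concatenate([p[0] for p in parts])
        lbl = np.concatenate([np.full(len(p[0]), p[1]) for p in parts])
        return {"x": X[idx], "y": lbl, "a": a[idx]}

    return pack(support), pack(query)
```

The support/query pair returned here plays the role of \(\{\mathcal{S},\mathcal{Q}\}\) in one episode.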
## 3 Proposed Framework
We formulate the problem of _fair few-shot learning_ in the \(N\)-way \(K\)-shot meta-learning framework. The meta-training process typically involves a series of randomly sampled meta-training tasks, each of which contains \(K\) samples for each of the \(N\) classes as the support set, along with several query samples to be classified. Under the few-shot scenario, it is challenging to conduct fairness adaptation on the support set due to the insufficiency of samples and the generalization gap between meta-training tasks and meta-test tasks. Therefore, as illustrated in Fig. 1, we propose the use of auxiliary sets that can enhance fairness adaptation for each meta-test task. In this section, we first introduce the process of conducting fairness adaptation with auxiliary sets and then discuss the strategy to select auxiliary sets.
### Fairness Adaptation with Auxiliary Sets
To alleviate the issue of ineffective fairness adaptation to meta-test tasks caused by insufficient samples, we propose to leverage the samples in meta-training tasks for fairness adaptation. Specifically, considering a target meta-test task \(\mathcal{T}=(\mathcal{S},\mathcal{Q})\), our goal is to utilize an auxiliary set \(\mathcal{A}\) obtained from meta-training data that can compensate for inadequate samples in \(\mathcal{S}\). However, due to the distribution difference between meta-training tasks and meta-test tasks, it remains non-trivial to leverage the auxiliary set \(\mathcal{A}\), which follows a different distribution from \(\mathcal{S}\). Since the data distribution in \(\mathcal{A}\) differs from that in \(\mathcal{S}\), directly conducting fairness adaptation on \(\mathcal{A}\) can be ineffective for fairness in \(\mathcal{S}\). Therefore, to enhance fairness adaptation with the help of the auxiliary set \(\mathcal{A}\), we propose to maximize the mutual information (MI) between the support set \(\mathcal{S}\) and the auxiliary set \(\mathcal{A}\). In consequence, the fairness adaptation on \(\mathcal{S}\) will benefit from \(\mathcal{A}\).
Generally, the support set \(\mathcal{S}\) in \(\mathcal{T}\) can be expressed as \(\mathcal{S}=\{(x_{1},y_{1}),(x_{2},y_{2}),\dots,(x_{N\times K},y_{N\times K})\}\), which contains \(K\) samples for each of \(N\) classes. \(x_{i}\) is an input sample, and \(y_{i}\) is the corresponding label. We use \(a_{i}\in\{0,1\}\) to denote its sensitive attribute. In particular, we propose to construct an auxiliary set that shares the same structure as the support set. In this way, the auxiliary set \(\mathcal{A}\) can be represented as \(\mathcal{A}=\{(x_{1}^{\star},y_{1}^{\star}),(x_{2}^{\star},y_{2}^{\star}), \dots,(x_{|\mathcal{A}|}^{\star},y_{|\mathcal{A}|}^{\star})\}\). Here \(|\mathcal{A}|\), i.e., the size of the auxiliary set, is set as a controllable hyperparameter. Moreover, based on the classification model \(f(\cdot)\), we can obtain the sample embedding \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), and the classification probabilities \(\mathbf{p}_{i}=f(x_{i})\in\mathbb{R}^{N}\) for \(x_{i}\). Here \(d\) denotes the embedding
dimension of samples, and \(N\) is the number of classes in \(\mathcal{T}\). Particularly, we maximize the fairness-aware MI between \(\mathcal{S}\) and \(\mathcal{A}\) by
\[\max_{\theta}I(\mathcal{S};\mathcal{A})=\max_{\theta}\sum_{i=1}^{|\mathcal{S}| }\sum_{j=1}^{|\mathcal{A}|}p(x_{i},x_{j}^{*};\theta)\log\frac{p(x_{i}|x_{j}^{* };\theta)}{p(x_{i};\theta)}, \tag{1}\]
where \(\theta\) denotes the parameters of classification model \(f(\cdot)\). Since the MI term \(I(\mathcal{S};\mathcal{A})\) is difficult to obtain and also intractable, it is infeasible to directly maximize it [27]. Therefore, we first re-formulate the MI term to make it computationally tractable based on the property of conditional probabilities:
\[\begin{split} I(\mathcal{S};\mathcal{A})&=\sum_{i =1}^{|\mathcal{S}|}\sum_{j=1}^{|\mathcal{A}|}p(x_{i}|x_{j}^{*};\theta)p(x_{j}^ {*};\theta)\log\frac{p(x_{i}|x_{j}^{*};\theta)}{p(x_{i};\theta)}\\ &=\sum_{i=1}^{|\mathcal{S}|}\sum_{j=1}^{|\mathcal{A}|}p(x_{j}^{* }|x_{i};\theta)p(x_{i};\theta)\log\frac{p(x_{i}|x_{j}^{*};\theta)}{p(x_{i}; \theta)}.\end{split} \tag{2}\]
Since the support set \(\mathcal{S}\) is randomly sampled, we can assume that the prior probability \(p(x_{i};\theta)\) follows a uniform distribution and set it as a constant: \(p(x_{i};\theta)=1/|\mathcal{S}|\), which thus can be ignored in optimization. Therefore, it remains to estimate \(p(x_{i}|x_{j}^{*};\theta)\) and \(p(x_{j}^{*}|x_{i};\theta)\) to obtain the value of \(I(\mathcal{S};\mathcal{A})\).
#### 3.1.1 Estimation of \(p(x_{i}|x_{j}^{*};\theta)\)
We first denote \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\) as the sets of samples with sensitive attributes of \(0\) and \(1\), respectively1. In other words, \(\mathcal{S}=\mathcal{S}_{0}\cup\mathcal{S}_{1}\) and \(\mathcal{S}_{0}\cap\mathcal{S}_{1}=\emptyset\). Similarly, we define sets \(\mathcal{A}_{0}\) and \(\mathcal{A}_{1}\) for the auxiliary set \(\mathcal{A}\). Then we propose to estimate \(p(x_{i}|x_{j}^{*};\theta)\) as follows:
Footnote 1: For the sake of simplicity, we focus on tasks with only binary sensitive attributes in this paper. Nevertheless, our work can be easily generalized to tasks with multiple types of sensitive attributes.
\[p(x_{i}|x_{j}^{*};\theta)=\begin{cases}\dfrac{\mathbf{p}_{i}(y_{j}^{*})}{\sum_{x_{k}\in\mathcal{S}_{a_{i}}}\mathbf{p}_{k}(y_{j}^{*})}&\text{if }a_{i}=a_{j}^{*},\\ 0&\text{otherwise}.\end{cases} \tag{3}\]
Here \(\mathbf{p}_{i}(y_{j}^{*})\in\mathbb{R}\) denotes the classification probability of \(x_{i}\) regarding \(y_{j}^{*}\), which is the label of \(x_{j}^{*}\). Intuitively, the probability measures the alignment of the classification between the support sample \(x_{i}\) and the auxiliary sample \(x_{j}^{*}\), which (1) shares the same sensitive attribute with \(x_{i}\) and (2) is also similar to \(x_{i}\) regarding the classification output. In other words, maximizing \(p(x_{i}|x_{j}^{*};\theta)\) can increase the fairness adaptation consistency between sample \(x_{i}\) and auxiliary samples that are specifically beneficial for the fairness adaptation with \(x_{i}\), thus promoting the fairness adaptation performance.
#### 3.1.2 Estimation of \(p(x_{j}^{*}|x_{i};\theta)\)
The term \(p(x_{j}^{*}|x_{i};\theta)\) in Eq. (2) is conditioned on \(x_{i}\) and denotes the probability of \(x_{j}^{*}\) inferred by \(x_{i}\). Moreover, since the value of \(p(x_{i}|x_{j}^{*};\theta)\) becomes zero when the sensitive attributes of \(x_{i}\) and \(x_{j}^{*}\) are different, we only need to estimate \(p(x_{j}^{*}|x_{i};\theta)\) when \(x_{i}\) and \(x_{j}^{*}\) share the same sensitive attributes, i.e., \(a_{i}=a_{j}^{*}\). Therefore, since \(x_{i}\) and \(x_{j}^{*}\) maintain the same sensitive attributes, we can estimate the probability \(p(x_{j}^{*}|x_{i};\theta)\) based on the squared Euclidean distance between their embeddings without explicitly considering their fairness-aware correlation. In particular, we further normalize the probability with a softmax function to formulate term \(p(x_{j}^{*}|x_{i};\theta)\) as follows:
\[p(x_{j}^{*}|x_{i};\theta)=\frac{\exp\left(-\|\mathbf{x}_{i}-\mathbf{x}_{j}^{*} \|_{2}^{2}\right)}{\sum_{x_{k}^{*}\in\mathcal{A}_{a_{j}^{*}}}\exp\left(-\| \mathbf{x}_{i}-\mathbf{x}_{k}^{*}\|_{2}^{2}\right)}\;. \tag{4}\]
Furthermore, to ensure the consistency of sample representations in meta-training and meta-test data, we apply the \(\ell_{2}\) normalization on both \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}^{*}\), which results in \(\|\mathbf{x}_{i}-\mathbf{x}_{j}^{*}\|_{2}^{2}=2-2\mathbf{x}_{i}^{\top}\cdot \mathbf{x}_{j}^{*}\). In this manner, the logarithmic term \(\log p(x_{j}^{*}|x_{i};\theta)\) becomes:
\[\begin{split}\log\left(p(x_{j}^{*}|x_{i};\theta)\right)& =\log\left(\frac{\exp\left(-2+2\mathbf{x}_{i}^{\top}\cdot\mathbf{x}_{j}^{*} \right)}{\sum_{x_{k}^{*}\in\mathcal{A}_{a_{j}^{*}}}\exp\left(-2+2\mathbf{x}_{i}^{ \top}\cdot\mathbf{x}_{k}^{*}\right)}\right)\\ &=2\mathbf{x}_{i}^{\top}\cdot\mathbf{x}_{j}^{*}-\log\sum_{x_{k}^ {*}\in\mathcal{A}_{a_{j}^{*}}}\exp\left(2\mathbf{x}_{i}^{\top}\cdot\mathbf{x}_{k }^{*}\right).\end{split} \tag{5}\]
Finally, the MI loss \(\mathcal{L}_{MI}\) can be derived as follows:
\[\begin{split}\mathcal{L}_{MI}=&\frac{1}{|\mathcal{A}|} \sum_{j=1}^{|\mathcal{A}|}\sum_{x_{i}\in\mathcal{S}_{a_{j}^{*}}}-\frac{ \mathbf{p}_{i}(y_{j}^{*})}{\sum_{x_{k}\in\mathcal{S}_{a_{i}}}\mathbf{p}_{k}(y _{j}^{*})}\left(2\mathbf{x}_{i}^{\top}\cdot\mathbf{x}_{j}^{*}\right.\\ &\left.-\log\sum_{x_{k}^{*}\in\mathcal{A}_{a_{j}^{*}}}\exp\left(2 \mathbf{x}_{i}^{\top}\cdot\mathbf{x}_{k}^{*}\right)\right).\end{split} \tag{6}\]
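The loss of Eq. (6) can be assembled directly from the quantities defined above. The sketch below is a plain, loop-based PyTorch transcription of Eqs. (3), (5) and (6); it assumes \(\ell_{2}\)-normalised embeddings and softmax class probabilities computed elsewhere and favours clarity over speed. It is an illustration, not the authors' implementation.

```python
import torch

def fairness_aware_mi_loss(z_s, p_s, a_s, z_a, y_a, a_a):
    """Fairness-aware MI loss between a support set S and an auxiliary set A.

    z_s : (n_s, d) l2-normalised support embeddings
    p_s : (n_s, N) support classification probabilities
    a_s : (n_s,)   binary sensitive attributes of support samples
    z_a : (n_a, d) l2-normalised auxiliary embeddings
    y_a : (n_a,)   auxiliary labels (indices into the N task classes)
    a_a : (n_a,)   binary sensitive attributes of auxiliary samples
    """
    n_a = z_a.size(0)
    loss = z_s.new_zeros(())
    for j in range(n_a):
        # Only samples sharing the sensitive attribute of x_j* contribute (Eq. 3).
        s_mask = (a_s == a_a[j])
        a_mask = (a_a == a_a[j])
        if s_mask.sum() == 0:
            continue
        # Eq. (3): alignment weights p(x_i | x_j*), normalised within S_{a_j*}.
        scores = p_s[s_mask][:, y_a[j]]
        w = scores / scores.sum().clamp_min(1e-12)
        # Eq. (5): log p(x_j* | x_i) from embedding similarity within A_{a_j*}.
        sim_j = 2.0 * (z_s[s_mask] @ z_a[j])              # (n_s',)
        sim_all = 2.0 * (z_s[s_mask] @ z_a[a_mask].t())   # (n_s', n_a')
        log_term = sim_j - torch.logsumexp(sim_all, dim=1)
        # Eq. (6): accumulate the weighted negative log-terms.
        loss = loss - (w * log_term).sum()
    return loss / n_a
```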
Figure 1: The overall framework of FEAST. Here different shapes denote different sensitive attributes, and colors represent sample classes. Given a meta-task, the generator will output the estimated fairness adaptation direction, which is used to select an auxiliary set with the most similar direction from the candidate set. Then we conduct fairness adaptation with the auxiliary set on the current meta-task and perform predictions. The resulting fairness adaptation will be used to update the generator. Note that during training, the meta-task will be incorporated into the candidate auxiliary sets after the optimization of one episode.
The overall fairness adaptation loss can be represented as the combination of fairness regularization terms on the support set \(\mathcal{S}\) and the auxiliary set \(\mathcal{A}\) along with the MI loss between \(\mathcal{S}\) and \(\mathcal{A}\):
\[\mathcal{L}_{FA}=\mathcal{L}_{R}(\mathcal{S})+\gamma\left(\mathcal{L}_{R}( \mathcal{A})+\mathcal{L}_{MI}\right), \tag{7}\]
where \(\gamma\) is an adjustable weight hyper-parameter to control the importance of the auxiliary set. Specifically, \(\mathcal{L}_{R}\) denotes the regularized optimization loss:
\[\mathcal{L}_{R}(\mathcal{S})=\frac{1}{|\mathcal{S}|}\sum_{(x,y)\in\mathcal{S}}\ell(f(x),y)+\lambda R(\mathcal{S}), \tag{8}\]
where \(\ell\) is the classification loss, and \(R(\mathcal{S})\) denotes the fairness regularization term.
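A minimal composition of Eqs. (7) and (8) might look as follows. The demographic-parity gap used for \(R(\cdot)\) is only one possible instantiation of the (unspecified) fairness regularization term, the binary positive class in `demographic_parity_reg` is an assumption, and `mi_loss_fn` stands in for the fairness-aware MI term of Eq. (6).

```python
import torch
import torch.nn.functional as F

def demographic_parity_reg(logits, a):
    """Illustrative fairness regulariser R(.): gap between the mean
    positive-class scores of the two sensitive groups (binary-class sketch)."""
    scores = torch.softmax(logits, dim=1)[:, 1]
    if (a == 0).any() and (a == 1).any():
        return (scores[a == 0].mean() - scores[a == 1].mean()).abs()
    return logits.new_zeros(())   # one group absent: no measurable gap

def regularised_loss(model, x, y, a, lam=1.0):
    """Eq. (8): classification loss plus fairness regulariser on one set."""
    logits = model(x)
    return F.cross_entropy(logits, y) + lam * demographic_parity_reg(logits, a)

def fairness_adaptation_loss(model, support, aux, mi_loss_fn, gamma=0.5, lam=1.0):
    """Eq. (7): L_FA = L_R(S) + gamma * (L_R(A) + L_MI(S, A));
    support and aux are (x, y, a) tuples."""
    l_s = regularised_loss(model, *support, lam=lam)
    l_a = regularised_loss(model, *aux, lam=lam)
    return l_s + gamma * (l_a + mi_loss_fn(model, support, aux))
```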
### Auxiliary Sets Selection
The second problem of the generalization gap between meta-training and meta-test in fair few-shot learning can also pose a significant challenge in fairness adaptation. To address this issue, we propose to select the auxiliary set based on its similarity in fairness adaptation directions to the target meta-test task. In this way, incorporating the auxiliary set with a similar fairness adaptation direction can potentially leverage beneficial learned knowledge in meta-training to enhance fairness adaptation in the target meta-task. However, it is difficult to identify the fairness adaptation direction of the auxiliary set that aligns with the target meta-task. It is possible that the auxiliary set holds a different or even opposite fairness adaptation direction from the target meta-task. As such, the incorporation of such an auxiliary set can even harm the fairness adaptation performance. Therefore, to select the auxiliary set with a similar fairness adaptation direction to the target meta-test task, we introduce a _dynamic dictionary_, \(\mathcal{A}_{can}\), which stores all candidate auxiliary sets for selection, with the keys being their corresponding fairness adaptation directions. This allows us to efficiently identify and select an auxiliary set with a similar adaptation direction for the target meta-test task, thereby improving the fairness adaptation performance in the presence of the generalization gap.
Notably, this dictionary will be dynamically updated by adding a new auxiliary set after each meta-training step and meanwhile removing the oldest auxiliary set, of which the fairness adaptation direction is the most outdated. In this manner, the dictionary also acts like a queue, which means that the size can be flexible and independent to fit various scenarios. Specifically, after each step on a meta-training task \(\mathcal{T}=\{\mathcal{S},\mathcal{Q}\}\), we will enqueue the support set \(\mathcal{S}\) as a candidate auxiliary set2 into \(\mathcal{A}_{can}\) and remove the oldest auxiliary set. The key of enqueued \(\mathcal{S}\), which is the fairness adaptation direction of \(\mathcal{S}\), is set as the gradient of \(\mathcal{L}_{R}(\mathcal{S})\), i.e., \(\nabla_{\theta}\mathcal{L}_{R}(\mathcal{S})\), where \(\theta\) denotes the model parameters of \(f(\cdot)\).
Footnote 2: Note that the auxiliary set size is controllable via randomly removing samples in \(\mathcal{S}\) or incorporating new samples before enqueuing.
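The dynamic dictionary can be sketched as a fixed-capacity queue of (key, auxiliary set) pairs, where the key is the flattened gradient described above. The class below is a simplified illustration; the capacity value is arbitrary.

```python
import numpy as np
from collections import deque

class AuxiliaryDictionary:
    """Fixed-capacity queue of candidate auxiliary sets keyed by their
    fairness-adaptation direction (the flattened gradient of L_R on that set)."""

    def __init__(self, capacity=256):
        # deque(maxlen=...) drops the oldest entry automatically on overflow,
        # mirroring the removal of the most outdated adaptation direction.
        self.entries = deque(maxlen=capacity)

    def enqueue(self, direction, aux_set):
        """Store one candidate auxiliary set with its key (a flat vector)."""
        self.entries.append((np.asarray(direction, dtype=float), aux_set))

    def select(self, query_direction):
        """Return the stored set whose key is closest (Euclidean) to the query."""
        keys = np.stack([k for k, _ in self.entries])
        dists = np.linalg.norm(keys - np.asarray(query_direction, dtype=float), axis=1)
        return self.entries[int(np.argmin(dists))][1]
```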
**Identifying the true fairness adaptation direction.** With the help of the dynamic dictionary as a queue during meta-training, it may still remain difficult to obtain the fairness adaptation direction of the target meta-test task \(\mathcal{T}\). This is because the fairness adaptation direction of \(\mathcal{S}\) cannot faithfully reveal the true direction due to potentially imbalanced sensitive attributes. Therefore, to identify the true fairness adaptation direction without directly conducting fairness adaptation on the support set \(\mathcal{S}\), we propose the use of a generator \(g(\cdot)\), parameterized by \(\phi\), to estimate the fairness adaptation results for each meta-test task. In particular, the generator \(g(\cdot)\) takes the support set \(\mathcal{S}\) as input and outputs an estimation of the gradient of \(\mathcal{L}_{R}(\mathcal{S})\), i.e., \(\nabla_{\theta}\mathcal{L}_{R}(\mathcal{S})\). To optimize the generator \(g(\cdot)\), we introduce the Mean Squared Error (MSE) loss as the objective function as follows:
\[\mathcal{L}_{E}=\left\|g(\mathcal{S})-\nabla_{\theta}\mathcal{L}_{R}(\mathcal{ S})\right\|_{2}^{2}, \tag{9}\]
where \(g(\mathcal{S})\in\mathbb{R}^{d_{\theta}}\) is the generator output, and \(d_{\theta}\) is the size of the classification model parameter \(\theta\). It is worth mentioning that the input of the generator \(g(\cdot)\) is an entire support set \(\mathcal{S}\), which means that the generator should be able to capture the contextual information within the support set. For this reason, we propose to leverage the transformer encoder architecture [38] followed by a Multiple Layer Perceptron (MLP) as the implementation of the generator. In specific, the output of the generator can be expressed as:
\[g(\mathcal{S})=\text{MLP}\left(\text{Mean}\left(\text{Transformer}\left( \mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{|\mathcal{S}|}\right)\right) \right). \tag{10}\]
In this manner, the generator can estimate the corresponding fairness adaptation direction from \(\mathcal{S}\), where the result can be used for selecting an auxiliary set.
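Assuming a PyTorch implementation, the generator of Eq. (10) could be sketched as follows. The layer sizes, head count and depth are illustrative; `d_theta` denotes the number of classifier parameters being regressed, and the support embeddings are expected as a single "sequence".

```python
import torch
import torch.nn as nn

class DirectionGenerator(nn.Module):
    """Sketch of the generator g(.) in Eq. (10): a Transformer encoder over
    the support-sample embeddings, mean pooling, and an MLP mapping to a
    vector of size d_theta (the number of classifier parameters)."""

    def __init__(self, d_embed, d_theta, n_heads=4, n_layers=2, d_hidden=256):
        super().__init__()
        # d_embed must be divisible by n_heads for multi-head attention.
        layer = nn.TransformerEncoderLayer(d_model=d_embed, nhead=n_heads,
                                           dim_feedforward=d_hidden,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mlp = nn.Sequential(nn.Linear(d_embed, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_theta))

    def forward(self, support_embeddings):
        # support_embeddings: (1, |S|, d_embed) -- the support set as one sequence.
        h = self.encoder(support_embeddings)    # contextualise the support samples
        return self.mlp(h.mean(dim=1))          # (1, d_theta) estimated direction
```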
After the meta-training process on a series of meta-training tasks \(\{\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{T}\}\), we can obtain a dictionary of candidate auxiliary sets in \(\mathcal{A}_{can}=\{\mathcal{A}_{1},\mathcal{A}_{2},\ldots,\mathcal{A}_{|\mathcal{A}_{can}|}\}\) along with their fairness adaptation directions as keys. Here we denote their corresponding keys as \(\mathbf{k}(\mathcal{A})\in\mathbb{R}^{d_{\theta}}\). Then given a new meta-test task \(\mathcal{T}_{\text{test}}=\{\mathcal{S}_{\text{test}},\mathcal{Q}_{\text{test}}\}\), the corresponding auxiliary set \(\mathcal{A}^{\star}\) is selected via the following criterion:
\[\mathcal{A}^{\star}=\operatorname*{argmin}_{\mathcal{A}\in\mathcal{A}_{can}} \text{dist}\left(g(\mathcal{S}_{\text{test}}),\mathbf{k}(\mathcal{A})\right), \tag{11}\]
where \(\text{dist}(\cdot,\cdot)\) is a function to measure the distance between two vectors. In the experimentation, we implement it as the Euclidean distance. We can then efficiently select an auxiliary set from a significantly large dictionary based on the keys. It is noteworthy that to keep consistency between meta-training and meta-test, we will also select an auxiliary set for each meta-training task for optimization.
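Putting the key computation and the selection rule of Eq. (11) together, a sketch could read as below. Here `loss_r` is assumed to be a callable returning \(\mathcal{L}_{R}\) of Eq. (8) for a set given as `(x, y, a)`, and `candidates` is any iterable of (key, auxiliary set) pairs, such as the entries of the queue sketched earlier.

```python
import torch

def adaptation_direction(model, loss_r, support):
    """Dictionary key: the flattened gradient of the regularised loss L_R
    computed on a support (or auxiliary) set, cf. Section 3.2."""
    loss = loss_r(model, *support)                     # support = (x, y, a)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads]).detach()

def select_auxiliary(generator, support_embeddings, candidates):
    """Eq. (11): pick the candidate whose stored key is closest (Euclidean)
    to the generator's estimate of the adaptation direction of the new task."""
    with torch.no_grad():
        est = generator(support_embeddings.unsqueeze(0)).reshape(-1)
    best_set, best_dist = None, float("inf")
    for key, aux_set in candidates:
        d = torch.norm(torch.as_tensor(key, dtype=est.dtype) - est).item()
        if d < best_dist:
            best_set, best_dist = aux_set, d
    return best_set
```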
### Meta-optimization
Our framework is optimized under the episodic meta-learning paradigm [18]. Specifically, let \(\theta\) denote the total parameters of the
classification model \(f(\cdot)\). In order to perform fairness adaptation, we first initialize the model parameters as \(\theta_{0}\leftarrow\theta\). After that, given a specific meta-task \(\mathcal{T}=\left\{\mathcal{S},\mathcal{Q}\right\}\), we conduct \(\tau\) steps of gradient descent based on the fairness adaptation loss \(\mathcal{L}_{FA}\) calculated on the support set \(\mathcal{S}\). Thus, the fairness adaptation process in \(\mathcal{T}\) can be formulated as follows:
\[\theta_{t}\leftarrow\theta_{t-1}-\alpha\nabla_{\theta_{t-1}}\mathcal{L}_{FA} \left(\mathcal{S};\theta_{t-1}\right), \tag{12}\]
where \(t\in\left\{1,2,\dots,\tau\right\}\) and \(\mathcal{L}_{FA}(\mathcal{S};\theta_{t-1})\) denotes the loss calculated based on the support set \(\mathcal{S}\) with the parameters \(\theta_{t-1}\). \(\tau\) is the number of fine-tuning steps applied, and \(\alpha\) is the learning rate in each fine-tuning step. After conducting \(\tau\) steps of fine-tuning, we will meta-optimize the classification model \(f(\cdot)\) with the loss calculated on the query set \(\mathcal{Q}\). Specifically, we meta-optimize the model parameters \(\theta\) with the following update function:
\[\theta\leftarrow\theta-\beta_{1}\nabla_{\theta}\mathcal{L}_{FA}(\mathcal{Q};\theta_{\tau}), \tag{13}\]
where \(\beta_{1}\) is the meta-learning rate for the classification model \(f(\cdot)\).
For the optimization of the generator \(g(\cdot)\), parameterized by \(\phi\), the update can be formulated as follows:
\[\phi\leftarrow\phi-\beta_{2}\nabla_{\phi}\mathcal{L}_{E}(\mathcal{S};\theta_{\tau}), \tag{14}\]
where \(\mathcal{L}_{E}\) is the MSE loss introduced in Eq. (9), and \(\beta_{2}\) is the meta-learning rate for the generator \(g(\cdot)\). In this way, the model parameters \(\phi\) of \(g(\cdot)\) will be updated based on loss \(\mathcal{L}_{E}\) after the fairness adaptation of the classification model \(f(\cdot)\). The detailed training process of our framework is demonstrated in Algorithm 1.
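One meta-training episode (Eqs. 12-14) could be organised as in the rough sketch below. It is a first-order simplification: the inner adaptation runs on a copy of the classifier and the query-loss gradient at \(\theta_{\tau}\) is copied back onto \(\theta\) instead of being back-propagated through the inner loop; feeding the raw support features to the generator is likewise a simplification of Eq. (10).

```python
import copy
import torch
import torch.nn.functional as F

def meta_train_step(model, generator, meta_opt, gen_opt, task, aux_set,
                    loss_fa, loss_r, inner_steps=5, alpha=1e-2):
    """One meta-training episode (first-order sketch).

    loss_fa(model, (x, y, a), aux_set) -> scalar L_FA   (Eq. 7)
    loss_r (model, (x, y, a))          -> scalar L_R    (Eq. 8)
    """
    support, query = task

    # Inner loop: tau gradient steps of fairness adaptation on S (Eq. 12).
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=alpha)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        loss_fa(adapted, support, aux_set).backward()
        inner_opt.step()

    # Outer update of the classifier with the query loss at theta_tau (Eq. 13).
    meta_opt.zero_grad()
    grads = torch.autograd.grad(loss_fa(adapted, query, aux_set),
                                tuple(adapted.parameters()))
    for p, g in zip(model.parameters(), grads):
        p.grad = g.detach()                  # first-order approximation
    meta_opt.step()

    # Generator update: regress the true adaptation direction of S (Eqs. 9, 14).
    gen_opt.zero_grad()
    target = torch.cat([g.reshape(-1) for g in
                        torch.autograd.grad(loss_r(adapted, support),
                                            tuple(adapted.parameters()))]).detach()
    est = generator(support[0].unsqueeze(0)).reshape(-1)   # g(.) fed support features
    F.mse_loss(est, target).backward()
    gen_opt.step()
```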
## 4 Experimental Evaluations
### Datasets
In this subsection, we introduce the datasets used in our experiments. To evaluate the performance of FEAST on fair few-shot learning, we conduct experiments on three prevalent real-world datasets: Adult [15], Crime [22], and Bank [26]. The detailed dataset statistics are provided in Table 1.
* The Adult dataset contains information from 48,842 individuals from the 1994 US Census, where each instance is represented by 14 features and a binary label. Here the label indicates whether the income of a person is higher than 50K dollars. Following the data split setting in PDFM [49], we split the dataset into 34 subsets based on the country information of instances. We consider gender as the sensitive attribute.
* The Crime dataset includes information on 2,216 communities from different states in the U.S., where each instance consists of 98 features. Following [31], the binary label of each instance is obtained by converting the continuous crime rate based on whether the crime rate of a community is in the top 50% within the state. The sensitive attribute is whether African-Americans are among the highest or second highest populations in each community. We further split this dataset into 46 subsets by considering each state as a subset.
* The Bank dataset consists of 41,188 individual instances in total. Specifically, each instance maintains 20 features along with a binary label that indicates whether the individual has subscribed to a term deposit. Here, we consider marital status as the binary sensitive attribute. Moreover, the dataset is split into 50 subsets based on the specific date records of instances.
### Experimental Settings
To achieve a fair comparison of FEAST with competitive baselines, we conduct experiments with the state-of-the-art fair few-shot learning methods and other few-shot learning methods with fairness constraints. The details are provided below.
* MAML [18]: This method utilizes a classic meta-learning framework to deal with the fair few-shot learning problem without explicitly applying fairness constraints.
* M-MAML [18]: This method uses the same framework as MAML while modifying datasets by removing the sensitive attribute of each instance to enhance fairness during optimization.
* Pretrain [49]: This method learns a single model on all meta-training data without episodic training. Moreover, a fairness constraint is added to the training objective.
* F-MAML [50]: This method applies a fairness constraint in each episode and tunes a Lagrangian multiplier shared across different episodes for fair few-shot learning tasks.
* FM-dp and FM-eop (Fair-MAML) [31]: These two baselines provide a regularization term for each episode based on demographic parity (DP) and equal opportunity (EOP), respectively.
* PDFM [49]: This method leverages a primal-dual subgradient approach to ensure that the learned model can be fast adapted to a new episode in fair few-shot learning.
Particularly, we use the average classification accuracy (ACC) over \(T_{\text{test}}\) meta-test tasks to evaluate the prediction performance. For fairness performance, we propose to utilize demographic parity (DP) and equalized odds (EO), which are commonly used in existing works [8, 48, 16, 44]. Since we consider the binary classification datasets, the output \(f(x)\in\mathbb{R}\) denotes the prediction score of a specific sample \(x\). In this manner, the metrics can be calculated over \(T_{\text{test}}\) meta-test tasks sampled from the meta-test task distribution \(P_{te}\) as follows:
\[\Delta\text{DP}=\mathbb{E}_{\mathcal{T}\sim P_{te}}\left|\frac{1}{\left| \mathcal{Q}_{0}\right|}\sum_{x\in\mathcal{Q}_{0}}f(x)-\frac{1}{\left|\mathcal{Q }_{1}\right|}\sum_{x\in\mathcal{Q}_{1}}f(x)\right|, \tag{15}\]
\[\Delta\text{EO}=\mathbb{E}_{\mathcal{T}\sim P_{te}}\sum_{y\in\{0,1\}}\left| \frac{1}{\left|\mathcal{Q}_{0}^{y}\right|}\sum_{x\in\mathcal{Q}_{0}^{y}}f(x)- \frac{1}{\left|\mathcal{Q}_{1}^{y}\right|}\sum_{x\in\mathcal{Q}_{1}^{y}}f(x) \right|, \tag{16}\]
where \(\mathcal{Q}_{0}\) and \(\mathcal{Q}_{1}\) denote the query samples with a sensitive attribute of 0 and 1, respectively. Similarly, \(\mathcal{Q}_{0}^{y}\) (or \(\mathcal{Q}_{1}^{y}\)) denotes the query samples in \(\mathcal{Q}_{0}\) (or \(\mathcal{Q}_{1}\)) with label \(y\). \(P_{te}\) is the meta-test task distribution of the meta-test data \(\mathcal{D}_{te}\). Our code is released at [https://github.com/SongW-SW/FEAST](https://github.com/SongW-SW/FEAST).
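For a single meta-test task, the two fairness gaps of Eqs. (15)-(16) reduce to group-mean differences of the prediction scores over the query set. The helper below is a sketch assuming binary labels and binary sensitive attributes; averaging its outputs over the sampled meta-test tasks gives the reported \(\Delta\)DP and \(\Delta\)EO.

```python
import numpy as np

def dp_eo_gaps(scores, labels, attrs):
    """Fairness gaps for one meta-test task's query set (Eqs. 15-16).

    scores : (n,) model prediction scores f(x)
    labels : (n,) binary ground-truth labels
    attrs  : (n,) binary sensitive attributes
    """
    scores, labels, attrs = map(np.asarray, (scores, labels, attrs))
    if (attrs == 0).any() and (attrs == 1).any():
        dp = abs(scores[attrs == 0].mean() - scores[attrs == 1].mean())
    else:
        dp = float("nan")                    # gap undefined if one group is absent
    eo = 0.0
    for y in (0, 1):
        g0 = scores[(attrs == 0) & (labels == y)]
        g1 = scores[(attrs == 1) & (labels == y)]
        if len(g0) and len(g1):              # skip empty groups in tiny query sets
            eo += abs(g0.mean() - g1.mean())
    return dp, eo
```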
\begin{table}
\begin{tabular}{c|c c c}
\hline
**Dataset** & Adult & Crime & Bank \\
\hline
Sensitive Attribute & Gender & Race & Marital Status \\
Label & Income & Crime Rate & Deposit \\
\# Instances & 48,482 & 2,216 & 41,188 \\
\# Features & 12 & 98 & 17 \\
\# Subsets & 34 & 46 & 50 \\
\# Training Subsets & 22 & 30 & 40 \\
\# Validation Subsets & 6 & 8 & 5 \\
\# Test Subsets & 6 & 8 & 5 \\
\hline
\end{tabular}
\end{table}
Table 1: Statistics of three real-world datasets.
### Performance Comparison
Table 2 presents the fairness and prediction performance comparison of FEAST and all other baselines on fair few-shot learning. Specifically, we report the results of \(\Delta\)DP, \(\Delta\)EO, and classification accuracy over 500 meta-test tasks for 10 repetitions. We conduct experiments under both 5-shot and 10-shot settings (i.e., \(K=5\) and \(K=10\)). From Table 2, we can make the following observations:
* Our framework FEAST consistently outperforms other baselines in terms of fairness in all datasets under both 5-shot and 10-shot settings. These results provide compelling evidence for the effectiveness of our framework FEAST in fair few-shot learning.
* The performance improvement of FEAST over other baselines is more significant on the Crime dataset. This is due to that in this dataset, each subset consists of fewer samples. Consequently, the learned fairness-aware meta-knowledge will be more difficult to be transferred in baselines. Nevertheless, our proposed fairness adaptation strategy based on mutual information can effectively deal with this scenario.
* The accuracy of FEAST is comparable with other baselines, demonstrating that FEAST can substantially reduce biases without sacrificing its classification capability. This is because our framework FEAST can select the auxiliary set with similar fairness adaptation directions and thus will not harm model performance regarding accuracy.
* FEAST is more robust to the changes of the number of support samples per class, i.e., when the number decreases from 10 to 5, FEAST has the least performance drop in comparison to other baselines. We believe this is primarily because, with fewer support samples, the problem of insufficient samples becomes more significant. Nevertheless, FEAST can effectively address this issue with the incorporation of auxiliary sets into fairness adaptation.
### Impact of Each Component in FEAST
In this subsection, we conduct an ablation study on three datasets under the 5-shot setting to evaluate the effectiveness of different components in our framework by comparing FEAST with three degenerate versions: (1) FEAST without fairness adaptation based on MI, referred to as FEASTF. In this variant, the fairness adaptation process is simplified such that only fairness constraints are applied. (2) FEAST without auxiliary set selection, i.e., the auxiliary set is randomly sampled. We refer to this variant as FEASTA. (3) FEAST without both fairness adaptation and auxiliary set selection, referred to as FEASTVA. The results, as presented in Fig. 2, show that FEAST outperforms all other variants, validating the importance of both fairness adaptation and auxiliary set selection components in fair few-shot learning. Of particular interest is that the removal of the MI fairness adaptation has a more significant adverse impact on the Crime dataset, which contains significantly fewer meta-training samples. This result highlights the crucial role of this component in addressing the issue of insufficient training samples. In addition, when the two components are both removed, the fairness performance drops greatly. Such results indicate that the mutual impact brought by these two components is also critical for our proposed framework FEAST.
### Effect of Loss Weight \(\gamma\)
Given the significance of the auxiliary sets in the fairness adaptation, in this subsection, we further examine in-depth how the auxiliary sets will influence the performance of FEAST. Specifically, we vary the value of \(\gamma\), which controls the importance of the auxiliary set loss during fairness adaptation. A higher value of \(\gamma\) implies a larger importance weight on the auxiliary set and a smaller importance weight on the target task. Due to the limitation of space, we
\begin{table}
\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc|ccc}
\hline
Dataset & \multicolumn{6}{c|}{Adult} & \multicolumn{6}{c|}{Crime} & \multicolumn{6}{c}{Bank} \\
\hline
Setting & \multicolumn{3}{c|}{5-shot} & \multicolumn{3}{c|}{10-shot} & \multicolumn{3}{c|}{5-shot} & \multicolumn{3}{c|}{10-shot} & \multicolumn{3}{c|}{5-shot} & \multicolumn{3}{c}{10-shot} \\
\hline
Metric & \(\Delta\)DP & \(\Delta\)EO & ACC & \(\Delta\)DP & \(\Delta\)EO & ACC & \(\Delta\)DP & \(\Delta\)EO & ACC & \(\Delta\)DP & \(\Delta\)EO & ACC & \(\Delta\)DP & \(\Delta\)EO & ACC & \(\Delta\)DP & \(\Delta\)EO & ACC \\
\hline
MAML & 0.473 & 0.706 & 0.801 & 0.409 & 0.584 & **0.886** & 0.558 & 0.952 & 0.718 & 0.443 & 0.832 & 0.792 & 0.214 & 0.573 & **0.603** & 0.185 & 0.496 & 0.619 \\
M-MAML & 0.447 & 0.689 & **0.826** & 0.381 & 0.555 & 0.857 & 0.359 & 0.732 & 0.711 & 0.300 & 0.569 & 0.757 & 0.214 & 0.544 & 0.600 & 0.175 & 0.459 & 0.619 \\
F-MAML & 0.339 & 0.432 & 0.825 & 0.310 & 0.353 & 0.840 & 0.503 & 0.871 & 0.719 & 0.463 & 0.707 & 0.762 & 0.207 & 0.585 & 0.575 & 0.181 & 0.528 & **0.650** \\
FM-dp & 0.313 & 0.502 & 0.814 & 0.241 & 0.438 & 0.844 & 0.385 & 0.722 & 0.741 & 0.329 & 0.604 & 0.771 & 0.238 & 0.614 & 0.586 & 0.187 & 0.553 & 0.604 \\
FM-eop & 0.430 & 0.703 & 0.812 & 0.370 & 0.601 & 0.846 & 0.352 & 0.706 & 0.739 & 0.311 & 0.591 & 0.804 & 0.289 & 0.683 & 0.581 & 0.245 & 0.600 & 0.640 \\
Pretrain & 0.365 & 0.513 & 0.806 & 0.310 & 0.450 & 0.885 & 0.390 & 0.692 & **0.746** & 0.354 & 0.582 & 0.776 & 0.248 & 0.659 & 0.594 & 0.208 & 0.539 & 0.642 \\
PDFM & 0.261 & 0.461 & 0.815 & 0.276 & 0.401 & 0.869 & 0.402 & 0.784 & 0.722 & 0.325 & 0.669 & **0.816** & 0.210 & 0.585 & 0.589 & 0.180 & 0.493 & 0.645 \\
FEAST & **0.258** & **0.355** & 0.820 & **0.235** & **0.256** & 0.861 & **0.203** & **0.309** & 0.739 & **0.164** & **0.217** & 0.797 & **0.190** & **0.524** & 0.583 & **0.154** & **0.414** & 0.641 \\
\hline
\end{tabular}
\end{table}
Table 2: Results w.r.t. fairness and prediction performance of FEAST and baselines under different settings for all three datasets.
Figure 3: Results of FEAST on Adult (left) and Crime (right) with different values of \(\gamma\).
Figure 2: Ablation study on our framework FEAST on three datasets under the 5-shot setting.
evaluate the model's performance on two datasets, Adult and Crime, using various values of \(\gamma\) under the 5-shot setting (the results on the Bank dataset are similar). The results, as shown in Fig. 3, indicate that a value of \(\gamma\) around 0.5 generally yields better fairness performance for both datasets. This is mainly because a small \(\gamma\) can be insufficient to leverage the fairness-aware meta-knowledge in auxiliary sets, while an excessively large value of \(\gamma\) can result in the loss of crucial fairness information in the target meta-task. Moreover, the effect of different \(\gamma\) values is more significant on the Adult dataset. The reason is that this dataset contains a larger number of samples in the meta-training data. As a result, the fairness-aware knowledge accumulated in the auxiliary sets is richer, and the benefits propagated from the auxiliary sets are larger.
### Effect of Auxiliary Set Size
In this section, we conduct experiments to evaluate the impacts brought by varying the size of the auxiliary set \(\mathcal{A}\). Intuitively, the auxiliary set size \(|\mathcal{A}|\) should be at least comparable with the support set, since an excessively small auxiliary set can be potentially insufficient for fairness adaptation. Specifically, we conduct experiments on dataset Adult under both 5-shot and 10-shot settings to evaluate the effect of auxiliary set size \(|\mathcal{A}|\). From the results presented in Fig. 4, we can make the following observations: (1) The fairness results are less satisfactory with a smaller value of \(|\mathcal{A}|\), indicating that the capacity of \(\mathcal{A}\) can be important in FEAST. With a small auxiliary set \(\mathcal{A}\), the fairness adaptation effect will be reduced due to insufficient knowledge in \(\mathcal{A}\). (2) When further increasing the size of \(\mathcal{A}\), the fairness performance does not accordingly increase. This demonstrates that knowledge in a larger auxiliary set may not be helpful for fairness adaptation. (3) When the number of shots increases from 5 to 10, the best value of \(|\mathcal{A}|\) also increases, implying that with a larger support set, the auxiliary set should also be expanded to provide more knowledge for fairness adaptation. In consequence, the fairness performance can be further improved.
## 5 Related Work
### Few-shot Learning
Few-shot learning aims to obtain satisfactory classification performance with only a few labeled samples as references [37, 36]. The typical approach is to accumulate transferable knowledge from meta-training tasks, which contain abundant labeled samples. Then such knowledge is generalized to meta-test tasks with limited labeled samples. Particularly, existing few-shot learning methods can be divided into two main categories: (1) _Metric-based_ methods propose to learn a metric function that matches samples in the query set with the support samples to conduct classification [23, 34, 42, 41]. For example, Prototypical Networks [32] learn a prototype (i.e., the average embedding of samples in the same class) for each class and then classify query samples according to the Euclidean distances between query samples and each prototype. Matching Networks [39] output predictions for query samples via the similarity between query samples and each support sample. (2) _Optimization-based_ methods aim to first fine-tune model parameters based on gradients calculated on support samples and then conduct meta-optimization on each meta-task [25, 28, 43, 40]. As a classic example, MAML [18] learns a shared model parameter initialization for various meta-tasks with the proposed meta-optimization strategy. LSTM-based meta-learner [28] proposes an adjustable step size to update model parameters.
### Fairness-aware Machine Learning
Various fairness-aware algorithms have been proposed to mitigate the unwanted bias in machine learning models. Generally, there are two categories of statistical fairness notions: _individual fairness_ and _group fairness_. In particular, individual fairness requires that the model results for similar individuals should also be similar [16, 44, 13, 12]. Here, the similarity between individuals can be measured via specific metrics (e.g., Euclidean distance) learned during training or from prior knowledge. On the other hand, group fairness refers to the statistical parity between subgroups (typically defined by sensitive attributes, e.g., gender and race) via specific algorithms [46, 24, 19, 14]. Common fairness learning tasks include fair classification [45, 17], regression [2, 5], and recommendations [30]. Although these methods have demonstrated satisfactory performance in mitigating unfairness, it is noteworthy that existing works mainly focus on the settings where sufficient labeled samples are provided. As a result, it is challenging for these methods to accommodate few-shot scenarios with limited labeled samples.
More recently, several methods are proposed to deal with the fair few-shot learning problem [31, 50]. For example, PDFM [49] utilizes a primal-dual subgradient approach to ensure fast adaptation to a novel meta-task. In [48], the authors propose to address fairness in supervised few-shot meta-learning models that are sensitive to discrimination in historical data by detecting and controlling the dependency effect of sensitive attributes on target prediction. Moreover, F-MAML [50] provides a fairness constraint for each episode and tunes a Lagrangian multiplier shared across different episodes based on a meta-learning mechanism. However, these methods cannot effectively solve the problem of insufficient samples and the generalization gap.
## 6 Conclusion
In this paper, we propose a novel problem of fair few-shot learning, which focuses on accurately and fairly predicting labels for samples in unseen data while using limited labeled samples as references. To tackle the challenges posed by insufficient samples and the generalization gap between meta-training and meta-test, we propose an innovative framework FEAST that utilizes learned fairness-aware meta-knowledge by incorporating auxiliary sets. In particular, our framework maximizes the mutual information between meta-tasks and the auxiliary sets to enhance fairness adaptation. Moreover, we select auxiliary sets based on the estimated fairness adaptation direction of meta-tasks to improve the fairness performance. We conduct extensive experiments on three real-world datasets, and the results validate the superiority of FEAST over the state-of-the-art baselines. For future work, it is important to consider expanding the candidate auxiliary set with external knowledge, since samples in the dataset can be insufficient. In this case, incorporating external information for fairness adaptation can be crucial.
Figure 4: Results of FEAST on Adult under 5-shot (left) and 10-shot (right) settings with different values of \(|\mathcal{A}|\).
Acknowledgements
The work in this paper is supported by the National Science Foundation under grants (IIS-2006844, IIS-2144209, IIS-2223769, CNS2154962, and BCS-2228534), the Commonwealth Cyber Initiative awards (VV-1Q23-007 and HV-2Q23-003), the JP Morgan Chase Faculty Research Award, the Cisco Faculty Research Award, the Jefferson Lab subcontract 23-D0163, and the UVA 4-VA collaborative research grant.
|
2303.01321 | Statistical analysis of the total magnetic flux decay rate in solar
active regions | We used line-of-sight magnetograms acquired by the Helioseismic and Magnetic
Imager on board the Solar Dynamics Observatory to derive the decay rate of
total unsigned magnetic flux for 910 ephemeral and active regions (ARs)
observed between 2010 and 2017. We found that: i) most of the ARs obey the
power law dependence between the peak magnetic flux and the magnetic flux decay
rate, $DR$, so that $DR\sim \Phi^{0.70}$; ii) larger ARs lose smaller fraction
of their magnetic flux per unit of time than the smaller ARs; iii) there exists
a cluster of ARs exhibiting significantly lower decay rate than it would follow
from the power law and all of them are unipolar sunspots with total fluxes in
the narrow range of $(2 - 8) \times 10^{21}$ Mx; iv) a comparison with our
previous results shows that the emergence rate is always higher than the decay
rate. The emergence rate follows a power law with a shallower slope than the
slope of the decay-rate power law. The results allowed us to suggest that not
only the maximum total magnetic flux determines the character of the decaying
regime of the AR, some of the ARs end up as a slowly decaying unipolar sunspot;
there should be certain physical mechanisms to stabilize such a sunspot. | Andrei A. Plotnikov, Valentina I. Abramenko, Alexander S. Kutsenko | 2023-03-02T14:51:42Z | http://arxiv.org/abs/2303.01321v2 | # Statistical analysis of the total magnetic flux decay rate in solar active regions
###### Abstract
We used line-of-sight magnetograms acquired by the _Helioseismic and Magnetic Imager_ on board the _Solar Dynamics Observatory_ to derive the decay rate of total unsigned magnetic flux for 910 ephemeral and active regions (ARs) observed between 2010 and 2017. We found that: i) most of the ARs obey the power law dependence between the peak magnetic flux and the magnetic flux decay rate, \(DR\), so that \(DR\sim\Phi^{0.70}\); ii) larger ARs lose smaller fraction of their magnetic flux per unit of time than the smaller ARs; iii) there exists a cluster of ARs exhibiting significantly lower decay rate than it would follow from the power law and all of them are unipolar sunspots with total fluxes in the narrow range of \((2-8)\times 10^{21}\) Mx; iv) a comparison with our previous results shows that the emergence rate is always higher than the decay rate. The emergence rate follows a power law with a shallower slope than the slope of the decay-rate power law. The results allowed us to suggest that not only the maximum total magnetic flux determines the character of the decaying regime of the AR, some of the ARs end up as a slowly decaying unipolar sunspot; there should be certain physical mechanisms to stabilize such a sunspot.
keywords: Sun:magnetic fields - Sun:photosphere
## 1 Introduction
One of the most outstanding manifestations of the solar activity is the appearance of active regions (ARs) on the solar surface, places with much stronger magnetic flux than that in the surrounding areas. In white-light images ARs appear as groups of sunspots with low intensity. These features are not static: their shape varies during their lifetime.
A comprehensive overview of AR evolution was given in van Driel-Gesztelyi & Green (2015). The life-cycle of an AR can be divided into consecutive phases of growth (the emergence phase) and disappearance (the decay phase). The emergence phase was explored in a variety of publications (e.g. Ugarte-Urra et al., 2015; Norton et al., 2017; Kutsenko et al., 2019, to mention a few). At the same time, the decay phase has received much less attention. As argued by Norton et al. (2017), the reason for this is the long duration of the decay, lasting for weeks: in most cases one cannot observe the entire process since the sunspot group rotates off the limb. Usually, individual ARs exist from several days up to several weeks. According to the Gnevyshev-Waldmeier rule (Gnevyshev, 1938; Waldmeier, 1955), the lifetime of a sunspot group is proportional to the maximal area of the group
\[T=bA_{0}, \tag{1}\]
where \(T\) is the time interval between the sunspot group's appearance and disappearance, \(A_{0}\) is the maximal area reached by the sunspot group, and \(b\) is a constant.
Through the decades, various models based on different ideas were suggested to explain the decay of the magnetic flux in ARs, for example, the self-similar sunspot model (Gokhale & Zwaan, 1972), the turbulent diffusion model (Meyer et al., 1974), and the turbulent erosion model (Petrovay & Moreno-Insertis, 1997). The second and the third models are based on the hypothesis that turbulence in the solar plasma plays a major role in the dissipation of the magnetic flux tube forming an AR. The difference between the diffusion model and the erosion model is in the treatment of the processes inside the tube. In the turbulent diffusion model, the key role is attributed to the turbulent diffusion of magnetic elements inside the tube, whereas the turbulent erosion model suggests that diffusion is mainly frozen inside the sunspot due to the strong magnetic field, and the outer turbulence gnaws at the border of the sunspot (Petrovay & Moreno-Insertis, 1997).
The erosion model results in the parabolic area versus time dependence:
\[A=A_{0}-2\sqrt{\pi A_{0}}w(t-t_{0})+\pi w^{2}(t-t_{0})^{2}, \tag{2}\]
where \(w\) and \(t\) stand for the spot boundary decrease rate (which is assumed to be a constant) and time, respectively. This dependence was confirmed in the statistical analysis by Petrovay & van Driel-Gesztelyi (1997) and Murakozy (2021). Svanda et al. (2021) suggested additional proofs for the erosion mechanism based on the morphological changes through the evolution of an AR. This makes the decay phase to be completely different from the emergence phase, which is thought to be driven by the turbulent diffusion mechanism. The turbulent erosion mechanism also implies sharp sunspot boundaries, which agrees with white-light observations of sunspots.
Sunspots are the observable manifestation of strong magnetic fields on the solar surface. Modern solar instruments allow us to
use high-resolution data on the magnetic field. Therefore, the spatial distribution of the magnetic field can be used instead of the sunspot area in order to track the evolutionary changes in an AR. In this case, the total unsigned magnetic flux over the AR
\[\Phi=\int_{S}|(\vec{B}\cdot\vec{dS})|. \tag{3}\]
can be used instead of the total sunspot group area. In Eq. 3\(\vec{B}\) is the magnetic field vector and \(S\) is the area occupied by magnetic structures of the AR.
In the framework of the turbulent erosion model (Petrovay & Moreno-Insertis, 1997), the current sheets formed around the sunspot can keep the magnetic field strength inside the sunspot nearly unchanged. The Gaussian-like distribution of the magnetic field inside the sunspot adopted in this theory implies that the magnetic flux decreases more slowly than the sunspot area during the decay phase. This is in accordance with the results by Li et al. (2021), who showed that the mean vertical magnetic field strength increases during the decay phase. Observations show that weak magnetic structures still exist after sunspots disappear. This means that an AR's lifetime will always be longer than that of the corresponding sunspot group.
Moving magnetic features (MMFs; Harvey & Harvey, 1973) are often mentioned as a phenomenon accompanying the decay of ARs. MMFs are described as small magnetic elements (usually less than 2 arcsec in size) detaching from a large magnetic concentration in an AR, moving outward, and dissipating within several hours. Kubo et al. (2008) showed that the magnetic flux transported by MMFs can exceed the magnetic flux losses of the sunspot. Imada et al. (2020) found a slight asymmetry in the magnetic flux carried by MMFs from leading sunspots: approximately 5% more magnetic flux is transported to the equator side than to the pole side, and about 3% more magnetic flux is carried out to the East side than to the West side.
As mentioned above, the analysis of the entire evolution of a large AR is obstructed by the Sun's rotation: the interval of the AR's presence on the visible disc is shorter than its typical lifetime. Ugarte-Urra et al. (2015) overcame this obstacle by combining the UV data acquired by the _Solar TErrestrial RElations Observatory_ (STEREO; Kaiser et al., 2008) and by the _Atmospheric Imaging Assembly_ (AIA; Lemen et al., 2012) on board the _Solar Dynamics Observatory_ (SDO; Pesnell et al., 2012). STEREO gives an opportunity to observe the solar surface from two different vantage points. Although the satellites have no equipment for magnetic field measurements, the UV intensity can be used as a proxy for the total unsigned magnetic flux (e.g. Schrijver, 1987). To study the long-term AR evolution, Ugarte-Urra et al. (2015) measured the UV intensity of 9 ARs during their entire lifetime. The normalised intensity versus time profiles for all ARs exhibited similarity (see fig. 1 in Ugarte-Urra et al., 2015). Consequently, it is reasonable to hypothesize that the lifetime of an AR is proportional to the peak magnetic flux of the AR, and the decay rate is constant for all ARs, regardless of their maximal flux.
Here we present a statistical analysis of the AR decay rates using a large data set of 910 ephemeral and active regions.
## 2 Data and Methods
SDO/HMI provides high-cadence (720 s) line-of-sight (LOS) full-disc 4096\(\times\)4096 pixel magnetograms with continuous coverage since 2010. The spatial resolution of the instrument is 1 arcsec with the pixel size of 0.5\(\times\)0.5 arcsec\({}^{2}\). High spatial resolution of the instrument allowed us to analyse small ephemeral regions exhibiting no signatures in white-light images.
The magnetographic data used in this work were prepared as described in Kutsenko (2021). We visually analysed full-disc SDO/HMI magnetograms and manually enclosed active and ephemeral regions by a rectangular box (Fig. 1). The box was large enough to keep the whole AR inside the boundaries during the entire interval of observations. Thus, we visually examined the selected patches as the AR evolved. If there was a significant dispersion of the flux beyond the box boundaries, we re-selected the region and increased the box size. Consequently, the dispersed network magnetic flux that appeared during active region decay was also mostly kept within the bounding box. We selected isolated active regions in the sense that no significant portions of magnetic flux of external ARs crossed the boundaries. Each region was tracked back and forth in time in the consecutive magnetograms by a cross-correlation technique. The size of the box in CCD pixels remained unchanged. In order to minimize the uncertainties due to the projection effect and noise in magnetograms, the tracking was stopped as soon as the longitude of any corner of the bounding box reached or exceeded 60 degrees, and a thresholding was applied during the magnetic flux calculations (see below).
For unipolar active regions, the following magnetic polarity was usually dispersed over vast areas "contaminated" by other ARs and the magnetic connections within the region were not obvious. Hence, for these objects we selected exclusively the leading polarity of the active region.
Ephemeral regions were selected using the same manual procedure. We searched for small magnetic dipoles emerging and decaying amidst quiet-Sun regions (without any pre-existing magnetic flux). We set no requirements on the ephemeral region lifetime or peak flux. To diminish the influence of the projection effect, we selected only ephemeral regions evolving near the disc centre.
Thus, for each active and ephemeral region the bounded patches were cropped and stored in a data cube. In total, we prepared data cubes for 323 ephemeral and 854 active regions observed between 2010 and 2017.
Using the prepared data cubes, we calculated the total unsigned magnetic flux needed to explore the decay of ARs. Equation (3) can be approximated as a sum over the magnetogram:
\[\tilde{\Phi}=\sum|B_{r}|\Delta S, \tag{4}\]
where \(B_{r}\) and \(\Delta S\) stand for the radial component of the magnetic field and the pixel area on the solar surface, respectively. The radial component of the magnetic field was evaluated from the observed LOS component via the \(\mu\)-correction. Namely, for each pixel of the patch, we calculated the angle \(\mu\) between the line-of-sight and the vector pointing from the centre of the Sun to the pixel. Both the magnetic flux density and the area of the pixel were divided by the cosine of this angle. Leka et al. (2017) argued that exactly this procedure provides the best estimation of the radial magnetic field. The summation in equation 4 was performed only over pixels with absolute magnetic flux density exceeding 30 Mx cm\({}^{-2}\). This threshold is a fivefold noise level of SDO/HMI 720-s LOS magnetograms (Liu et al., 2012).
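As an illustration of this step, the following Python sketch computes the thresholded, \(\mu\)-corrected total unsigned flux of equation 4 from an LOS magnetogram patch. The function name, the argument layout, and the simple plane-of-sky approximation used for \(\cos\mu\) are ours for illustration only and are not taken from the actual data-reduction pipeline.

```python
import numpy as np

def total_unsigned_flux(blos, x, y, r_sun_pix, pixel_area_cm2, threshold=30.0):
    """Approximate the total unsigned radial flux (equation 4) of one patch.

    blos           : 2-D LOS flux density [Mx cm^-2]
    x, y           : 2-D pixel offsets from disc centre [pixels]
    r_sun_pix      : solar radius [pixels]
    pixel_area_cm2 : pixel area at disc centre [cm^2]
    threshold      : |B_r| threshold after the mu-correction [Mx cm^-2]
    """
    # cos(mu) between the line of sight and the local radial direction
    # (simple plane-of-sky approximation, an assumption of this sketch)
    rho2 = (x**2 + y**2) / r_sun_pix**2
    cos_mu = np.sqrt(np.clip(1.0 - rho2, 1e-6, 1.0))

    # mu-correction: both the flux density and the pixel area are divided by cos(mu)
    b_r = blos / cos_mu
    area = pixel_area_cm2 / cos_mu

    mask = np.abs(b_r) > threshold                   # fivefold HMI noise level
    return np.sum(np.abs(b_r[mask]) * area[mask])    # flux in Mx
```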
For each ephemeral region and AR we derived temporal profiles of the total unsigned magnetic flux. To derive the decay rate, we need to pick out the time interval of the decay. Our requirements for the decay time interval are as follows:
1. The magnetic flux must decrease during the time interval (small oscillations of the magnetic flux can be ignored).
2. The interval must start after the AR's emergence is finished.
3. The interval must end either by a plateau in the total flux profile, or by a significant increase of the total flux, or by the end of observations.
To avoid human bias in the determination of the decay segment in the magnetic flux versus time profiles, we developed an automatic iterative routine, which is described in detail in Appendix A. Certain profiles were rejected by the algorithm. Finally, we calculated decay rates for 241 ephemeral and 669 sunspot-containing ARs.
Fig. 2 shows examples of the decay intervals determined by our algorithm. The decay rate, \(DR\), was calculated as the slope of the linear fitting within the decay time interval. The peak magnetic flux of an AR was adopted as the maximal value of the total magnetic flux along the entire temporal profile.
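A minimal sketch of this fitting step is given below, assuming the decay interval has already been identified (e.g. by the routine of Appendix A). The positive sign convention for \(DR\) and the function interface are our assumptions.

```python
import numpy as np

def decay_rate(time_hours, flux_mx, decay_interval):
    """Decay rate DR (slope of a linear fit within the decay interval)
    and relative decay rate RDR = DR / peak flux."""
    t = np.asarray(time_hours)[decay_interval]
    phi = np.asarray(flux_mx)[decay_interval]

    slope, _ = np.polyfit(t, phi, 1)      # Mx per hour; negative while the flux decreases
    dr = abs(slope)                       # report the flux loss as a positive rate
    phi_max = np.max(flux_mx)             # peak flux over the whole profile
    return dr, dr / phi_max               # DR [Mx h^-1], RDR [h^-1]
```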
Fig. 3 shows three examples of magnetic flux versus time profiles rejected by the algorithm. As one can see, the algorithm failed to find the decay interval in the case of long-lasting significant emergence and in the case of jagged profiles. The total number of rejected ARs is 185, which is 22% of all ARs in our set.
Using the continuum intensity images acquired by SDO/HMI, all ARs in the data set were sorted into classes of unipolar, bipolar, and multipolar ARs, and ephemeral regions. Every selected AR was processed independently; thus, recurrent ARs were considered as independent ones at each solar rotation.
To analyse the magnetic flux variations of opposite magnetic polarities within an AR, we calculated the total magnetic fluxes within each polarity separately:
\[\Phi_{+}=\sum B_{r}\Delta S, \tag{5}\] \[\Phi_{-}=\sum-B_{r}\Delta S.\]
The positive and negative magnetic fluxes were calculated within the decay time interval determined by our algorithm. The decay rates for opposite polarities were also calculated by fitting the time profiles by a linear approximation within the decay interval.
In order to define the preceding polarity within an AR, we calculated the center-of-gravity for each magnetic polarity:
\[CG_{x}=\sum xB_{r}\Delta S, \tag{6}\] \[CG_{y}=\sum yB_{r}\Delta S,\]
where \(x\) and \(y\) are the longitudes and latitudes of the pixels in CCD coordinates, respectively. The western polarity was assigned as the preceding one. To avoid ambiguity, the preceding and following polarities were defined only for bipolar ARs. Similar to equation 4, all the summations in equations 5 and 6 were performed over the pixels with the magnetic flux density exceeding 30 Mx cm\({}^{-2}\).
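A possible implementation of equations 5 and 6 is sketched below for a bipolar region. Note that, unlike the unnormalised sums written above, the centre-of-gravity here is normalised by the polarity flux so that the two longitudes can be compared directly; this normalisation, the variable names, and the assumption that both polarities contain above-threshold pixels are ours.

```python
import numpy as np

def polarity_fluxes_and_leader(b_r, area, lon, threshold=30.0):
    """Polarity-separated fluxes (equation 5) and the western (preceding)
    polarity of a bipolar AR from flux-weighted longitudes (equation 6)."""
    pos = b_r > threshold
    neg = b_r < -threshold

    phi_pos = np.sum(b_r[pos] * area[pos])
    phi_neg = np.sum(-b_r[neg] * area[neg])

    # flux-weighted (normalised) longitude of each polarity
    cg_lon_pos = np.sum(lon[pos] * b_r[pos] * area[pos]) / phi_pos
    cg_lon_neg = np.sum(lon[neg] * (-b_r[neg]) * area[neg]) / phi_neg

    preceding = "positive" if cg_lon_pos > cg_lon_neg else "negative"
    return phi_pos, phi_neg, preceding
```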
In 192 ARs, we were unable to identify the magnetic flux peak in the total magnetic flux versus time profile. Examples are shown in the top-right, centre-centre, and bottom-centre panels of Fig. 2. For these ARs we adopted the maximum magnetic flux observed within the observational interval as the peak value. Hence, this set of ARs will be referred to as the ARs without the observed peak. Moreover, one should keep in mind that the observed total magnetic flux peak in the rest of the ARs might be a local rather than a global maximum. In such a case, we analyse the decay rate of this newly emerged magnetic structure.
## 3 Results
The double-logarithmic plot of the decay rate (\(y\)-axis) versus the peak total magnetic flux (\(x\)-axis) for 718 ARs and ephemeral regions with the observed peak magnetic flux is shown in Fig. 4.
The data points are distributed along a linear fit, implying a power-law relationship between the parameters. The power index of the law is \(0.70\pm 0.01\). This means that, as a whole, the larger the AR, the higher the decay rate.
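The power-law fit is simply a straight-line fit in double-logarithmic space; a minimal sketch (our own illustration, not the code used for the paper) is:

```python
import numpy as np

def power_law_fit(phi_max, dr):
    """Fit DR = c * Phi_max**k as a straight line in log-log space;
    the slope k is the power-law index."""
    k, log_c = np.polyfit(np.log10(phi_max), np.log10(dr), 1)
    return k, 10.0**log_c
```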
Fig. 5 shows the same plot with the addition of the set of 192 ARs without the observed peaks. The total number of ARs in this plot is 910. The black line displays the power-law relationship calculated over the data shown in Fig. 4. A small cluster of outstanding ARs can be seen in the middle-bottom part of the plot. These ARs exhibit an abnormally slow decay rate (up to \(\approx 10\) times slower than expected from the power law). Since the total magnetic flux profiles of these ARs do not exhibit a peak, the true maximum magnetic flux value is unavailable for them. However, the true maximum magnetic flux is larger than (or at least not less than) the value shown in the plot. Consequently, in the visual representation in Fig. 5 these data points are expected to be shifted to the right along the \(x\)-axis, which would place them even farther from the power-law line and make their low decay rates deviate even more strongly from the values expected from the power law.
Our previous experience suggests that there exist outstandingly long-living unipolar ARs. We explored the magnetic morphology of the ARs in the "outstanding" cluster in Fig. 5 and found that these ARs were unipolar. All unipolar ARs are shown by red circles in Fig. 5. Note that not all unipolar ARs belong to the cluster: a part of the unipolar ARs obey the common power-law relationship.
Fig. 6 shows the relative decay rate versus the peak magnetic flux. The relative decay rate, \(RDR\), was calculated as the ratio of the decay rate to the peak magnetic flux. In other words, this value shows the fraction of the peak magnetic flux lost by an AR during a unit of time (an hour). The linear fitting is derived for the set of ARs with the observed peak only. The figure shows that most ARs satisfy the power law with a power index of \(-0.30\pm 0.01\). The negative index implies that small ARs lose their magnetic flux faster than larger ones. For example, ephemeral regions with a peak flux of \(10^{20}\) Mx tend to lose more than 10% of their magnetic flux per hour, whereas the largest ARs lose only about 1% of their flux during the same time. Fig. 6 also shows the cluster of outstanding long-living unipolar ARs. Some of them lose their magnetic flux extremely slowly: the relative decay rate drops to \(10^{-3}\) h\({}^{-1}\), which is more than an order of magnitude lower than the \(RDR\) observed for bi- and multipolar ARs.
Another interesting feature of the "outstanding" cluster is the narrow range of the peak magnetic fluxes. The magnetic fluxes are located in the \((2-8)\times 10^{21}\) Mx range, whereas the magnetic fluxes for all unipolar ARs could differ by 50 times.
Fig. 7 shows the decay rate versus the peak magnetic flux for the preceding and following polarities in 399 bipolar ARs. The fits yield power-law indices of \(0.70\pm 0.02\) and \(0.66\pm 0.02\) for the preceding and following polarities, respectively. Hence, the magnetic flux losses in the preceding and following polarities obey the same power law within the uncertainties. The very close decay rates found for the preceding and following polarities also support the correctness of our data reduction: the entire AR is enclosed within our bounding box and there is no significant magnetic flux loss across the boundaries.
We have also compared the magnetic flux change rates during emergence and decay. The flux emergence rate was measured in Kutsenko et al. (2019) for a set of 423 emerging sunspot-containing ARs by procedures similar to those applied in this work. We supplemented these data with the flux emergence rates measured for the 323 ephemeral regions from the data set compiled for this work. The results are presented in Fig. 8. One can see that the emergence rate always exceeds the decay rate and follows a power law with a shallower slope: the power index for the emergence rate is 0.48, whereas the power index for the decay rate is 0.70. In our opinion, this difference emphasizes the different physical mechanisms of magnetic flux emergence and decay.
## 4 Conclusions and Discussion
In our statistical study based on SDO/HMI data acquired between 2010 and 2017, we explored the magnetic flux decay rates for 241 ephemeral and 669 active regions of different morphology. Our inferences can be summarized as follows:
1. Most ARs obey a power-law dependence between the magnetic flux decay rate and the peak total magnetic flux: \[DR=7.18\cdot 10^{5}\Phi_{max}^{0.70},\] where the decay rate, \(DR\), is in Mx h\({}^{-1}\) and \(\Phi_{max}\) is normalized by 1.0 Mx so that the quantity under the exponent is unitless.
2. Generally, larger ARs lose a smaller fraction of their magnetic flux per unit of time as compared to smaller ones.
3. Preceding and following polarities exhibit the same power-law dependence between the magnetic flux decay rate and the peak total unsigned magnetic flux.
4. There exists a cluster of ARs exhibiting significantly lower decay rate. The cluster consists of unipolar ARs only. The peak magnetic fluxes of ARs in the cluster vary in a narrow range of \((2-8)\times 10^{21}\) Mx. Not all of the unipolar ARs belong to this cluster.
5. A comparison of the magnetic flux emergence and decay rates confirmed that the emergence rate always exceeds the decay rate and follows a power law with a shallower slope: the power index for the emergence rate is 0.48, while the power index for the decay rate is 0.70. This indicates that emergence proceeds faster than decay and that the two are intrinsically different processes.
Our results on the emergence and decay rates are quite similar to those reported by Norton et al. (2017): they found power-law dependencies with slopes of 0.35 and 0.57 for emergence and decay, respectively. It should be mentioned, however, that Norton et al. (2017) used polarity-divided flux values. Their finding supports our suggestion that emergence and decay are intrinsically different processes: emergence is mostly governed by sub-photospheric convection, whereas decay is governed by processes in the photosphere and above, where the physical conditions are different.
The discovery of a subset of extremely slowly decaying unipolar ARs implies that there exists some physical mechanism preventing regular decay in such ARs. According to Petrovay & van Driel-Gesztelyi (1997), time-area relations are likely regulated by a parabolic law (Equation 2), which means that the area decay rate is not constant and decreases with time. Extrapolating these relations to the magnetic flux, we may suggest that the long-lived unipolar ARs could be remnants of large ARs visible on the following rotation of the Sun. On the other hand, not all large ARs behave in this way. Fig. 9 shows two series of recurrent ARs observed
Figure 1: Line-of-sight SDO/HMI magnetograms of the Sun, which were acquired between 2012.05.09 and 2012.05.13. Red rectangular boxes show the selected patches of NOAA AR 11476. The size of the box was set large enough to keep most of the magnetic flux of an AR inside the boundaries during the entire observations. The size of the box was kept constant in CCD coordinates.
in continuum intensity by SDO/HMI. These ARs have similar peak magnetic fluxes and similar areas. At the same time, the ARs' lifetimes are completely different: NOAA AR 12674 lasted for at least three rotations, while NOAA AR 12241 exhibited only a small pore on the second rotation. Hence, the decay process must depend on more than just the peak magnetic flux. This can be further illustrated by the following experiment. Fig. 10 presents the unsigned magnetic flux versus time profiles for five recurrent ARs during three solar rotations. ARs with close peak magnetic fluxes are selected. All of the time profiles are centred so that the peak of the magnetic flux occurs at \(t=0\). Indeed, some hint of the parabolic flux decay can be tracked along the three Carrington rotations. Nevertheless, the magnetic fluxes of the ARs differ widely during the second rotation, and only two of them (NOAA ARs 12673 and 12674) survive by the third rotation. One of them, NOAA AR 12216, has no remnant that could be identified as a NOAA AR even at the second rotation. Therefore, not all large ARs produce abnormally long-living sunspots, and the
Figure 3: Examples of ARs with the total unsigned magnetic flux versus time profiles rejected by the algorithm. Note the individual scales in the vertical axes.
Figure 2: The total unsigned magnetic flux versus time profiles for several ARs analysed in this work (orange curves). Examples of the decay interval detection by the algorithm (see text) are shown as highlighted parts of the curves (blue). Dashed line represents the linear fitting of the curve within the decay interval. The slope of the fitting was adopted as the decay rate, \(DR\). Note the individual scales in the vertical axes.
parabolic law of the decay could not be the only explanation for the existence of such sunspots.
The foregoing study allows us to suggest that there should be a mechanism responsible for the appearance of the long-living ARs and for their stability. Factors such as the magnetic flux imbalance or the configuration of the magnetic field lines above the sunspots might be relevant to this phenomenon. In any case, the results motivate further studies.
## Acknowledgements
We are grateful to the anonymous referee whose comments helped us to improve the paper significantly. SDO is a mission for NASA's Living With a Star (LWS) programme. The SDO/HMI data were provided by the Joint Science Operation Center (JSOC). The Python programming language with the NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020) and SunPy (The SunPy Community et al., 2020) libraries was used for the numerical analysis. All plots were made using the Matplotlib (Hunter, 2007) library.
Figure 4: The magnetic flux decay rate versus the peak magnetic flux for 718 ARs with the observed total magnetic flux peak. Black line represents the linear fitting of the distribution. The power index of the fitting is \(0.70\pm 0.01\).
Figure 5: The magnetic flux decay rate versus the peak magnetic flux for 910 active and ephemeral regions. Both sets of ARs (with the observed peaks and without the observed peaks) are included. Unipolar ARs are shown by red circles while all the rest of data points are shown by grey circles. Black line represents the linear fitting of the distribution shown in Fig. 4.
## Data Availability
The HMI data that support the findings of this study are available at the JSOC ([http://jsoc.stanford.edu/](http://jsoc.stanford.edu/)) and can be accessed under an open-for-all data policy. Derived data products supporting the findings of this study are available in the article and from the corresponding author (AAP) on request.
|
2306.04487 | Vague Preference Policy Learning for Conversational Recommendation | Conversational recommendation systems (CRS) commonly assume users have clear
preferences, leading to potential over-filtering of relevant alternatives.
However, users often exhibit vague, non-binary preferences. We introduce the
Vague Preference Multi-round Conversational Recommendation (VPMCR) scenario,
employing a soft estimation mechanism to accommodate users' vague and dynamic
preferences while mitigating over-filtering. In VPMCR, we propose Vague
Preference Policy Learning (VPPL), consisting of Ambiguity-aware Soft
Estimation (ASE) and Dynamism-aware Policy Learning (DPL). ASE captures
preference vagueness by estimating scores for clicked and non-clicked options,
using a choice-based approach and time-aware preference decay. DPL leverages
ASE's preference distribution to guide the conversation and adapt to preference
changes for recommendations or attribute queries. Extensive experiments
demonstrate VPPL's effectiveness within VPMCR, outperforming existing methods
and setting a new benchmark. Our work advances CRS by accommodating users'
inherent ambiguity and relative decision-making processes, improving real-world
applicability. | Gangyi Zhang, Chongming Gao, Wenqiang Lei, Xiaojie Guo, Shijun Li, Hongshen Chen, Zhuozhi Ding, Sulong Xu, Lingfei Wu | 2023-06-07T14:57:21Z | http://arxiv.org/abs/2306.04487v4 | # Adaptive Vague Preference Policy Learning for Multi-round Conversational Recommendation
###### Abstract.
Conversational recommendation systems (CRS) effectively address information asymmetry by dynamically eliciting user preferences through multi-turn interactions. Existing CRS widely assume that users have clear preferences, i.e., users have a firm belief about the fine-grained preference for one or multiple target items. This assumption leads the agent to overly trust user feedback, treating accepts/rejects as definitive signals to filter items and reduce the candidate space, potentially causing over-filtering. However, in reality, users' preferences are often vague and volatile, with vagueness about their desires and changing decisions during interactions.
To address this issue, we introduce a novel scenario called Vague Preference Multi-round Conversational Recommendation (VPMCR), which considers users' vague and volatile preferences in CRS. VPMCR employs a soft estimation mechanism to assign a non-zero confidence score for all candidate items to be displayed, naturally avoiding the over-filtering problem. In the VPMCR setting, we introduce a solution called Adaptive Vague Preference Policy Learning (AVPPL), which consists of two main components: Ambiguity-aware Soft Estimation (ASE) and Dynamism-aware Policy Learning (DPL). ASE estimates the vagueness of users' vague feedback and captures their dynamic preferences using a choice-based preferences extraction module and a time-aware decaying strategy. DPL leverages the preference distribution estimated by ASE to guide the conversation and adapt to changes in users' preferences to make recommendations or ask for attributes.
Our extensive experiments demonstrate the effectiveness of our method in the VPMCR scenario, highlighting its potential for practical applications and improving the overall performance and applicability of CRS in real-world settings, particularly for users with vague or dynamic preferences.
Conversational Recommendation; Vague Preference; Policy Learning
of the subsequent conversation, leading to the wrong preference estimation (i.e., in Fig. 1 (a), the "black" color of "item-1" was not displayed in the third turn).
To address over-filtering in MIMCR (or MGMCR) and maintain diversity and accuracy in the CRS, we propose a new scenario called **Vague Preference Multi-round Conversational Recommendation (VPMCR)**. This scenario uses a soft estimation mechanism to account for users' vague or dynamic preferences by assigning non-zero confidence scores to all candidate items, avoiding the rigid filtering strategy of MIMCR (or MGMCR) and MCR. Fig. 1 (c) shows an example of VPMCR which, in contrast to MIMCR, captures changes in the preference distribution over the entire item space, as shown on the right side of Fig. 1 (b).
In the VPMCR scenario, several challenges need to be addressed, including estimating the vagueness of the user's vague feedback, capturing the user's dynamic preference throughout the conversation, and making conversational decisions that consider the user's vague or dynamic preferences. To tackle these challenges, we propose an enhanced solution called **Adaptive Vague Preference Policy Learning (AVPPL)**, which consists of:
1. **Ambiguity-aware Soft Estimation (ASE)**: ASE estimates the vagueness of the user's vague feedback in each turn using a choice-based preference extraction method. It captures both explicit and implicit preferences (distinguished based on whether the user explicitly clicks the choices), effectively estimating the vagueness of users' vague feedback. To capture users' dynamic preferences, ASE employs a time-aware preference decay strategy, which gives more weight to recent preferences while gradually reducing the influence of historical preferences.
2. **Dynamism-aware Policy Learning (DPL)**: DPL implements a policy learning framework, leveraging the preference distribution from ASE, to guide the conversation. It constructs a dynamic heterogeneous graph representing the conversation, with ASE's soft estimation scores as edge weights. To expedite graph modeling and policy learning, we introduce a graph sampling strategy and preference-guided action pruning.
In summary, our contributions are as follows:
* We identify the limitations of existing CRS settings and introduce the VPMCR scenario, which accounts for users' vague and volatile preferences in CRS.
* We propose the AVPPL solution for the VPMCR setting, utilizing a unified policy learning framework to make decisions that consider users' current vague preferences and account for their fading historical preferences.
* Our extensive experiments on four real-world datasets demonstrate the effectiveness of AVPPL in the VPMCR scenario, highlighting its potential for practical applications.
## 2. Related Work
We briefly introduce the related works in conversational recommendation, reinforcement learning, and graph learning.
### Conversational recommendation system
Conversational recommendation systems (CRSs) are a novel solution to recommendation that leverages natural language to effectively elicit dynamic user preferences aligned with users' real needs through multiple rounds of real-time interaction. CRS is considered a cutting-edge discipline that incorporates dialogue systems, recommendation systems, and interactive systems (Kang et al., 2017). According to their focus on different functions and settings, existing CRS methods can be roughly divided into two
Figure 1. A realistic user simulation example
types: dialogue-based recommendation (Han et al., 2017; Chen et al., 2017; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018) and multi-round conversational recommendation (MCR) (Han et al., 2017; Chen et al., 2017; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). In this work, we focus on the MCR setting.
MCR is considered to be the most realistic setting in CRS. Unlike dialogue-based recommenders, which need to extract information or generate responses from raw natural language (Wang et al., 2018), MCR focuses on the core logic of the interaction strategy, which involves asking questions (Han et al., 2017; Chen et al., 2018; Wang et al., 2018) and making recommendations. The traditional MCR setting allows users to select only one preferred attribute value at a time, which restricts users' expression in the interaction. To overcome this issue, Zhang et al. (Zhang et al., 2018) propose the MIMCR setting, where a user is allowed to select multiple options for a certain attribute. Though effective, it follows the recommendation philosophy of MCR and directly filters out items whose attributes the user has not mentioned, which can lead to failure since users may not be sure what they want precisely. In our proposed VPMCR setting, we specifically consider users' vague preferences and adjust the recommendation mechanism to consider items with unmentioned attributes, which better reflects users' needs.
### RL-based Recommendation
Reinforcement Learning (RL) is a type of machine learning. It considers how an agent (e.g., a machine) should automatically make decisions within a specific context to pursue a long-term goal. The agent learns and adjusts its policy based on the reward feedback (i.e., reinforcement signals) given by the environment. Recently, RL has shown its effectiveness in recommendation (Chen et al., 2017; Chen et al., 2017; Chen et al., 2018). As fitting user interest is no longer the main bottleneck, recommenders care more about users' long-term satisfaction (Han et al., 2017; Chen et al., 2018; Wang et al., 2018). For instance, Montazeralghaem and Allan (Montazeralghaem and Allan, 2018) use RL to generate questions that best help users find their desired products. Gao et al. (Gao et al., 2018) integrate causal inference into offline RL to maximize users' long-term satisfaction by removing filter bubbles. Sadeghi Eshkweari et al. (Sadeghi Eshkweari et al., 2019) propose an RL-based dispatching solution for ride-hailing platforms that can conduct robust and efficient on-policy learning and inference while being adaptable for full-scale deployment. In this work, we use RL to learn a policy that can automate question-asking and item recommendation.
### Graph-based Recommendation
Graph-based recommender systems have drawn a lot of research attention (Han et al., 2017; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018). By arranging the various entities (e.g., users, items, and attributes) in a heterogeneous graph, we can leverage many useful properties when modeling collaborative signals. In CRS, knowledge graphs are utilized to enrich the system with additional knowledge (Han et al., 2017; Chen et al., 2018; Wang et al., 2018; Wang et al., 2018). For example, to better understand the concepts that a user mentions, Zhou et al. (Zhou et al., 2018) propose to incorporate two external knowledge graphs (KGs): a word-oriented KG providing relations (e.g., synonyms, antonyms, or co-occurrence) between words and an item-oriented KG carrying structured facts regarding the attributes of items. As the number of nodes increases, the computational overhead becomes too large to satisfy the requirements of real-time interaction. Hence, we propose a pruning strategy to overcome this issue.
## 3. Problem Definition
**Vague Preference Multi-round Conversational Recommendation (VPMCR).** In the VPMCR scenario, we consider a dynamic conversation between a user and a conversational recommendation system (CRS). The user has a clear preference space, denoted as \(\mathcal{C}_{CI}\) (e.g., "style" in Fig. 1), and a vague preference space, denoted as \(\mathcal{C}_{VI}\) (e.g., "color" and "pattern" in Fig. 1).
The conversation begins with the user specifying a query attribute \(p_{0}\) (e.g., "T-shirt"), which initializes the candidate item set containing all relevant items (e.g., all "T-shirts") and the candidate attribute set containing all attributes of those items.
During the conversation, the CRS can either ask questions about attributes or provide recommendations. When the CRS asks questions, the user responds accordingly with their behavior depending on whether the attribute type \(c\) belongs to their clear or vague preference space. If \(c\in\mathcal{C}_{CI}\), the user _honestly_ accepts or rejects the displayed attributes. However, if \(c\in\mathcal{C}_{VI}\), the user may _randomly_ accept or reject a potentially preferred attribute. When the CRS provides recommendations, the user can accept or reject one or more items from the recommended set \(\mathcal{V}_{rec}\).
The conversation proceeds through multiple iterations of the CRS asking/recommending and the user responding, until a successful recommendation is made or the maximum number of turns is reached. The VPMCR scenario differs from previous MCR or MIMCR settings in that it does not filter \(\mathcal{V}_{cand}\) based on the user's clicking or non-clicking attributes. Instead, it only removes \(\mathcal{V}_{rec}\) from \(\mathcal{V}_{cand}\) when the recommendation fails. Additionally, all candidate attributes linked to candidate items are maintained in \(\mathcal{P}_{cand}\).
The main challenges in the VPMCR scenario include estimating the vagueness of the user's vague feedback, capturing the user's dynamic preference throughout the conversation, and making conversational decisions that consider the user's vague or dynamic preferences.
## 4. Methodology
To address the challenges in the Vague Preference Multi-round Conversational Recommendation (VPMCR) scenario, we propose the _Adaptive Vague Preference Policy Learning (AVPPL)_ solution. AVPPL consists of two main components: Ambiguity-aware Soft Estimation (ASE) and Dynamism-aware Policy Learning (DPL). The ASE component estimates the vagueness of users' vague feedback and captures their dynamic preferences, while the DPL component leverages the preference distribution estimated by ASE to guide the conversation and adapt to changes in users' preferences. By incorporating the VPMCR scenario and the AVPPL solution, we aim to improve the overall performance and applicability of conversational recommendation systems in real-world settings, particularly for users with vague or dynamic preferences.
### Ambiguity-aware Soft Estimation
Ambiguity-aware Soft Estimation (ASE) aims to estimate the vagueness of the user's vague feedback in each turn by considering both explicit and implicit preferences. ASE focuses on understanding users' decision-making processes (Chen et al., 2018), which reflect the trade-offs they make when providing non-binary feedback. To capture users' dynamic preferences throughout the conversation, ASE employs a
time-aware preference decay strategy that combines users' recent preferences with fading historical preferences.
In the VPMCR setting, we model the signals of clicking and non-clicking separately, based on users' decision-making awareness in choice-based questions. For each turn, the preference implied by clicked and non-clicked choices is extracted, and then a decay mechanism is used to down-weight the preferences of historical turns. Finally, in the soft estimation, we derive the user's preference distribution over items and attributes.
#### 4.1.1. Preference Extraction with Choice-based Approach
In each turn of interaction, user preference can be divided into personalized user preference and choice-based preference. We adopt a common personalization modeling strategy (Kang et al., 2017) to represent the static preference of user \(u\) for item \(v\) as:
\[w_{v\text{-}u}=e_{u}^{\top}e_{v}, \tag{1}\]
where \(e_{u}\) and \(e_{v}\) denote the embedding vectors of user \(u\) and item \(v\), respectively.
To model users' decision-making processes, ASE employs a choice-based preference extraction method that considers the trade-offs users make when providing non-binary feedback. This approach captures both _explicit preferences_ (when users actively select an attribute) and _implicit preferences_ (when users do not select an attribute but may still have some preference for it) by estimating the importance of clicking choices and non-clicking choices separately.
For item \(v\), we estimate the importance of clicked choices and non-clicked choices separately. In turn \(t\), the formulas for capturing the user's explicit preference towards the clicked choices \(\mathcal{P}_{\text{click}}^{(t)}\) and implicit preference towards the non-clicked choices \(\mathcal{P}_{\text{noclick}}^{(t)}\) are as follows:
\[w_{v\text{-}click}^{(t)}=\frac{1}{|\mathcal{P}_{\text{click}}^{(t)}|}\sum_{p\in\mathcal{P}_{\text{click}}^{(t)}}\left(e_{v}^{\top}e_{p}-w_{v\text{-}avg}^{(t)}\right),\qquad w_{v\text{-}noclick}^{(t)}=\frac{1}{|\mathcal{P}_{\text{noclick}}^{(t)}|}\sum_{p\in\mathcal{P}_{\text{noclick}}^{(t)}}\left(e_{v}^{\top}e_{p}-w_{v\text{-}avg}^{(t)}\right), \tag{2}\]
where \(|\mathcal{P}_{\text{click}}^{(t)}|\) and \(|\mathcal{P}_{\text{noclick}}^{(t)}|\) indicate the numbers of clicked and non-clicked attributes in turn \(t\), respectively. \(w_{v\text{-}avg}^{(t)}\) measures the average preference towards all unshown attributes of the queried attribute type and is used to mitigate over-estimation of the system-displayed choices; it is defined as:
\[w_{v\text{-}avg}^{(t)}=\sum_{p\in\mathcal{P}_{\text{nobow}}^{(t)}}e_{v}^{\top} e_{p}\bigg{/}|\mathcal{P}_{\text{noshow}}^{(t)}|, \tag{3}\]
where \(e_{v}\) and \(e_{p}\) represent the embedding vectors of item \(v\) and attribute \(p\), respectively, and \(\mathcal{P}_{\text{noshow}}^{(t)}\) refers to the set of all unshown attributes associated with the specified attribute type in turn \(t\).
By considering both the personalized preference and the choice-based preferences in turn \(t\), the user's preference for item \(v\) in turn \(t\) can be calculated as:
\[w_{v}^{(t)}=\sigma(w_{v\text{-}u}+\lambda_{1}w_{v\text{-}click}^{(t)}+\lambda_ {2}w_{v\text{-}noclick}^{(t)}), \tag{4}\]
where \(\sigma\) is the sigmoid function, and \(\lambda_{1}\) and \(\lambda_{2}\) are intensity coefficients controlling the contributions of the information contained in the user's clicked and non-clicked attributes, respectively.
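To make Eqs. (1)-(4) concrete, the following Python sketch scores a single item for one turn. The function signature, the handling of empty choice sets, and the use of NumPy are our own assumptions; the default \(\lambda\) values correspond to those reported in Section 5.2.4.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def turn_item_score(e_u, e_v, E_click, E_noclick, E_noshow, lam1=0.1, lam2=0.01):
    """Per-turn preference w_v^(t) for one item v (Eqs. 1-4).

    e_u, e_v           : user / item embeddings, shape (d,)
    E_click, E_noclick : embeddings of clicked / shown-but-not-clicked attributes, shape (k, d)
    E_noshow           : embeddings of unshown attributes of the asked attribute type, shape (m, d)
    """
    w_static = e_u @ e_v                                               # Eq. (1)
    w_avg = float(np.mean(E_noshow @ e_v)) if len(E_noshow) else 0.0   # Eq. (3)
    w_click = float(np.mean(E_click @ e_v - w_avg)) if len(E_click) else 0.0
    w_noclick = float(np.mean(E_noclick @ e_v - w_avg)) if len(E_noclick) else 0.0
    return sigmoid(w_static + lam1 * w_click + lam2 * w_noclick)       # Eq. (4)
```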
#### 4.1.2. Time-aware Preference Decay
In dynamic conversation interactions, the user's global preferences should be viewed as a combination of preferences across all turns. We employ a decay mechanism to adjust the influence of historical preferences, enabling the model to focus more on the user's real-time feedback in the current turn and mitigating the over-emphasized impact related to the user's clicking behavior.
Figure 2. Adaptive Vague Preference Policy Learning (AVPPL) solution for VPMCR scenario.
To combine the user's current preference with historical decay preferences, the user's global preference toward the item is estimated as follows:
\[\hat{w}_{v}^{(t)}=w_{v}^{(t)}+\gamma\hat{w}_{v}^{(t-1)}, \tag{5}\]
which can be unfolded as:
\[\hat{w}_{v}^{(t)}=\sum_{i=0}^{t}\gamma^{t-i}w_{v}^{(i)}, \tag{6}\]
where \(\hat{w}_{v}^{(t)}\) denotes the user's accumulated (global) preference for item \(v\) up to turn \(t\), \(w_{v}^{(i)}\) is the per-turn preference of Eq. (4), and \(\gamma\) is a decay factor satisfying \(0\leq\gamma\leq 1\). The farther the interaction history is from the current turn, the less impact it has on the current turn. \(\gamma\) should be carefully chosen to balance the influence of historical preferences and the user's real-time feedback.
Finally, for turn \(t\), the user's global preference distribution over items, \(f_{u}^{(t)}(v)\), can be calculated by estimating the global preference \(\hat{w}_{v}^{(t)}\) for each item \(v\) in the candidate item set \(\mathcal{V}_{\text{cand}}\). When the size of the candidate item set is \(n\), the soft-estimation distribution over items is:
\[f_{u}^{(t)}(v)=\{\hat{w}_{v_{1}}^{(t)},\hat{w}_{v_{2}}^{(t)},...,\hat{w}_{v_{n}}^{(t)}\} \tag{7}\]
Similarly, by replacing items with attributes in the aforementioned equations, we derive the user's global preference distribution towards the candidate attribute set \(\mathcal{P}_{\text{cand}}\). When the size of the candidate attribute set is \(m\), the soft estimation for attributes is depicted by the following distribution:
\[f_{u}^{(t)}(p)=\{\hat{w}_{p_{1}}^{(t)},\hat{w}_{p_{2}}^{(t)},...,\hat{w}_{p_{m}}^{(t)}\} \tag{8}\]
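A compact sketch of the decay and soft estimation (Eqs. 5-8) follows. Whether the accumulated scores are further normalised into a probability distribution is not specified in the text, so this sketch simply returns the raw scores; the function name and interface are assumptions.

```python
import numpy as np

def soft_estimation(per_turn_scores, gamma=0.1):
    """Accumulate per-turn scores of all candidates into global preferences.

    per_turn_scores : array of shape (T, n), per-turn score of each of n candidates
    gamma           : decay factor, 0 <= gamma <= 1
    returns         : global scores at the last turn (one entry per candidate)
    """
    acc = np.zeros(per_turn_scores.shape[1])
    for w_t in per_turn_scores:       # oldest turn first
        acc = w_t + gamma * acc       # recent feedback dominates, history fades
    return acc
```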
### Dynamism-aware Policy Learning (DPL)
The Dynamism-aware Policy Learning (DPL) module utilizes the preference distribution estimated by the Ambiguity-aware Soft Estimation (ASE) module to guide the conversation and adapt to preference changes. The DPL module, as part of the Adaptive Vague Preference Policy Learning (AVPPL) solution, aims to enhance CRS performance for users with vague or dynamic preferences.
#### 4.2.1. Graph-based Conversation Modeling
Building on previous work (Gan et al., 2017; Wang et al., 2018), we represent the current conversation state at turn \(t\) using a dynamic undirected graph \(\mathcal{G}_{u}^{(t)}=(\mathcal{N}^{(t)},\mathbf{A}^{(t)})\). This graph is a subgraph of the heterogeneous graph consisting of users, items, and attributes.
The nodes in the graph, \(\mathcal{N}^{(t)}\), are defined as follows:
\[\mathcal{N}^{(t)}=\{u\}\cup\mathcal{P}_{\text{click}}\cup\mathcal{P}_{\text{noclick}}\cup\mathcal{P}_{\text{cand}}^{(t)}\cup\mathcal{V}_{\text{sample}}^{(t)} \tag{9}\]
The node set \(\mathcal{N}^{(t)}\) contains the user, clicked attributes, non-clicked attributes, current candidate attributes, and current sampled candidate items.
To address the issue of a large number of candidate items in the VPMCR setting, we implement a sampling strategy for candidate items \(\mathcal{V}_{\text{sample}}^{(t)}\) by randomly selecting from the candidate items in each turn \(t\).
The weighted adjacency matrix, \(\mathbf{A}^{(t)}\), is defined as:
\[A_{i,j}^{(t)}=\left\{\begin{array}{ll}\hat{w}_{v}^{(t)},&\text{if }n_{i}=u,\ n_{j}=v\in\mathcal{V}\\ 1,&\text{if }n_{i}\in\mathcal{V},\ n_{j}\in\mathcal{P}\\ 0,&\text{otherwise}\end{array}\right. \tag{10}\]
The weight \(\hat{w}_{v}^{(t)}\) denotes the user's estimated vague preference for item \(v\), calculated via Eq. (6) within the ASE module. The weight of the edge between an item and each of its associated attributes is set to 1.
A Graph Convolutional Network (GCN) (Gan et al., 2017) enhances the node representations \(\mathcal{E}_{\text{node}}\) by capturing the changing interrelationships within the current conversation state \(\mathcal{G}_{u}^{(t)}\). To encode the representation of the clicking history, \(\mathcal{P}_{\text{click}}\), we employ a Transformer (Gan et al., 2017) to learn sequential patterns of the conversation history. Lastly, we obtain the conversation state \(s_{\text{conv}}^{(t)}\) by applying mean pooling to the node embeddings of the Transformer output, as follows:
\[s_{\text{conv}}^{(t)}=\text{MeanPool}(\text{Transformer}(\mathcal{E}_{ \mathcal{P}_{\text{click}}})). \tag{11}\]
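The encoder below is a deliberately simplified PyTorch sketch of Eq. (11): one mean-aggregation GCN-style propagation over the weighted adjacency (the full architecture in Section 5.2.4 uses two GNN layers), a single Transformer layer over the clicked-attribute history, and mean pooling. Layer sizes, module names, and the single-sample batching are our assumptions.

```python
import torch
import torch.nn as nn

class ConversationStateEncoder(nn.Module):
    def __init__(self, dim=64, hidden=100):
        super().__init__()
        self.gcn = nn.Linear(dim, dim)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=hidden, batch_first=True)

    def forward(self, node_emb, adj, click_idx):
        # node_emb: (N, dim); adj: (N, N) weighted adjacency with ASE scores;
        # click_idx: indices of the clicked-attribute nodes (conversation history)
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.gcn((adj @ node_emb) / deg))   # one propagation step
        seq = h[click_idx].unsqueeze(0)                    # (1, |P_click|, dim)
        out = self.encoder(seq)                            # sequential patterns of the history
        return out.mean(dim=1).squeeze(0)                  # conversation state s_conv
```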
#### 4.2.2. Vague Preference Policy Learning
We employ a Deep Q-Network (DQN) algorithm to address the challenge of making conversational decisions that consider users' vague or dynamic preferences in CRS. The DQN algorithm has proven effective at learning action policies in dynamic environments modeled as Markov Decision Processes (MDPs), making it well-suited for predicting the next decision based on a series of historical choices.
The Q-value function \(Q\left(s_{t},a_{t}\right)\) of a policy \(\pi\) is defined to measure the expectation of the accumulated rewards based on the state \(s\) and the action \(a\). We adopt the same Dueling DQN and prioritized experience replay as in UNICORN (Gan et al., 2017) to optimize the Q-function \(Q^{*}\left(s_{t},a_{t}\right)\):
\[Q^{*}(s_{t},a_{t})=\max_{\pi}\mathbb{E}[R_{t+1}+\gamma\max_{a}Q^{\pi}(s_{t+1},a )|s_{t},a_{t}] \tag{12}\]
where \(\pi\) is the policy, \(R_{t+1}\) is the reward at turn \(t+1\), \(\gamma\) is the discount factor, and \(Q^{\pi}(s_{t+1},a)\) is the estimated action-value function for the next state and action.
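The one-step temporal-difference target implied by this objective can be written as follows. This is a generic DQN target, not the exact dueling/prioritized-replay implementation adopted from UNICORN, and the discount factor below is distinct from the preference-decay factor of the ASE module.

```python
import torch

def td_target(reward, next_q_values, done, discount=0.99):
    """One-step target for regressing Q(s_t, a_t) as in Eq. (12).

    reward        : (batch,) rewards R_{t+1}
    next_q_values : (batch, |A|) Q-values of the next state over the pruned action space
    done          : (batch,) 1.0 if the episode ended, else 0.0
    """
    return reward + discount * (1.0 - done) * next_q_values.max(dim=-1).values
```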
To enhance sampling efficiency, we employ a preference-guided action pruning strategy. Specifically, we select the top-N items \(\mathcal{V}_{\text{top}}^{(t)}\) and attributes \(\mathcal{P}_{\text{top}}^{(t)}\) with the highest preference scores from ASE to construct a pruned action space. Focusing on likely preferred items and attributes improves learning efficiency. To maintain a balance between efficiency and performance, we adopt the action space size configuration from previous work (Gan et al., 2017; Wang et al., 2018), setting it as \(N=10\). The pruning action space is defined as:
\[\mathcal{A}_{\text{action}}^{(t)}=\mathcal{V}_{\text{top-}N}^{(t)}\cup\mathcal{P}_{\text{top-}N}^{(t)} \tag{13}\]
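Pruning itself is a simple top-N selection on the ASE scores; a sketch with illustrative argument names is:

```python
import numpy as np

def pruned_action_space(item_ids, item_scores, attr_ids, attr_scores, n=10):
    """Keep the N highest-scoring items and attributes (Eq. 13)."""
    top_items = [item_ids[i] for i in np.argsort(item_scores)[::-1][:n]]
    top_attrs = [attr_ids[i] for i in np.argsort(attr_scores)[::-1][:n]]
    return top_items + top_attrs
```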
For policy learning, the conversation state \(s_{\text{conv}}^{(t)}\) captures the user's dynamic conversation state. The pruned action space \(\mathcal{A}_{\text{action}}^{(t)}\) is determined by the preference-guided action pruning strategy, which partially reflects the user's estimated vague preference distribution. The reward \(R\) follows the previous MCR setting (Kang et al., 2017), and the detailed settings are described in Section 5.2.4.
## 5. Experiments
In this section, we evaluate the proposed method in VPMCR. We use the following research questions (RQs) to guide our experiment.
* **RQ1.** How does our AVPPL method perform in comparison to state-of-the-art CRS methods in the VPMCR scenario?
* **RQ2.** How do the key components contribute to the overall performance of our AVPPL method?
* **RQ3.** How do the hyperparameters of our method affect its performance?
### Dataset Description
We introduce four datasets, whose statistics are shown in Table 1.
* **Yelp and LastFM (Kumar et al., 2017):** Yelp1 and LastFM2 datasets are used for business and music artist recommendations, respectively. We follow the multiple attribute question settings, retaining the original attribute instances and extracting the attribute types they depend on. In Yelp, we utilize the 2-layer taxonomy designed by (Kumar et al., 2017), resulting in 29 categories in the first layer as attribute types and 590 attributes in the second layer as attribute instances. For LastFM, we follow (Kumar et al., 2017), retaining the original 8,438 attributes as attribute instances and employing clustering to obtain 34 attribute types.
Footnote 1: [https://www.yelp.com/dataset/](https://www.yelp.com/dataset/)
* **Amazon-Book (Kumar et al., 2017):** Amazon Book3 is a widely used product recommendation dataset. We retain users and items with at least 10 interaction records and consider entities (e.g., science fiction) and relations (e.g., genre) in the knowledge graph as attribute instances and attribute types, respectively. Footnote 2: [https://grouplens.org/datasets/hetrec-2011/](https://grouplens.org/datasets/hetrec-2011/)
Footnote 3: [http://jmcauley.ucsd.edu/data/amazon](http://jmcauley.ucsd.edu/data/amazon).
* **MovieLens:** MovieLens is a movie rating dataset. We adopt the MovieLens-20M4 dataset, following (Kumar et al., 2017), retaining interactions with ratings \(>\) 3 and selecting knowledge graph (KG) entities and relations as attribute instances and attribute types.
Footnote 4: [https://grouplens.org/datasets/movielens/](https://grouplens.org/datasets/movielens/)
### Experimental Setup
#### 5.2.1. User Simulator in VPMCR
Conversational recommendation systems (CRSs) are interactive and require training and evaluation through user interactions. However, obtaining data directly from users in a research lab is impractical, so employing a user simulator is a common practice (Beng et al., 2015). The user simulator simulates users' interaction records in the training and test sets.
In the VPMCR scenario, we adopt a user simulation strategy similar to that in MIMCR (Kumar et al., 2017), considering the reasonableness of the multi-interest setting. For a given observed user-items interaction pair \((u,\mathcal{V}_{u})\), we simulate a conversation session. Each item \(v\) in \(\mathcal{V}_{u}\) is treated as a ground-truth target item, and the unions of attribute types and attributes associated with the items are considered as the user's ground-truth intent space \(\mathcal{C}_{u}\) and ground-truth attribute space \(\mathcal{P}\), respectively. The conversation session is initialized when the user specifies an attribute \(p_{0}\) common to all items in \(\mathcal{V}_{u}\), and the user's clear preference space \(\mathcal{C}_{CI}\) and vague preference space \(\mathcal{C}_{VI}\) are randomly initialized from the ground-truth intent space \(\mathcal{C}_{u}\).
During the interaction, we use the ground-truth attribute space \(\mathcal{P}\) as a criterion for the user simulator's acceptance or rejection. The detailed interaction process follows the "system asks or recommends and user responds" rules outlined in Section 3.
#### 5.2.2. Action Inference
The action inference involves either recommending items or asking an attribute-related question.
(1) **Recommendation**: If an item \(v\) in the action space has the highest Q-value, the CRS makes a recommendation, resulting in a new action space \(\mathcal{A}^{(t)}=\mathcal{V}^{(t)}_{top}\).
(2) **Questioning**: If an attribute \(p\) in the action space has the highest Q-value, the CRS asks a question. In a multiple-choice setting, a two-level decision process is employed: first selecting an attribute type, then presenting several attributes within that type. A sum-based strategy (Kumar et al., 2017) is used to determine the attribute type for questioning. Specifically, Q-values of all attributes within the attribute action space \(\mathcal{P}^{(t)}_{top}\) are summed and allocated to their respective attribute types. The attribute type with the highest total value is selected for questioning, and the top \(K\) attributes with the highest Q-values within that type are presented to the user.
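The two-level questioning decision can be sketched as below; the dictionary-based interface and argument names are assumptions made for clarity.

```python
from collections import defaultdict

def choose_question(attr_action_space, q_values, attr_type_of, k=2):
    """Sum-based strategy: pick the attribute type whose candidate attributes
    have the largest total Q-value, then display its top-K attributes."""
    per_type = defaultdict(list)
    for p in attr_action_space:
        per_type[attr_type_of[p]].append(p)

    best_type = max(per_type, key=lambda c: sum(q_values[p] for p in per_type[c]))
    shown = sorted(per_type[best_type], key=lambda p: q_values[p], reverse=True)[:k]
    return best_type, shown
```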
#### 5.2.3. Baselines
We use the following baselines. For fairness, all baselines are compared in the VPMCR scenario.
* **Max Entropy**. It selects the attribute with the maximum information entropy, and the probability of making a recommendation is inversely related to the number of candidate items.
* **CRM**(Kumar et al., 2017). It employs a belief tracker to record user preferences as conversation state representation vectors and applies them to a reinforcement learning decision module and factorization machine (FM) recommendation modules.
* **EAR**(Kumar et al., 2017). This method adopts the three-stage solution framework to enhance the interaction between the conversation component and the recommendation component.
* **SCPR**(Kumar et al., 2017). SCPR leverages graph-based path reasoning to prune useless candidate attributes. It separates attribute selection from reinforcement learning, which is only used for determining when to ask and recommend.
* **UNICORN**(Kumar et al., 2017). A state-of-the-art method for the MCR scenario that proposes a unified policy learning framework using dynamic graphs to model conversation states and employs a preference-based scoring to reduce reinforcement learning action space.
* **MCMIPL**(Kumar et al., 2017). It considers the user's multi-interest space and extends the MCR scenario to a more realistic MIMCR scenario. This method also follows the graph-based unified reinforcement learning framework and employs the multi-interest encoder to learn the conversation state.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Dataset** & **Yelp** & **LastFM** & **Amazon-Book** & **MovieLens** \\ \hline \#Users & 27,675 & 1,801 & 30,291 & 20,892 \\ \#Items & 70,311 & 7,432 & 17,739 & 16,482 \\ \#Interactions & 1,368,609 & 76,693 & 478,099 & 454,011 \\ \#Attributes & 590 & 8,438 & 988 & 1,498 \\ Attribute-types & 29 & 34 & 40 & 24 \\ \hline \#Entities & 98,576 & 17,671 & 49,018 & 38,872 \\ \#Relations & 3 & 4 & 2 & 2 \\ \#Triplets & 2,533,827 & 228,217 & 565,068 & 380,016 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of datasets.
#### 5.2.4. Training Details
We split each dataset into training, validation, and testing sets (7:1.5:1.5). In the user simulator, we set the maximum conversation turn \(T\) to 15 and the number of target items in \(\mathcal{V}_{u}\) to 2. We use uniform sampling to initialize the user's vague and clear preference spaces.
In the ASE module, we set the intensity coefficients \(\lambda_{1}\) and \(\lambda_{2}\) to 0.1 and 0.01, respectively, and the decay discount factor to 0.1. In the DPL module, when constructing the dynamic graph, random sampling is employed to select candidate items when the number of candidates exceeds 5000. The graph-based conversation modeling architecture consists of two GNN layers and one Transformer layer. We fix the embedding size and hidden size at 64 and 100, respectively. For action pruning in RL, we set the size of the item space and attribute space to 10 (i.e., \(N=10\)). For action inference, we set the number of attributes displayed to the user to 2 (i.e., \(K=2\)). Following (Beng et al., 2015), we use TransE (Beng et al., 2015), implemented through OpenKE (Kang et al., 2015), to pre-train the graph node embeddings. For a fair comparison, we train the DQN for 10,000 episodes using the same reward settings as the benchmarks: \(r_{\text{rec-succ}}=1\), \(r_{\text{rec-fail}}=-0.01\), \(r_{\text{ask-succ}}=-0.1\), \(r_{\text{ask-fail}}=-0.1\), and \(r_{\text{quit}}=-0.3\). The experience replay buffer size is 50,000, the mini-batch size is 128, and the learning rate is 1e-4 with L2 regularization of 1e-6, optimized using Adam.
#### 5.2.5. Evaluation Metrics
We evaluate performance using success rate (SR@\(T\)) and average turns (AT). SR@\(T\) measures the percentage of successful recommendations within \(T\) turns; higher is better. AT measures the average conversation length; lower indicates greater efficiency.
We use the hierarchical normalized discounted cumulative gain (hDCG@\((T,K)\)) to evaluate the ranking of the top-\(K\) recommendations within \(T\) turns. hDCG assigns higher scores to recommendations that are more relevant to the user. A higher hDCG@\((T,K)\) indicates better ranking performance.
### Performance comparison of AVPPL with existing models (RQ1)
Table 2 reports the SR@\(15\), AT and hDCG@\((15,10)\) for AVPPL and the baseline models. AVPPL achieved significantly better results on all metrics and datasets, demonstrating its effectiveness in the VPMCR scenario. The performance gap was largest on MovieLens, likely because movie recommendation is a relatively simple task and AVPPL better models user preferences for items.
Fig. 3 shows the relative success rate (SR*) of each model at every turn compared to the MCMIPL baseline (represented by the dark green line at \(y=0\)). Observing the variation trend of curves in Fig. 3, we have the following findings:
* AVPPL almost consistently and substantially surpassed all baselines over the entire conversation session across datasets. Specifically, AVPPL achieved a high success rate in the first few turns on MovieLens, demonstrating its ability to precisely capture users' preferences.
* As the conversation continues, the performance gap between AVPPL and other baselines widened, especially compared to Max Entropy. The lack of an adaptive policy caused Max Entropy to require excessive turns, while AVPPL dynamically predicts the best action via personalized policies learned through RL.
* Reinforcement learning-based methods like CRM and EAR lag behind more advanced models, as they directly apply RL to a large decision space without effectively representing the conversation state, hindering optimal policy learning. In contrast, graph-based models like SCPR, UNICORN and MCMIPL achieve state-of-the-art performance on some datasets, but underperform AVPPL.
### Evaluating Key Design in AVPPL (RQ2)
#### 5.4.1. Key Components of AVPPL
We examine the effectiveness of Ambiguity-aware Soft Estimation (ASE), our framework's main design, in guiding conversations and adapting to user preference changes in VPMCR scenarios. We separately remove the ASE module for items and attributes (Section 4.1) and replace them with a preference-based scoring strategy (Beng et al., 2015; Beng et al., 2015), which models user preferences using historical click or non-click attributes as mixed signals.
Table 3 rows (a-b) display the ablation study results. Removing the ASE module for both items and attributes significantly degrades performance across all datasets, emphasizing the importance of considering user preference vagueness. The ASE module allows our model to learn a sophisticated conversational state representation and prune a more reasonable action space for the Dynamism-aware Policy Learning (DPL) module, enhancing the upper bound for unified policy learning.
We also find that the ASE component is more effective in measuring user preferences for items than attributes in VPMCR scenarios, suggesting that click behavior provides more direct item-related information.
#### 5.4.2. Key Components of ASE
Table 3 rows (c-e) present the ablation experiments for the ASE component. Row (c) shows that personalized information for user modeling is crucial; without it, the model cannot capture personalized preferences, severely limiting performance. Removing the average preference in Equation 3 (Row (d)) degrades performance across all datasets, with LastFM suffering the most. This may be due to LastFM's numerous attributes and
Figure 3. SR* of compared methods at different turns on four datasets (RQ1)
the significant impact of non-displayed attribute information on user preference estimation. Additionally, we remove the historical decay preference in time-aware preference decay (Row (e)), leading to performance degradation on three datasets except for MovieLens. On MovieLens, ASE without decaying information reliably estimates preferences in the current turn, and recommendations succeed within 1-2 rounds. Thus, introducing historical decay preference in short interactive rounds may weaken preference inference on MovieLens.
Overall, the results confirm the ASE module's importance and the proposed AVPPL framework's effectiveness.
#### 5.4.3. VPMCR vs. MIMCR Scenarios
To comprehensively evaluate AVPPL's effectiveness in modeling user preferences based on click behaviors, we relax the scenario assumption and employ the MIMCR scenario involving multi-choice question interactions. In MIMCR, user feedback signals are treated as strong indicators to filter items.
Table 3 compares AVPPL's performance with the advanced baselines in the MIMCR scenario. Our method shows significant advantages on the Yelp, Amazon-Book, and MovieLens datasets. On LastFM, although slightly inferior to MCMIPL in SR and AT, AVPPL outperforms all baselines w.r.t. hDCG. These results confirm AVPPL's effectiveness in eliciting user preferences in multi-choice question scenarios, demonstrating its universality and effectiveness in handling both the VPMCR and MIMCR scenarios.
### Model Parameter Analysis (RQ3)
Previous work on graph-based policy learning (Beng et al., 2017) has conducted hyperparameter analysis regarding policy learning. Here we focus on the impact of the hyperparameters of the core module (ASE) of AVPPL in the VPMCR scenario. Due to the
\begin{table}
\begin{tabular}{c c|c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c}{**Yelp**} & \multicolumn{3}{c}{**Amazon-Book**} \\ \cline{2-10} & \(\lambda_{2}\) & 0.01 & 0.1 & 1 & 0.01 & 0.1 & 1 \\ \hline \multirow{4}{*}{\(\lambda_{1}\)} & 0.01 & **0.414** & 0.408 & 0.328 & 0.424 & **0.430** & 0.400 \\ & 0.1 & 0.398 & 0.410 & 0.344 & 0.424 & 0.414 & 0.384 \\ \cline{1-1} & 1 & 0.394 & 0.370 & 0.302 & 0.420 & 0.398 & 0.406 \\ \hline \hline \end{tabular}
\end{table}
Table 4. The impact of the coefficient of information intensity w.r.t. **SR@15**.
Figure 4. Comparative performance analysis of Success Rate with varying decay factor (left) and proportion of vague preference (right) hyperparameters.(RQ3).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{3}{c}{**Yelp**} & \multicolumn{3}{c}{**LastFM**} & \multicolumn{3}{c}{**Amazon-Book**} & \multicolumn{3}{c}{**MovieLens**} \\ \cline{2-10} & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** \\ \hline Max Entropy & 0.062 & 14.44 & 0.030 & 0.376 & 11.25 & 0.189 & 0.180 & 12.91 & 0.107 & 0.448 & 9.93 & 0.315 \\ CRM & 0.212 & 13.27 & 0.070 & 0.372 & 12.26 & 0.126 & 0.296 & 12.34 & 0.109 & 0.780 & 5.96 & 0.341 \\ EAR & 0.232 & 13.05 & 0.080 & 0.414 & 11.61 & 0.146 & 0.324 & 12.14 & 0.119 & 0.792 & 5.50 & 0.361 \\ SCPR & 0.322 & 12.34 & 0.115 & 0.596 & 10.18 & 0.206 & 0.374 & 11.62 & 0.139 & 0.806 & 4.90 & 0.387 \\ UNICORN & 0.314 & 12.11 & 0.140 & 0.632 & 9.17 & 0.280 & 0.396 & 11.05 & 0.193 & 0.810 & 4.81 & 0.548 \\ MCMIPL & 0.322 & 12.16 & 0.136 & 0.634 & 9.52 & 0.267 & 0.412 & 10.90 & 0.205 & 0.820 & 4.39 & 0.579 \\ \hline
**AVPPL** & **0.398** & **11.26** & **0.175** & **0.686** & **8.58** & **0.306** & **0.424** & **10.75** & **0.206** & **1.000** & **1.60** & **0.689** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance comparison of different models in VPMCR scenario. hDCG stands for hDCG@(15, 10).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Yelp**} & \multicolumn{3}{c}{**LastFM**} & \multicolumn{3}{c}{**Amazon-Book**} & \multicolumn{3}{c}{**MovieLens**} \\ \cline{2-13} & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** & **SR@15** & **AT** & **hDCG** \\ \hline \multicolumn{13}{c}{**AVPPL - (VPMCR)**} \\ \hline
**(a)** - w/o ASE Item.Score & 0.328 & 12.04 & 0.144 & 0.618 & 9.35 & 0.271 & 0.386 & 11.17 & 0.189 & 0.852 & 3.84 & 0.593 \\ (b)** - w/o ASE Att.Score & 0.354 & 11.88 & 0.149 & 0.614 & 9.44 & 0.267 & 0.412 & 10.91 & 0.199 & 1.000 & 1.75 & 0.663 \\ (c)** - w/o Personalized Preference & 0.142 & 13.84 & 0.060 & 0.444 & 10.79 & 0.211 & 0.284 & 12.10 & 0.142 & 0.858 & 5.22 & 0.492 \\ (d)** - w/o Average Preference & 0.368 & 11.38 & 0.169 & 0.630 & 9.24 & 0.269 & 0.416 & 10.84 & 0.199 & 1.000 & 1.77 & 0.668 \\ (e)** - w/o Decaying Preference & 0.382 & 11.56 & 0.163 & 0.628 & 9.15 & 0.280 & 0.410 & 11.05 & 0.190 & 1.000 & **1.49** & **0.708** \\ \hline
**AVPPL - (MIMCR)** & **0.636** & **10.68** & **0.210** & 0.840 & 7.33 & **0.350** & **0.610** & **9.81** & **0.251** & **0.988** & **2.42** & **0.640** \\ \hline MCMIPL - (MIMCR) & 0.552 & 10.95 & 0.204 & **0.856** & **7.21** & 0.342 & 0.544 & 10.32 & 0.239 & 0.838 & 4.23 & 0.602 \\ UNICORN - (MIMCR) & 0.454 & 11.01 & 0.188 & 0.832 & 7.42 & 0.350 & 0.530 & 10.23 & 0.231 & 0.832 & 4.35 & 0.567 \\ SCPR - (MIMCR) & 0.452 & 12.52 & 0.136 & 0.688 & 10.27 & 0.220 & 0.450 & 11.10 & 0.167 & 0.834 & 4.80 & 0.392 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Ablation study of AVPPL in VPMCR (top) and comparison of AVPPL with other baselines in MIMCR (bottom).
limited space, we only present results for Yelp and Amazon-Book, but note that LastFM and Movielens exhibit similar trends.
#### 5.5.1. Hyperparameter Analysis in ASE
We identified two key hyperparameters: (1) The information intensity coefficients \(\lambda_{1}\) and \(\lambda_{2}\) control the importance of explicit versus implicit preferences. The results presented in Table 4 show that larger \(\lambda_{1}\) and smaller \(\lambda_{2}\) resulted in higher success rates, indicating that explicit preferences (\(\lambda_{1}\)) are more crucial than implicit preferences (\(\lambda_{2}\)) in VPMCR. Notably, performance decreases when both \(\lambda_{1}\) and \(\lambda_{2}\) are large, especially for sparser datasets like Yelp, posing a challenge to the model's robustness. (2) The decay factor \(\gamma\) controls the trade-off between recent and historical preferences. Fig. 4 shows that a moderate decay factor (0.6-0.8) performs best, suggesting that a balance between recent and historical preferences is optimal. Extreme values (0.1 and 1.0) perform poorly, indicating that disregarding historical preferences or solely relying on recent ones is suboptimal.
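To make the role of these hyperparameters concrete, the following minimal Python sketch aggregates per-turn preference scores with an exponential decay. It is only a schematic illustration: the function and variable names (`decayed_preference`, `explicit_scores`, `implicit_scores`, `lam1`, `lam2`, `gamma`) are assumptions of ours, since the exact ASE update equations are not reproduced in this excerpt.

```python
import numpy as np

def decayed_preference(explicit_scores, implicit_scores,
                       lam1=0.01, lam2=0.1, gamma=0.7):
    """Schematic aggregation of turn-wise preference scores.

    explicit_scores / implicit_scores: lists of per-turn score vectors
    (oldest turn first). lam1 and lam2 play the role of the information
    intensity coefficients; gamma is the decay factor that down-weights
    older turns relative to the most recent one.
    """
    n_turns = len(explicit_scores)
    pref = np.zeros_like(explicit_scores[0], dtype=float)
    for t in range(n_turns):
        turn_score = lam1 * explicit_scores[t] + lam2 * implicit_scores[t]
        # the most recent turn gets weight 1, earlier turns decay geometrically
        pref += (gamma ** (n_turns - 1 - t)) * turn_score
    return pref

# toy usage: 3 turns, 4 candidate attributes
explicit = [np.array([1., 0., 0., 0.]),
            np.array([0., 1., 0., 0.]),
            np.array([0., 0., 1., 0.])]
implicit = [np.array([0.2, 0.1, 0.0, 0.0])] * 3
print(decayed_preference(explicit, implicit))
```

With \(\gamma\) close to 1 all turns contribute almost equally, while \(\gamma\) close to 0 makes the estimate rely almost entirely on the latest turn, mirroring the trade-off observed in Fig. 4.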
|
2310.19456 | Boundary Sidewise Observability of the Wave Equation | The wave equation on a bounded domain of $\R^{n}$ with non homogeneous
boundary Dirichlet data or sources supported on a subset of the boundary is
considered. We analyze the problem of observing the source out of boundary
measurements done away from its support.
We first show that observability inequalities may not hold unless an infinite
number of derivatives are lost, due to the existence of solutions that are
arbitrarily concentrated near the source.
We then establish observability inequalities in Sobolev norms, under a
suitable microlocal geometric condition on the support of the source and the
measurement set, for sources fulfilling pseudo-differential conditions that
exclude these concentration phenomena.
The proof relies on microlocal arguments and is essentially based on the use
of microlocal defect measures. | Belhassen Dehman, Enrique Zuazua | 2023-10-30T11:31:10Z | http://arxiv.org/abs/2310.19456v2 | # Boundary sidewise observability of the wave equation
###### Abstract.
The wave equation on a bounded domain of \(\mathbb{R}^{n}\) with non homogeneous boundary Dirichlet data or sources supported on a subset of the boundary is considered. We analyze the problem of observing the source out of boundary measurements done away from its support.
We first show that observability inequalities may not hold unless an infinite number of derivatives are lost, due to the existence of solutions that are arbitrarily concentrated near the source.
We then establish observability inequalities in Sobolev norms, under a suitable microlocal geometric condition on the support of the source and the measurement set, for sources fulfilling pseudo-differential conditions that exclude these concentration phenomena.
The proof relies on microlocal arguments and is essentially based on the use of microlocal defect measures.
2010 Mathematics Subject Classification: 34L20, 35Pxx, 35Q93, 58J40, 93Dxx.
###### Contents
* 1 Introduction
* 1.1 General setting
* 1.2 Geometry of the domain \(\Omega\)
* 1.3 Motivation
* 1.4 Extensions and open problems.
* 1.5 Structure of the paper
* 2 Statement of the results
* 2.1 Sidewise observability
* 2.2 On the lack of sidewise observability
* 3 Some Geometric Facts, Operators and Measures
* 3.1 Geometry
* 3.2 Generalized bicharacteristic rays
* 3.3 Pseudo-differential operators
* 3.4 Microlocal defect measures
* 4 Preliminary results
* 4.1 A Geometric Lemma
* 4.2 First computations
* 5 Proof of Theorem 2.3
* 5.1 Relaxed observation and unique continuation
* 5.2 Proof of the relaxed observation
* 5.3 Properties of the measures
* 5.4 End of the proof of Theorem 2.3
* 6 Proof of Theorem 2.5
## 1. Introduction
### General setting
Let \(\Omega\) be a bounded open domain of \(\mathbb{R}^{n}\) with boundary \(\partial\Omega\) of class \(\mathcal{C}^{\infty}\). We set
\[\mathcal{L}=\mathbb{R}\times\Omega\quad\text{and}\quad\ \partial\mathcal{L}= \mathbb{R}\times\partial\Omega.\]
We also introduce \(A=(a_{ij}(x))\), an \(n\times n\) matrix of \(\mathcal{C}^{\infty}\) coefficients, symmetric and uniformly positive definite on a neighborhood of \(\Omega\).
Finally, we take \(g\in H^{1}(\partial\mathcal{L})\) and we assume that \(g\) is compactly supported in time in the interval \((0,+\infty)\).
We consider then the following wave system
\[\left\{\begin{array}{c}P_{A}u=\partial_{t}^{2}u-\sum_{i,j=1}^{n}\partial_{x _{j}}(a_{ij}(x)\partial_{x_{i}}u)=0\quad\text{in $\mathcal{L}$}\\ \\ u(t,.)=g(t,.)\quad\text{on $\partial\mathcal{L}$}\\ \\ u(0,.)=\partial_{t}u(0,.)=0\quad\text{in $\Omega$}.\end{array}\right. \tag{1.1}\]
This system is well posed in the classical energy space \(C^{0}(\mathbb{R},H^{1}(\Omega))\cap C^{1}(\mathbb{R},L^{2}(\Omega))\) equipped with the energy norm \(\sup_{t\in\mathbb{R}}Eu(t)\), where
\[Eu(t)=\|u(t,.)\|_{H^{1}(\Omega)}^{2}+\|\partial_{t}u(t,.)\|_{L^{2}(\Omega)}^{2},\]
and
\[\|u(t,.)\|_{H^{1}(\Omega)}^{2}=\sum_{i,j=1}^{n}\int_{\Omega}a_{ij}(x)\partial _{x_{i}}u\partial_{x_{j}}udx,\]
see [14]. Actually, the solution \(u\) vanishes for \(t\leq 0\).
More precisely, the following energy estimate holds
\[\sup_{t\in\mathbb{R}}Eu(t)\leq C||g||_{H^{1}(\partial\mathcal{L})}^{2}, \tag{1.2}\]
together with the added hidden regularity property of the trace of the normal derivative
\[\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}((0,a)\times\partial\Omega)}\leq C _{a}\|g\|_{H^{1}(\partial\mathcal{L})}, \tag{1.3}\]
valid for all \(a>0\).
**Remark 1.1**.: _The constants appearing in estimates (1.2) and (1.3) depend on the metric attached to \(A=(a_{ij}(x))_{ij}\), on the geometry of the domain \(\Omega\) and, for (1.3), also on the time-horizon \(a>0\)._
### Geometry of the domain \(\Omega\)
In this paper, we will deal with a particular class of domains \(\Omega\). This fact is made precise in the following condition.
**Assumption A1**
_We assume that there exists a strictly concave (with respect to the metric attached to the matrix \(A=(a_{ij}(x))_{ij}\)) open non empty subset \(O\) of the boundary \(\partial\Omega\), \(\overline{O}\neq\partial\Omega\)._
Geometrically, this guarantees that every geodesic of \(\Omega\) that is tangent to \(O\) at some point \(m_{0}\) has an order of tangency equal to \(1\); locally near this point, and except for \(m_{0}\) itself, this geodesic lives in \(\Omega\).
For instance, if \(A=Id\), this simply says that there exists a neighborhood \(V\) of \(O\) in \(\mathbb{R}^{n}\), such that the set \(V\setminus\Omega\) is strictly convex. See Fig.1.
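As a concrete illustration with \(A=Id\) (so that geodesics are straight lines), consider the annulus \(\Omega=\{x\in\mathbb{R}^{n},\,R_{1}<|x|<R_{2}\}\) of Fig.1 and take \(O\) inside the interior boundary \(\{|x|=R_{1}\}\). A line tangent to \(\{|x|=R_{1}\}\) at \(m_{0}\), parametrized by \(x(s)=m_{0}+sv\) with \(|v|=1\) and \(v\cdot m_{0}=0\), satisfies

\[|x(s)|^{2}=R_{1}^{2}+s^{2}>R_{1}^{2}\quad\text{for }s\neq 0,\]

so the tangency is exactly of order one and, for \(s\neq 0\) small, the geodesic lies in \(\Omega\); the interior boundary is thus strictly concave in the sense of assumption A1.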
**Remark 1.2**.:
1. _Assumption A1, implicitly, substantially limits the class of domains_ \(\Omega\) _under consideration. For example, this condition excludes convex domains_ \(\Omega\)_. Indeed, for subsets_ \(O\) _of the boundary of_ \(\Omega\) _to exist, so that they fulfil the assumption_ A1_, the geometry of_ \(\Omega\) _needs to allow for some concavity zones of its boundary, as illustrated in Figure_ 1_, and this excludes many domains_ \(\Omega\)_._
2. _In the literature, sets_ \(O\) _fulfilling assumption_ A1 _are sometimes said to be diffractive with respect to the metric attached to_ \(A=(a_{ij}(x))_{ij}\)_._
### Motivation
From now on, we will work under assumption A1. Let then \(O^{\prime}\) be a non empty open subset of \(\partial\Omega\) such that \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\). We set
\[\Gamma=\mathbb{R}\times O,\qquad\Gamma^{\prime}=\mathbb{R}\times O^{\prime},\]
and for \(a>0\),
\[\mathcal{L}_{a}=(0,a)\times\Omega,\quad\Gamma_{a}=(0,a)\times O\quad\text{and} \quad\Gamma^{\prime}_{a}=(0,a)\times O^{\prime}.\]
In addition, we assume throughout the whole paper that the boundary data \(g\) is supported in \(\overline{\Gamma}_{M}=[0,M]\times\overline{O}\) for some \(M>0\).
Figure 1. Examples of strictly concave boundary subset \(O\)
The aim of this paper is to analyze whether it is possible to observe the boundary data or source \(g\) in (1.1) from measurements done on the normal derivative \(\partial_{n}u_{|\Gamma^{\prime}}\) on the subset \(\Gamma^{\prime}\) of the boundary. In other words, we are seeking for an estimate of the type
\[\|g\|_{H^{1}(\Gamma_{M})}\leq C\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}( \Gamma^{\prime}_{a})}, \tag{1.4}\]
for some \(a\geq M\).
Estimate (1.4) is the sidewise observability inequality object of analysis in this paper.
According to the Rellich inequality it is well known that the right hand side term of (1.4) is bounded above by
\[\|u\|_{a}^{2}=:\sup_{t\in[0,a]}Eu(t)=\sup_{t\in[0,M]}Eu(t)=\|u\|_{M}^{2}.\]
More precisely, for every \(a>0\), there exists \(C_{a}>0\) such that every solution \(u\) of (1.1) satisfies
\[\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}(\Gamma^{\prime}_{a})}\leq C_{a}\|u \|_{M}. \tag{1.5}\]
Therefore, a necessary condition for an estimate of the form (1.4) to hold is that the boundary data \(g\) under consideration needs to be observable out of the total interior energy \(\|u\|_{M}\), namely, the existence of a constant \(C>0\) such that
\[\|g\|_{H^{1}(\Gamma_{M})}\leq C\|u\|_{M}. \tag{1.6}\]
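Indeed, if (1.4) holds, combining it with the hidden regularity estimate (1.5) yields

\[\|g\|_{H^{1}(\Gamma_{M})}\leq C\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}(\Gamma^{\prime}_{a})}\leq CC_{a}\|u\|_{M},\]

which is precisely (1.6).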
However, as we shall see, this inequality does not hold without additional structural conditions on the source term \(g\) under consideration. Indeed, in Theorem 2.5 and Theorem 7.1, we construct sequences of invisible sources \((g_{k})\) whose energy is essentially localized on the elliptic and/or glancing set of the boundary, such that
\[\|g_{k}\|_{H^{1}(\Gamma_{M})}\to 1,\quad g_{k}\rightharpoonup 0\quad\text{in} \;H^{1},\quad\|u_{k}\|_{M}\longrightarrow 0, \tag{1.7}\]
which, of course, are an impediment for (1.6) to occur.
In fact, as we shall see, even the weaker version
\[\|g\|_{H^{s}(\Gamma_{M})}\leq C\|\partial_{n}u_{|\partial\Omega}\|_{H^{1}( \Gamma^{\prime}_{a})} \tag{1.8}\]
Figure 2. Cylindrical domain where waves evolve. In green the support of the source \(g\) to be identified, and in red the subset of the boundary where measurements are done.
may not hold for any \(s\leq 1\).
The lack of such sidewise observability inequalities is genuinely a multi-d phenomenon (see Sections 6 and 7). On the contrary, as shown in [22] and [24] by means of sidewise energy estimates, in 1-d, inequality (1.6) holds for \(BV\) coefficients and under natural conditions on the length of the time-interval. Counterexamples generated by waves concentrated on the support of the source may not arise in 1-d since light rays hitting the boundary are only of hyperbolic type.
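As a purely illustrative computation in the constant coefficient case (and not the \(BV\) setting of [22, 24]), take \(\Omega=(0,1)\), \(A=1\), \(O=\{0\}\), \(O^{\prime}=\{1\}\), and \(g\) supported in \([0,M]\) with \(M\leq 1\). For \(0\leq t\leq 2\), the solution of (1.1) is the superposition of travelling waves

\[u(t,x)=g(t-x)-g(t+x-2)\]

(extending \(g\) by zero outside its support), so that \(|\partial_{x}u(t,1)|=2|g^{\prime}(t-1)|\) and

\[\|g^{\prime}\|_{L^{2}(0,M)}=\frac{1}{2}\,\|\partial_{n}u(\cdot,1)\|_{L^{2}(1,M+1)};\]

since \(g(0)=0\), this controls the full \(H^{1}\) norm of \(g\) and gives (1.4) in this elementary setting. No such explicit representation is available in several space dimensions, where glancing and elliptic boundary points enter the picture.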
Going back to the multi-d case under consideration, the lack of observability inequalities of the form (1.8) shows that, necessarily, an infinite number of derivatives may be lost on the measurement of the sources \(g\), and thus, one has to impose some added restrictions on them to prevent concentration phenomena like (1.7) (see the pseudo-differential condition in assumption A3 below).
Within this class of sources \(g\), the sidewise observability inequality (1.4) will be proved under a microlocal geometrical condition (see assumption A2 below), inspired by (but different from!) the Geometric Control Condition introduced in [3]. Roughly, it guarantees that all rays emanating from the support of the source reach the observation region without first bouncing back on the support of the source. This condition is sharp in terms of the geometry of the support of the sources \(O\) and the measurement subset \(O^{\prime}\), and also in what concerns the sidewise observability time.
### Extensions and open problems
The methods of this paper could be employed to handle other related problems such as:
* The simultaneous initial and boundary source sidewise observation. We refer to [24] for a complete analysis in 1-d.
* The problem treated in [4] where, on an annular domain \(\Omega=A(R_{1},R_{2})=\{x\in\mathbb{R}^{n},\,R_{1}<|x|<R_{2}\}\) of \(\mathbb{R}^{n}\), initial data are observed out of measurements on the exterior part of the boundary, under suitable conditions on the sources with support on the interior boundary.
Similar questions on the sidewise boundary observability and source identification are also of interest for other models such as, for instance, Schrodinger, plate and heat equations, the elasticity system and thermoelasticity, all of them rather well understood in the context of classical boundary control. But their analysis would require significant further developments.
### Structure of the paper
The paper is organized as follows. In Section 2 we state the main results, and Section 3 is devoted to presenting some preliminary results. Most of the tools presented here are classical and we recall them in order to standardize the notations and make the paper self-contained. We start with the geometrical setting and we present in particular the generalized bicharacteristic curves and the partition of the cotangent space of the boundary \(T^{*}\partial\mathcal{L}\). We also introduce the spaces of pseudo-differential symbols that will play the role of test functions on which we build the microlocal defect measures, of great importance in the proof. In Section 4, we present a geometric consequence of Assumption A2 and we perform a pseudo-differential multiplier calculus up to the boundary, in the spirit of [16], that will play a central role in the proof. Section 5 is mostly devoted to the proof of the main result, namely Theorem 2.3. In Section 6, we present the proof of Theorem 2.5, essentially based on the microlocal behavior of the solutions to (1.1). Finally, in Section 7, we present the proof of Theorem 7.1 where we construct an explicit sequence of boundary data \((g_{k})\) concentrating on the glancing set.
**Acknowledgements.** The authors thank Nicolas Burq for fruitful discussion and for indicating the classical approach of M.Taylor namely, the factorization of the wave symbol near elliptic points of the boundary. The authors also thank Nicola de Nitti for his help on designing and executing the figures of the paper.
The research of the first author was partially supported by the Tunisian Ministry for Higher Education and Scientific Research within the LR-99-ES20 program. The second author has been funded by the Alexander von Humboldt-Professorship program, the Transregio 154 Project "Mathematical Modelling, Simulation and Optimization Using the Example of Gas Networks" of the DFG, the ModConFlex Marie Curie Action, HORIZON-MSCA-2021-DN-01, the COST Action MAT-DYN-NET, grants PID2020-112617GB-C22 and TED2021-131390B-I00 of MINECO (Spain), and by the Madrid Government - UAM Agreement for the Excellence of the University Research Staff in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
## 2. Statement of the results
### Sidewise observability
Let \(\Omega\) be a domain of \(\mathbb{R}^{n}\), admissible in the sense of assumption A1, and let \(O\) be a strictly concave subset of the boundary \(\partial\Omega\). Consider also \(O^{\prime}\), a subset of \(\partial\Omega\) such that \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\). We start with the geometric condition we will impose on the pair \(\{O,O^{\prime}\}\).
First, we recall that given the cylinder \(\mathcal{L}=\mathbb{R}\times\Omega\) with \(\Omega\) of class \(\mathcal{C}^{\infty}\), we can define the Melrose-Sjostrand compressed cotangent bundle of \(\mathcal{L}\), \(T^{*}_{b}\mathcal{L}=T^{*}\mathcal{L}\cup T^{*}\partial\mathcal{L}\). In addition, the matrix \(A=(a_{ij}(x))\) being also of class \(\mathcal{C}^{\infty}\), we have a flow on \(T^{*}_{b}\mathcal{L}\), constituted of generalized bicharacteristic curves of the wave operator
\[P_{A}=\partial_{t}^{2}-\sum_{i,j=1}^{n}\partial_{x_{j}}(a_{ij}(x)\partial_{x_ {i}}),\]
the celebrated Melrose-Sjostrand flow (see [19]). We refer the reader to Section 3.2 for further details and precise definitions of these facts.
In particular, we recall the partition of the cotangent bundle of the boundary \(T^{*}\partial\mathcal{L}\) into elliptic, hyperbolic and glancing sets :
\[T^{*}\partial\mathcal{L}=\mathcal{E}\cup\mathcal{H}\cup\mathcal{G}. \tag{2.1}\]
Now, consider an open subset \(\mathcal{O}\) of \(\partial\Omega\), strictly concave in the sense of assumption A1, such that \(\overline{O}\subset\mathcal{O}\) and \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\). One can easily check that this is possible since A1 is an open condition.
**Assumption A2: SGCC**
We assume that there exists a time \(T_{0}>0\) such that every generalized bicharacteristic curve issued from the boundary \(\mathcal{O}\) at \(t=0\), intersects the boundary \(O^{\prime}\) at a strictly gliding point, without intersecting \(\overline{\Gamma}\), and before the time \(T_{0}\).
**Remark 2.1**.:
1. _The definition of strictly gliding point of the boundary will be given in Section_ 3.2_._
2. _The notation (SGCC) stands for sidewise geometric control condition. In what follows, we provide some precisions._
3. _Set_ \(\mathcal{U}=\mathbb{R}\times\mathcal{O}\)_. The generalized bicharacteristic curves issued from points of the boundary_ \(\mathcal{U}\) _are of two types and can be described through their projection onto the base, i.e. the_ \((t,x)-\)_space. On the one hand we have the curves that are transverse to_ \(\partial\mathcal{L}\): in
this case we have two hyperbolic fibers issued from the same hyperbolic point \(m_{0}\in\partial\mathcal{L}\), and at \(m_{0}\) we have a hyperbolic reflection. On the other hand, the curve may be tangent to \(\partial\mathcal{L}\) at \(m_{0}\) (first order tangency) and lie in \(\mathcal{L}=\mathbb{R}\times\Omega\) otherwise. In the latter case, the generalized bicharacteristic curve can be interpreted as a "free bicharacteristic curve", since it is an integral curve of the Hamiltonian field attached to the wave symbol (see Section 3.2). Condition (SGCC) requires each one of these curves starting from \(\mathcal{U}\) at \(t=0\) to intersect the boundary \(\Gamma^{\prime}\) at a strictly gliding point, without intersecting \(\overline{\Gamma}\), and before the time \(T_{0}\). In this sense, this condition is stronger than the classical (GCC) of Bardos, Lebeau and Rauch [3], which needs the rays to hit \(\partial\Omega\) at non diffractive points.
4. For instance if \(\gamma=\gamma(s)\) is a ray issued from \(\mathcal{U}\), we have \(\gamma(0)=\rho\in T_{b}^{*}\mathcal{L}_{|\mathcal{U}}\), \(\gamma(s_{0})=\rho_{1}\in T_{b}^{*}\mathcal{L}_{|\Gamma^{\prime}}\) for some \(s_{0}\in]0,T_{0}[\), where \(\rho_{1}\) is a strictly gliding point, and moreover \(\gamma(s)\notin T_{b}^{*}\mathcal{L}_{|\overline{\Gamma}}\) for \(0<s<s_{0}\). In particular we can allow \(\gamma(s)\) to live on the boundary, outside \(T_{b}^{*}\mathcal{L}_{|\overline{\Gamma}}\) for some values of \(s\in]0,s_{0}[\).
5. Notice that we don't make any assumption on the rays that don't intersect the open set \(\mathcal{U}\) of the boundary. From this point of view, (SGCC) is weaker than the classical condition (GCC).
6. Remark that if \(O\) is strictly convex, then obviously, (SGCC) cannot be satisfied ( see Fig.4). Therefore, assumption A1 seems to be a well adapted framework to set up the microlocal condition A2.
Finally, we introduce the last assumption, namely a boundary condition on the data \(g\). For this purpose, we recall that the lateral boundary \(\partial\mathcal{L}\) of the cylinder \(\mathcal{L}=\mathbb{R}\times\Omega\) is a submanifold of \(\mathbb{R}^{n+1}\), of dimension \(n\) and class \(\mathcal{C}^{\infty}\). We will denote by \((t,x^{\prime})=(t,x^{\prime}_{1},...,x^{\prime}_{n-1})\) a system of local coordinates on \(\partial\mathcal{L}\).
**Assumption A3: Boundary condition fulfilled by observable sources**
We assume one of the following conditions :
Figure 3. Bicharacteristic rays passing throw \(O\)
**A3.a** There exists a polyhomogeneous pseudo-differential operator \(B_{\alpha}=b_{\alpha}(t,x^{\prime};D_{t},D_{x^{\prime}})\) on \(\partial\mathcal{L}\), of order \(\alpha>0\), such that \(CharB_{\alpha}\subset\mathcal{H}\) and
\[b_{\alpha}(t,x^{\prime};D_{t},D_{x^{\prime}})g=0. \tag{2.2}\]
**A3.b** There exists a family of polyhomogeneous pseudo-differential operators \(c_{\alpha}(t,x^{\prime};D_{x^{\prime}})\) in the \(x^{\prime}\)-variable on \(\partial\mathcal{L}\), smooth with respect to \(t\) and elliptic of order \(\alpha>0\), such that
\[c_{\alpha}(t,x^{\prime};D_{x^{\prime}})g=0. \tag{2.3}\]
**A3.c** There exist \(\mathcal{U}_{M}\), an open neighborhood of \(\overline{\Gamma}_{M}\) in \(\partial\mathcal{L}\), \(\alpha>0\) and a constant \(C_{\alpha}>0\) such that, for every solution \(u\) of system (1.1), the boundary trace satisfies
\[\|(\partial_{n}u+\partial_{t}u)_{|\partial\mathcal{L}}\|_{H^{\alpha}( \mathcal{U}_{M})}\leq C_{\alpha}\|g\|_{H^{1}(\Gamma_{M})}. \tag{2.4}\]
**Remark 2.2**.: _For the definition of polyhomogeneous pseudo-differential operators on \(\partial\mathcal{L}\), see Section 3.3. In particular, we recall that the characteristic set of \(B_{\alpha}=b_{\alpha}(t,x^{\prime};D_{t},D_{x^{\prime}})\) is given by_
\[CharB_{\alpha}=\{(t,x^{\prime};\tau,\xi^{\prime})\in T^{*}\partial\mathcal{L}, \ \sigma(b_{\alpha})(t,x^{\prime};\tau,\xi^{\prime})=0\}\]
_where \(\sigma(b_{\alpha})\) is the principal symbol of \(B_{\alpha}\)._
We are now ready to state our main theorem.
**Theorem 2.3**.: _Under assumptions A1, A2 and A3, for every \(T>T_{0}\), there exists \(C>0\) such that every solution of (1.1), satisfies the observability estimate_
\[\|g\|_{H^{1}(\Gamma_{M})}\leq C\|\partial_{n}u_{|\Gamma^{\prime}}\|_{L^{2}( \Gamma_{M+T}^{\prime})}. \tag{2.5}\]
**Remark 2.4**.:
1. _In case assumption A3.a is satisfied, we can relax assumptions A1 and A2. Indeed, we may only assume the subset_ \(O\) _of the boundary_ \(\partial\Omega\) _to be concave and not necessarily strictly concave. In particular, it can be locally a hyperplane. In addition, we may assume A2 only for transverse (hyperbolic) rays._
2. _Condition A3.b ensures some a priori spatial regularity on the data_ \(g\)_, yielding microlocal regularity of_ \(g\) _near the elliptic and the glancing sets of the boundary. For instance, it is fulfilled if_ \(g\) _doesn't depend on the space variable_ \(x^{\prime}\)_, i.e_ \(g=g(t)\)_. In the same spirit, if we assume_ \[\|\nabla_{x^{\prime}}u_{|\partial\mathcal{L}}\|_{H^{\alpha}(\mathcal{U}_{M})} \leq C_{\alpha}\|g\|_{H^{1}(\Gamma_{M})},\] _for some_ \(\alpha>0\)_, we get the same positive conclusion, as a byproduct of the previous argument._
Figure 4. Convex boundary. In blue, a geodesic ray.
_._
3. _In Assumption A3.c, the open set_ \(\mathcal{U}_{M}\) _can be taken in the form_ \((-\varepsilon,M+\varepsilon)\times\mathcal{O}\)_, where_ \(\mathcal{O}\) _is an open neighborhood of_ \(\overline{O}\) _in_ \(\partial\Omega\)_. This condition can be interpreted as a conditional stability assumption. See for instance V. Isakov_ _[_13_]__._
4. _Obviously, the three conditions a), b) and c) of Assumption A3 are each of them sufficient and complementary. One could consider other assumptions guaranteeing the conclusion of Theorem_ 2.3_._
5. _In the setting of assumption A3.a, one can, for instance, consider the case where the boundary data_ \(g\) _is subject to a wave equation. With_ \(\chi=\chi(t,x)\in\mathcal{C}_{0}^{\infty}(\Gamma_{M})\)_, consider the system_ (2.6) \[\left\{\begin{array}{c}P_{A}u=\partial_{t}^{2}u-\sum_{i,j=1}^{n}\partial_{x_{j}}a_{ij}(x)\partial_{x_{i}}u=0\quad\text{in $\mathcal{L}$}\\ \\ u(t,.)=\chi(t,x)g(t,.)\quad\text{on $\partial\mathcal{L}$}\\ \\ P_{A}^{\prime}g=\partial_{t}^{2}g-\beta\sum_{i,j=1}^{n-1}\partial_{x_{j}^{\prime}}a_{ij}(x^{\prime},0)\partial_{x_{i}^{\prime}}g=0\quad\text{on $\partial\mathcal{L}$}\\ \\ u(0,.)=\partial_{t}u(0,.)=0\quad\text{on $\Omega$}\\ \\ g(0,.)=g_{0}\in H^{1}(\partial\mathcal{L}),\quad\text{and}\quad\partial_{t}g(0,.)=g_{1}\in L^{2}(\partial\mathcal{L})\end{array}\right.\] _where_ \(\beta>0\)_. One can easily check that assumption A3.a is fulfilled as soon as_ \(\beta>1\) _(a short verification is given right after this remark)._
_However, if \(\beta\leq 1\), the characteristic set of \(P_{A}^{\prime}\) is contained in the union \(\mathcal{E}\cup\mathcal{G}\) of the elliptic set and the glancing set. In this case, one can construct a sequence of sources \((g_{k})\) such that the corresponding sequence of solutions \((u_{k})\) to system (2.6) violates the observability estimate (2.5), with a loss of compactness located in \(\mathcal{E}\) or \(\mathcal{G}\), see Theorems 2.5 and 7.1._
6. _To summarize: even if, thanks to (SGCC), we can microlocally control the source_ \(g\) _near the hyperbolic set of_ \(\partial\mathcal{L}\)_, it may still develop singularities on the elliptic set, and/or singularities travelling along some characteristic curves of the glancing set. In fact, as we will see in the proof of Theorem_ 2.3_, the analysis on these sets requires special attention. Assumptions_ A3.a_, A3.b or A3.c _above are set to ensure additional regularity on_ \(g\) _that prevents the rise of such singularities._
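For the reader's convenience, here is the elementary computation behind item 5 above (in the notation of Section 3.2, where, on \(T^{*}\partial\mathcal{L}\), \(\mathcal{H}=\{r_{0}>0\}\), \(\mathcal{E}=\{r_{0}<0\}\) and \(\mathcal{G}=\{r_{0}=0\}\)). The principal symbol of \(P_{A}^{\prime}\) is \(-\tau^{2}+\beta\,{}^{t}\xi^{\prime}A^{\prime}(x^{\prime})\xi^{\prime}\), with \(A^{\prime}=(a_{ij}(x^{\prime},0))_{1\leq i,j\leq n-1}\), so that on its characteristic set \(\tau^{2}=\beta\,{}^{t}\xi^{\prime}A^{\prime}\xi^{\prime}\) and therefore

\[r_{0}=\tau^{2}-{}^{t}\xi^{\prime}A^{\prime}\xi^{\prime}=(\beta-1)\,{}^{t}\xi^{\prime}A^{\prime}\xi^{\prime}.\]

Since \({}^{t}\xi^{\prime}A^{\prime}\xi^{\prime}>0\) for \(\xi^{\prime}\neq 0\) (and \(\xi^{\prime}=0\) forces \(\tau=0\), which is excluded), \(CharP_{A}^{\prime}\) is contained in \(\mathcal{H}\) exactly when \(\beta>1\), and in \(\mathcal{E}\cup\mathcal{G}\) when \(\beta\leq 1\).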
### On the lack of sidewise observability
We present now the first theorem concerning the lack of observability, even in the weaker version (1.8). This negative result ensures a loss of an infinite number of derivatives for all possible geometric configurations. Here we do not need any of the geometric conditions A1 or A2, that is, we work on a general bounded and smooth domain \(\Omega\) and any partition of its boundary.
The proof of this theorem will be given in Section 6.
**Theorem 2.5**.: _For every \(s<1\), there exists a sequence of sources \((g_{k})_{k\geq 1}\subset H^{1}(\partial\mathcal{L})\) supported in \(\overline{\Gamma}_{M}\), such that the solutions \((u_{k})\) of system (1.1) satisfy_
\[\lim_{k\to\infty}\|g_{k}\|_{H^{s}(\Gamma_{M})}=1\quad\text{and}\quad\lim_{k \to\infty}\|\partial_{n}u_{k_{|\partial\Omega}}\|_{L^{2}(\Gamma_{M+T}^{\prime} )}=0, \tag{2.7}\]
_for every \(T>0\). In particular, the lack of compactness of the sequence \((g_{k})\) in \(H^{s}(\Gamma_{M})\) is located in the elliptic set \(\mathcal{E}\) of the boundary._
**Remark 2.6**.: _Actually, as we will see in the proof (cf. Section 6), we choose a sequence \((g_{k})\) supported in \(\overline{\Gamma}_{M}=[0,M]\times\overline{O}\) such that for some fixed \(\alpha>1\), \(\|g_{k}\|_{H^{\alpha}}\) is bounded outside the
elliptic set \(\mathcal{E}\) of the boundary. The propagation of the \(H^{\alpha}\)-wave front will then provide the desired result. In other words, the invisible sources are concentrated on the elliptic set \(\mathcal{E}\) of the boundary._
**Remark 2.7**.: _In view of Theorem 2.5, we can not expect the sidewise observability estimate (2.5) to hold, unless an infinite number of derivatives is lost. Therefore, in order to get sidewise observability estimates in Sobolev norms, structural conditions on the sources need to be imposed, such as those of assumption A3._
_Notice also that if we consider data microlocally concentrated on the glancing set of the boundary (compare to system (2.6) with \(\beta=1\)), we may observe a loss of 3 derivatives at least. Theorem 7.1 in Section 7 is devoted to this result. Notice however that the problem of proving sidewise observability with a loss of 3 or more derivatives for such sources is open._
**Remark 2.8**.: _To close this section and before going into the proofs, let us summarize the strategy one should follow to obtain sidewise observability for system (1.1)._
_First, we have to address the problem only on well designed domains \(\Omega\), i.e. those satisfying assumption A1. Secondly, we choose the measurement domain, i.e. a subset \(O^{\prime}\) of the boundary \(\partial\Omega\), \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\), as sharp as possible, such that (SGCC) is fulfilled. For instance, in the case of the annular domain (Fig. 1), if \(O\) is the interior boundary, then \(O^{\prime}\) is the exterior boundary. And finally, we make sure that the boundary source \(g\) we aim to observe is admissible, i.e. it satisfies some a priori condition in the spirit of condition A3, that prevents the presence of invisible solutions._
## 3. Some Geometric Facts, Operators and Measures
### Geometry
Near a point \(m_{0}\) of the boundary \(\partial\Omega\), taking advantage of the regularity of \(\Omega\), we can define a system of geodesic local coordinates \(x=(x_{1},x_{2},....,x_{n})\longrightarrow y=(y_{1},y_{2},....,y_{n})\) such that
\[\Omega=\{(y_{1},y_{2},....,y_{n}),\ y_{n}>0\},\quad\partial\Omega=\{(y_{1},y_ {2},....,y_{n-1},0)\}=\{(y^{\prime},0)\}\]
where the wave operator is given by
\[P_{A}=-\partial_{t}^{2}+\Big{(}\partial_{y_{n}}^{2}+\sum_{1\leq i,j\leq n-1} \partial_{y_{j}}b_{ij}(y)\partial_{y_{i}}\Big{)}+M_{0}(y)\partial_{y_{n}}+M_{ 1}(y,\partial_{y^{\prime}}).\]
Here, the matrix \((b_{ij}(y))_{ij}\) is of class \(\mathcal{C}^{\infty}\), symmetric and uniformly positive definite on a neighborhood of \(m_{0}\), \(M_{0}(y)\) is a real valued function of class \(\mathcal{C}^{\infty}\), and \(M_{1}(y,\partial_{y^{\prime}})\) is a tangential differential operator of order 1 with \(\mathcal{C}^{\infty}\) coefficients.
In the sequel, we will come back to the notation \((t,x)=(t,x^{\prime},x_{n})=(t,y^{\prime},y_{n})\), and we shall write
\[P_{A}=\partial_{n}^{2}+R(x_{n},x^{\prime},D_{x^{\prime},t})+M_{0}(x)\partial_ {n}+M_{1}(x,\partial_{x^{\prime}})\]
Notice that, in this coordinate system, the principal symbol of the wave operator \(P_{A}\) is given by
\[\sigma(P_{A})=-\xi_{n}^{2}+r(x,\tau,\xi^{\prime})=-\xi_{n}^{2}+\Big{(}\tau^{2 }-\sum_{1\leq i,j\leq n-1}a_{ij}(x)\xi_{i}\xi_{j}\Big{)}.\]
We shall set \(r_{0}(x^{\prime},\tau,\xi^{\prime})=r(x^{\prime},0,\tau,\xi^{\prime})\) and we denote \(m_{1}=m_{1}(x,\xi^{\prime})\) the symbol of the vector field \(M_{1}\).
### Generalized bicharacteristic rays
Let us introduce the compressed cotangent bundle of Melrose-Sjostrand \(T^{*}_{b}\mathcal{L}=T^{*}\mathcal{L}\cup T^{*}\partial\mathcal{L}\). We recall that we have a natural projection
\[\pi\ :\ T^{*}\mathbb{R}^{n+1}\mid_{\overline{\Omega}}\to T^{*}_{b}\mathcal{L} \tag{3.1}\]
and we equip \(T^{*}_{b}\mathcal{L}\) with the induced topology.
Given the matrix \(A(x)=(a_{ij}(x))\), we denote by \(p_{A}(x;\tau,\xi)=\tau^{2}-^{t}\,\xi A(x)\xi\), the principal symbol of the wave operator, and
\[\text{Char}(P_{A})=\{(t,x;\tau,\xi),p_{A}(x,\tau,\xi)=\tau^{2}-^{t}\,\xi A(x) \xi=0\},\]
the characteristic set, and \(\Sigma_{A}=\pi(Char(P_{A}))\). In addition, we recall the hamiltonian field associated to \(p_{A}\)
\[H_{p_{A}}=2\tau\partial_{t}-2^{t}\xi A(x)\partial_{x}+\sum_{k=1}^{n}{}^{t}\xi \partial_{x_{k}}A(x)\xi\partial_{\xi_{k}}.\]
Also, we recall the following partition of \(T^{*}(\partial\mathcal{L})\) into elliptic, hyperbolic and glancing sets:
\[\#\Big{\{}\pi^{-1}(\rho)\cap Char(P_{A})\Big{\}}=\left\{\begin{array}{ccc}0& if&\rho\in\mathcal{E}\\ 1&if&\rho\in\mathcal{G}\\ 2&if&\rho\in\mathcal{H}\end{array}\right. \tag{3.2}\]
For the sake of simplicity, we will develop the rest of this section in a system of local geodesic coordinates as introduced in section 3.1. We recall that we have locally
\[\mathcal{L}=\{(t,x)\in\mathbb{R}^{n+1},\,x_{n}>0\}\quad\text{and}\quad\partial \mathcal{L}=\{(t,x)\in\mathbb{R}^{n+1},\,x_{n}=0\}.\]
We also get :
\[\mathcal{E}=\{r_{0}<0\},\qquad\mathcal{H}=\{r_{0}>0\},\qquad\mathcal{G}=\{r_{ 0}=0\}.\]
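Indeed, for \(\rho=(t,x^{\prime},0;\tau,\xi^{\prime})\in T^{*}\partial\mathcal{L}\), the fiber \(\pi^{-1}(\rho)\cap Char(P_{A})\) consists of the points \((t,x^{\prime},0;\tau,\xi^{\prime},\xi_{n})\) with \(\xi_{n}^{2}=r_{0}(x^{\prime},\tau,\xi^{\prime})\), an equation which has zero, one or two solutions according to whether \(r_{0}<0\), \(r_{0}=0\) or \(r_{0}>0\), in agreement with (3.2).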
Notice that using the projection \(\pi\), one can identify the glancing set \(\mathcal{G}\) with a subset of \(T^{*}\mathbb{R}^{n+1}\).
**Definition 3.1**.:
1. _A point_ \(\rho\in T^{*}\partial\mathcal{L}\backslash 0\) _is nondiffractive if_ \(\rho\in\mathcal{H}\) _or if_ \(\rho\in\mathcal{G}\) _and the free bicharacteristic_ \((\exp sH_{p_{A}})\widetilde{\rho}\) _passes over the complement of_ \(\overline{\mathcal{L}}\) _for arbitrarily small values of_ \(s,\) _where_ \(\widetilde{\rho}\) _is the unique point in_ \(\pi^{-1}(\rho)\cap Char(P_{A})\)_._
2. \(\rho\in T^{*}\partial\mathcal{L}\backslash 0\) _is strictly gliding if_ \(\rho\in\mathcal{H}\) _or if_ \(\rho\in\mathcal{G}\) _and_ \(H^{2}_{p_{A}}(x_{n})(\rho)<0\)_._ _In the latter case, the projection on the_ \((t,x)-\)_space of the free bicharacteristic ray_ \(\gamma\) _issued from_ \(\rho\) _leaves the boundary_ \(\partial\mathcal{L}\) _and enters in_ \(T^{*}(\mathbb{R}^{n+1}\setminus\overline{\mathcal{L}})\) _at_ \(\widetilde{\rho}=\pi^{-1}(\rho)\)_._
3. \(\rho\in T^{*}\partial\mathcal{L}\backslash 0\) _is strictly diffractive if_ \(\rho\in\mathcal{G}\) _and_ \(H^{2}_{p_{A}}(x_{n})(\rho)>0\)_._ _This means that there exists_ \(\varepsilon>0\) _such that_ \((\exp sH_{p_{A}})\widetilde{\rho}\in T^{*}\mathcal{L}\) _for_ \(0<|s|<\varepsilon\)_._
**Definition 3.2**.: _We shall denote by \(\mathcal{G}_{d}\) the set of strictly diffractive points and by \(\mathcal{G}_{sg}\) the set of strictly gliding points._
**Remark 3.3**.:
1. _Under assumption A1, we notice that over_ \(\Gamma\)_, the glancing set_ \(\mathcal{G}\) _is reduced to_ \(\mathcal{G}_{d}\)_, i.e_ \[\mathcal{G}_{|\Gamma}\subset\mathcal{G}_{d}.\] _Namely all generalized bicharacteristic curves issued from points of_ \(\mathcal{G}_{|\Gamma}\) _have a first order tangency with the boundary._
2. _In local geodesic coordinates, the sets_ \(\mathcal{G}_{d}\) _and_ \(\mathcal{G}_{sg}\setminus\mathcal{H}\) _are given by_ (3.3) \[\mathcal{G}_{d}=\{\xi_{n}=r_{0}=0,\,\partial_{n}r_{|x_{n}=0}>0\},\qquad\text{ and}\qquad\mathcal{G}_{sg}\setminus\mathcal{H}=\{\xi_{n}=r_{0}=0,\,\partial_{n}r_{|x_{n}=0}<0\}.\]
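In these coordinates one can also check directly that, with \(p_{A}=-\xi_{n}^{2}+r(x,\tau,\xi^{\prime})\),

\[H_{p_{A}}x_{n}=\frac{\partial p_{A}}{\partial\xi_{n}}=-2\xi_{n},\qquad H_{p_{A}}^{2}x_{n}=-2H_{p_{A}}\xi_{n}=2\,\frac{\partial r}{\partial x_{n}},\]

so that, on \(\{\xi_{n}=r_{0}=0\}\), the sign of \(\partial_{n}r_{|x_{n}=0}\) is exactly the sign of \(H_{p_{A}}^{2}x_{n}\); this links (3.3) with the notions of strictly gliding and strictly diffractive points introduced in Definition 3.1.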
**Definition 3.4**.: _A generalized bicharacteristic ray is a continuous map_
\[\mathbb{R}\supset I\setminus B\ni s\mapsto\gamma(s)\in T^{*}\mathcal{L}\cup \mathcal{G}\subset T^{*}\mathbb{R}^{n+1}\]
_where \(I\) is an interval of \(\mathbb{R}\), \(B\) is a set of isolated points, for every \(s\in I\setminus B\), \(\gamma(s)\in\Sigma_{A}\) and \(\gamma\) is differentiable as a map with values in \(T^{*}\mathbb{R}^{n+1}\), and_
1. _If_ \(\gamma(s_{0})\in T^{*}\mathcal{L}\cup\mathcal{G}_{d}\) _then_ \(\dot{\gamma}(s)=H_{p_{A}}(\gamma)(s)\)_._
2. _If_ \(\gamma(s_{0})\in\mathcal{G}\setminus\mathcal{G}_{d}\) _then_ \(\dot{\gamma}(s_{0})=H_{p_{A}}^{G}(\gamma(s_{0}))\)_, where_ \(H_{p_{A}}^{G}=H_{p_{A}}+(H_{p_{A}}^{2}x_{n}/H_{x_{n}}^{2}p_{A})H_{x_{n}}\)_._
3. _For every_ \(s_{0}\in B\)_, the two limits_ \(\gamma(s_{0}\pm 0)\) _exist and are the two different points of the same hyperbolic fiber of the projection_ \(\pi\)_._
**Remark 3.5**.:
1. _We recall that if_ \(\Omega\) _has no contact of infinite order with its tangents, the Melrose-Sjostrand flow is globally well defined._
2. _In the interior, i.e. in \(T^{*}\mathcal{L}\), a generalized bicharacteristic is simply a classical bicharacteristic ray of the wave operator, whose projection onto the base is a geodesic of \(\Omega\) equipped with the metric \((a^{ij})=(a_{ij})^{-1}\) (see the explicit flat-case computation after this remark)._
3. _Finally,_ \(\gamma\) _can be considered as a continuous map on the interval_ \(I\) _with values in_ \(T^{*}_{b}\mathcal{L}\)_._
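To fix ideas, in the flat case \(A=Id\) one has \(p_{A}=\tau^{2}-|\xi|^{2}\) and \(H_{p_{A}}=2\tau\partial_{t}-2\xi\cdot\partial_{x}\), so that the interior bicharacteristics are simply

\[s\longmapsto(t_{0}+2\tau_{0}s,\;x_{0}-2\xi_{0}s;\;\tau_{0},\;\xi_{0}),\qquad\tau_{0}^{2}=|\xi_{0}|^{2},\]

whose projections on the \((t,x)\)-space are straight lines travelled at unit speed, since \(|dx/dt|=|\xi_{0}|/|\tau_{0}|=1\) on \(Char(P_{A})\); this is the normalization "travelling at speed one" used in Lemma 4.1 below.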
### Pseudo-differential operators
In this section, we introduce the classes of pseudo-differential operators we shall use in this paper. We start with the operators on the cylinder \(\mathcal{L}\).
Let \(\mathcal{A}\) be the set of pseudo-differential operators of the form \(Q=Q_{i}+Q_{\partial}\) where \(Q_{i}\) is a classical pseudo-differential operator, compactly supported in \(\mathcal{L}\) and \(Q_{\partial}\) is a classical tangential pseudo-differential operator, compactly supported near \(\partial\mathcal{L}\). More precisely, \(Q_{i}=\varphi Q_{i}\varphi\) for some \(\varphi\in\mathcal{C}_{0}^{\infty}(\mathcal{L})\) and \(Q_{\partial}=\psi Q_{\partial}\psi\) for some \(\psi(t,x_{n})\in\mathcal{C}^{\infty}(\mathbb{R}\times]-\alpha,\alpha[)\). \(\mathcal{A}^{s}\) will denote the elements of \(\mathcal{A}\) of order s.
On the other hand, the boundary \(\partial\mathcal{L}=\mathbb{R}\times\partial\Omega\) is a smooth manifold of dimension \(n\) without boundary. Following L.Hormander [12] and using a system of local charts, we can define for \(m\in\mathbb{R}\), the space of polyhomogeneous pseudo-differential operators \(\Psi^{m}_{phg}(\partial\mathcal{L})\) on \(\partial\mathcal{L}\), associated with symbols in \(S^{m}_{phg}(T^{*}\partial\mathcal{L})\). These operators enjoy all classical properties of continuity and composition.
### Microlocal defect measures
Here we use notations of section 3.2. Denote
\[\left\{\begin{array}{ll}Z=\pi(CharP_{A}),\qquad\hat{Z}=Z\cup\pi(T^{*} \overline{\mathcal{L}}_{|x_{n}=0}),\\ \\ SZ=(Z\setminus\overline{\mathcal{L}})/\mathbb{R}_{+}^{*},\qquad S\hat{Z}=(\hat {Z}\setminus\overline{\mathcal{L}})/\mathbb{R}_{+}^{*}.\end{array}\right.\]
and for \(Q\in\mathcal{A}^{0}\) with principal symbol \(\sigma(Q)=q\), set
\[\kappa(q)(\rho)=q(\pi^{-1}(\rho)).\]
We define also for \(u\in H^{1}(\mathcal{L})\)
\[\phi(Q,u)=(Qu,u)_{H^{1}}=\int_{\mathcal{L}}\Big{(}\nabla_{t,x}Qu.\nabla_{t,x} \overline{u}+Qu.\overline{u}\Big{)}dxdt.\]
Finally, let \((u_{k})\) be a sequence of functions weakly converging to \(0\) in \(H^{1}_{loc}(\mathcal{L})\). In [15] and [8], the authors prove the following result:
**Theorem 3.6** (Burq-Lebeau [8]).: _There exists a subsequence of \((u_{k})\) (still denoted by \((u_{k})\)) and a positive Radon measure \(\mu\) on \(S\hat{Z}\) such that_
\[\lim_{k\to\infty}\phi(Q,u_{k})=\langle\mu,\kappa(q)\rangle,\qquad\forall Q\in \mathcal{A}^{0}.\]
We will refer to \(\mu\) as a microlocal defect measure associated to the sequence \((u_{k})\).
On the other hand, on the boundary \(\partial\mathcal{L}\), we can make use of the classical notion of microlocal defect measure introduced by P. Gerard in [9]. More precisely, for every sequence of functions \((v_{k})\) weakly converging to \(0\) in \(H^{1}_{loc}(\partial\mathcal{L})\), there exists a positive Radon measure \(\tilde{\mu}\) on \(S^{*}(\partial\mathcal{L})\) such that we have, up to a subsequence
\[\lim_{k\to\infty}(Qv_{k},v_{k})_{L^{2}(\partial\mathcal{L})}=\langle\tilde{ \mu},|\eta|^{-2}\sigma(Q)\rangle,\qquad\forall Q\in\Psi^{2}_{phg}(\partial \mathcal{L}).\]
Here we have denoted by \((y,\eta)\) the standard element of \(T^{*}(\partial\mathcal{L})\setminus 0\).
We will remind the properties of these measures in some steps of the proof later, see Section 5.3.
## 4. Preliminary results
### A Geometric Lemma
Let \(O\) (resp. \(\mathcal{O}\)) be the open subset of \(\partial\Omega\) introduced in the statement of Assumption A1 (resp. A2 ), and set \(\mathcal{U}=\mathbb{R}\times\mathcal{O}\). Consider \(V\) a neighborhood of \(\overline{O}\) in \(\mathbb{R}^{n}\) such that \(V\cap\partial\Omega\subset\mathcal{O}\). \(\mathbb{R}\times V\) is an open neighborhood of \(\overline{\Gamma}=\mathbb{R}\times\overline{O}\) in \(\mathbb{R}^{n+1}\). In this setting \(W=\mathbb{R}\times(V\cap\Omega)=(\mathbb{R}\times V)\cap\mathcal{L}\) is an interior neighborhood of the boundary \(\overline{\Gamma}\) ( see Figure 5). On the other hand, consider \(\rho\in T^{*}W\cap Char(P_{A})\) and denote \(\gamma=\gamma(s)\) the generalized bicharacteristic issued from \(\rho\), i.e \(\gamma(0)=\rho\). In addition, we define by \(\gamma^{+}=\{\gamma(s),s>0\}\), resp. \(\gamma^{-}=\{\gamma(s),s<0\}\) the outcoming half bicharacteristic and the incoming half bicharacteristic at \(\rho\), see Figure 5.
**Lemma 4.1**.: _With the notations above and under assumptions A1 and A2, for every \(T>T_{0}\), there exists \(V\) neighborhood of \(\overline{O}\) in \(\mathbb{R}^{n}\), \(V\cap\partial\Omega\subset\mathcal{O}\), such that for every \(\rho\in T^{*}(W)\cap Char(P_{A})\), one of the two half bicharacteristics issued from \(\rho\), the outcoming one or the incoming one, travelling at speed one, intersects the boundary \(\Gamma^{\prime}\) at a strictly gliding point, without intersecting the boundary \(\overline{\Gamma}\), and before the time \(T\)._
_We will say that this half bicharacteristic satisfies (SGCC)._
Figure 5. On the left, an interior neighborhood of \(\Gamma\). On the right, tangent (black) and hyperbolic (blue) half bicharacteristic rays.
Proof.: For \(\rho\in T^{*}W\cap Char(P_{A})\), denote by \(\gamma_{\rho}=\{\gamma_{\rho}(s),s\in\mathbb{R}\}\) the generalized bicharacteristic issued from \(\rho\). In particular, \(\gamma_{\rho}(0)=\rho\). Assume that \(\gamma_{\rho}\) intersects \(\mathcal{U}\) for some value \(s_{1}<0\) at a hyperbolic or at a glancing point. According to assumption A2, we then get that for some \(s\in\mathbb{R}\) such that \(s-s_{1}<T_{0}\), \(\gamma_{\rho}(s)\) is a strictly gliding point of the boundary \(\Gamma^{\prime}\) and, in addition \(\{\gamma_{\rho}(s^{\prime}),s_{1}<s^{\prime}<s\}\cap T_{b}^{*}\mathcal{L}_{| \overline{\Gamma}}=\emptyset\). In this case, we see that the statement of Lemma 4.1 is satisfied by the outcoming half bicharacteristic issued from \(\rho\). Obviously, the case \(s_{1}>0\) can be treated in a similar way. According to this, we may only focus on the points \(\rho\) close to \(\overline{\Gamma}\) such that \(\gamma_{\rho}=\{\gamma_{\rho}(s),s\in\mathbb{R}\}\) doesn't intersect \(\overline{\Gamma}\) for \(s\in]-T_{0},T_{0}[\). In addition, due to the compactness of \(\overline{O}\), it suffices to prove that every glancing point \(\rho\in\mathcal{G}_{|\mathcal{U}}\subset T^{*}\partial\mathcal{L}_{|\mathcal{ U}}\) admits a neighborhood \(V_{\rho}\) in \(T^{*}(\mathbb{R}^{n+1})\) such that conclusion of Lemma 4.1 is valid for every \(\rho^{\prime}\in V_{\rho}\cap T^{*}\mathcal{L}\).
Before entering in the details of the proof, we warn the reader that if a generalized bicharacteristic \(\gamma_{\rho}\) hits the boundary transversally for some value \(s_{0}\), that is at a hyperbolic point, we will denote this point by \(\gamma_{\rho}(s_{0})\), by abuse of notation.
Consider then \(\rho\in\mathcal{G}_{|\mathcal{U}}\subset T^{*}\partial\mathcal{L}_{|\mathcal{U}}\) and let \(s_{0}\in]0,T_{0}[\) be a time such that the generalized bicharacteristic \(\gamma_{\rho}\) hits the boundary \(\Gamma^{\prime}\) at a strictly gliding point. Here we have two possibilities: a) \(\gamma_{\rho}(s_{0})\) is a hyperbolic point, or b) \(\gamma_{\rho}(s_{0})\) is a glancing strictly gliding point. We will discuss each one of these cases and, in order to simplify the argument, we will work in local geodesic coordinates.
* Case a) : \(\gamma_{\rho}(s_{0})\) is a hyperbolic point. With the notations of Definition 3.4, \(s_{0}\in B_{\rho}\) where \(B_{\rho}\) is a set of isolated points in \(\mathbb{R}\) such that the two limits \(\gamma_{\rho}(s_{0}\pm 0)\) exist and are the two different points of the same hyperbolic fiber of the projection \(\pi\). Furthermore, we have (4.1) \[H_{p_{A}}x_{n}(\gamma_{\rho}(s_{0}-0))=\frac{dx_{n}}{ds}(\gamma_{\rho}(s_{0}-0 ))=-2\xi_{n}(\gamma_{\rho}(s_{0}-0))<0.\] Consequently, for \(\varepsilon>0\) small enough, \(\gamma_{\rho}(s_{0}-\varepsilon)\) is an interior point, moreover, the \(x_{n}\) and \(\xi_{n}\)- coordinates satisfy (4.2) \[-2\xi_{n}(\gamma_{\rho}(s))=\frac{dx_{n}}{ds}(\gamma_{\rho}(s))\leq-c,\quad \forall s\in[s_{0}-\varepsilon,s_{0}[,\qquad\text{for some}\quad c>0.\] This yields (4.3) \[\xi_{n}(\gamma_{\rho}(s))\geq c/2,\quad\forall s\in[s_{0}-\varepsilon,s_{0}[.\] In addition, we may assume that \(0<x_{n}(\gamma_{\rho}(s_{0}-\varepsilon))<\eta\) for some \(\eta>0\) to be chosen later. Now we fix \(\varepsilon>0\). Taking into account the continuity of the Melrose-Sjostrand flow, it's clear that for \(0<\alpha<\frac{1}{4}x_{n}(\gamma_{\rho}(s_{0}-\varepsilon))\), one can find \(V_{\rho}\) a small enough neighborhood of \(\rho\) in \(T^{*}\mathbb{R}^{n+1}\), such that for all \(\rho^{\prime}\in V_{\rho}\cap T^{*}\mathcal{L}\cap Char(P_{A})\),
(4.4) \[|x_{n}(\gamma_{\rho}(s_{0}-\varepsilon))-x_{n}(\gamma_{\rho^{\prime}}(s_{0}- \varepsilon))|\leq\alpha,\] and (4.5) \[\xi_{n}(\gamma_{\rho^{\prime}}(s))\geq c^{\prime},\quad\forall s\in[s_{0}- \varepsilon,s_{0}[,\] for some \(c^{\prime}>0\). In particular, this means that \(\gamma_{\rho^{\prime}}(s_{0}-\varepsilon)\) is an interior point since
\[x_{n}(\gamma_{\rho^{\prime}}(s_{0}-\varepsilon))\geq\frac{3}{4}x_{n}(\gamma_{\rho }(s_{0}-\varepsilon))>0. \tag{4.6}\]
In addition, notice that estimate (4.5) is valid as long as \(x_{n}(\gamma_{\rho^{\prime}}(s))>0\), so possibly for \(s\in]s_{0}-\varepsilon,s_{0}+\beta[\), \(\beta>0\) small. Finally,
\[\left\{\begin{array}{c}x_{n}(\gamma_{\rho^{\prime}}(s))\leq x_{n}(\gamma_{ \rho^{\prime}}(s_{0}-\varepsilon))-2c^{\prime}(s-s_{0}+\varepsilon)\\ \\ \leq\frac{5}{4}x_{n}(\gamma_{\rho}(s_{0}-\varepsilon))-2c^{\prime}(s-s_{0}+ \varepsilon)\leq\frac{5}{4}\eta-2c^{\prime}(s-s_{0}+\varepsilon)\end{array}\right. \tag{4.7}\]
Consequently, we obtain that \(x_{n}(\gamma_{\rho^{\prime}}(s))\) vanishes for some \(s\leq s_{0}+\frac{5}{8c^{\prime}}\eta-\varepsilon\), which means that the bicharacteristic ray \(\gamma_{\rho^{\prime}}\) leaves \(\mathcal{L}\) at a hyperbolic point before the time \(T>T_{0}\), as soon as \(\frac{5}{8c^{\prime}}\eta-\varepsilon<T-T_{0}\).
* Case b) : \(\gamma_{\rho}(s_{0})\) is a glancing strictly gliding point. According to Definition 3.1, we know in this case that (4.8) \[x_{n}(\gamma_{\rho}(s_{0}))=r(\gamma_{\rho}(s_{0}))=0\qquad\text{and}\qquad \frac{\partial r}{\partial x_{n}}(\gamma_{\rho}(s_{0}))<0.\] Let then \(B(\gamma_{\rho}(s_{0}),\varepsilon)\) be the open ball of \(T^{*}\mathbb{R}^{n+1}\) with center \(\gamma_{\rho}(s_{0})\) and radius \(\varepsilon\). It's clear that for \(\varepsilon\) and \(c>0\) suitable, one has (4.9) \[\frac{\partial r}{\partial x_{n}}(\zeta)\leq-c,\qquad\forall\zeta\in B(\gamma _{\rho}(s_{0}),\varepsilon).\]
Moreover, for \(\eta\in]0,\varepsilon[\) small enough, using again the continuity of the Melrose-Sjostrand flow, we may find \(V_{\rho}\), a neighborhood of \(\rho\) in \(T^{*}\mathbb{R}^{n+1}\) such that for all \(\rho^{\prime}\in V_{\rho}\cap T^{*}\mathcal{L}\cap Char(P_{A})\),
\[\gamma_{\rho^{\prime}}(s_{0})\in B(\gamma_{\rho}(s_{0}),\eta). \tag{4.10}\]
In this setting, two cases may occur :
i) \(\gamma_{\rho^{\prime}}(s_{0})\) is a boundary point and necessarily \(r(\gamma_{\rho^{\prime}}(s_{0}))\geq 0\). If \(r(\gamma_{\rho^{\prime}}(s_{0}))>0\) then \(\gamma_{\rho^{\prime}}(s_{0})\) is a hyperbolic point. Otherwise, \(r(\gamma_{\rho^{\prime}}(s_{0}))=0\) and then it's a glancing strictly gliding point thanks to (4.9).
ii) \(\gamma_{\rho^{\prime}}(s_{0})\) is an interior point (see Figure 6 below ).
Figure 6. Strictly gliding points
In this case, using the Hamiltonian field \(H_{p_{A}}\), we get :
\[\frac{dx_{n}}{ds}(\gamma_{\rho^{\prime}}(s_{0}))=-2\xi_{n}(\gamma_{\rho^{\prime}} (s_{0}))\leq 2\eta. \tag{4.11}\]
Thus, if we denote in short \(x_{n}(s)=x_{n}(\gamma_{\rho^{\prime}}(s))\), we can perform a Taylor expansion and get in vue of (4.9) :
\[\left\{\begin{array}{c}x_{n}(s)=x_{n}(s_{0})+\frac{dx_{n}}{ds}(s_{0})(s-s_{0} )+\frac{1}{2}\frac{d^{2}x_{n}}{ds^{2}}(s_{0})(s-s_{0})^{2}+o(s-s_{0})^{2}\\ \leq\eta+2\eta(s-s_{0})-c(s-s_{0})^{2}+o(s-s_{0})^{2}.\end{array}\right. \tag{4.12}\]
Similarly, we obtain for the \(\xi_{n}\) - component of \(\gamma_{\rho^{\prime}}(s)\) :
\[\left\{\begin{array}{c}\xi_{n}(s)=\xi_{n}(s_{0})+\frac{d\xi_{n}}{ds}(s_{0}) (s-s_{0})+o(s-s_{0})\\ \geq-\eta+c(s-s_{0})+o(s-s_{0})\end{array}\right. \tag{4.13}\]
From (4.12) we deduce that \(\gamma_{\rho^{\prime}}(s)\) intersects the boundary before the time \(s_{1}\) such that \(s_{1}-s_{0}\approx\frac{1}{\sqrt{c}}\eta^{1/2}\). Furthermore, we conclude from (4.13) that \(\xi_{n}(s)\geq\frac{\sqrt{c}}{2}\eta^{1/2}\) for \(s\) close to \(s_{1}\), which means that \(\gamma_{\rho^{\prime}}(s_{1})\) is a hyperbolic point of the boundary \(\Gamma^{\prime}\). Finally, we finish the argument by taking \(\eta>0\) such that \(\frac{1}{\sqrt{c}}\eta^{1/2}<T-T_{0}\).
The proof of Lemma 4.1 is now complete.
### First computations
We consider a family of pseudo-differential symbols in the class \(\mathcal{A}^{0}\) introduced in section 3.3 above, tangential and classical. Since the result we seek is of local nature, we work in a system of geodesic coordinates near the boundary \(\partial\mathcal{L}\) and choose these symbols in the form \(q=q(x_{n},x^{\prime},t,\xi^{\prime},\tau)\), and of class \(C^{\infty}\) with respect to \(x_{n}\), real valued, compactly supported in \((t,x^{\prime},x_{n})\), and independent of \(x_{n}\) in a strip \(\{|x_{n}|<\beta\}\), \(\beta>0\) small enough. For instance, one may take \(q\) in the form \(q(x_{n},x^{\prime},t,\xi^{\prime},\tau)=\varphi(x_{n})\tilde{q}(x^{\prime},t, \xi^{\prime},\tau)\), with \(\varphi\in\mathcal{C}^{\infty}_{0}(\mathbb{R})\), equal to \(1\) near \(x_{n}=0\). We shall denote by \(Q=Q(x_{n},x^{\prime},t,D_{x^{\prime},t})\) the corresponding tangential pseudo-differential operators.
In the proof of Theorem 2.3, we will make successive choices of symbols \(q\).
We recall that in the system of local geodesic coordinates, the wave equation takes the form
\[\partial_{n}^{2}u+R(x_{n},x^{\prime},D_{x^{\prime},t})u+M_{0}(x)\partial_{n}u+ M_{1}(x,\partial_{x^{\prime}})u=0. \tag{4.14}\]
We multiply the equation by \(Q^{2}\partial_{n}\overline{u}\) and we integrate over \(\mathcal{L}\).
\[\left\{\begin{array}{c}I_{1}=\int_{\mathcal{L}}\partial_{n}^{2}u\,Q^{2} \partial_{n}\overline{u}=-\int_{\partial\mathcal{L}}\partial_{n}u\,Q^{2} \partial_{n}\overline{u}\,d\sigma-\int_{\mathcal{L}}\partial_{n}u\partial_{n} \,Q^{2}\partial_{n}\overline{u}\\ =-\int_{\partial\mathcal{L}}\partial_{n}u\,Q^{2}\partial_{n} \overline{u}\,d\sigma-\int_{\mathcal{L}}\partial_{n}u[\partial_{n},\,Q^{2}] \partial_{n}\overline{u}-\int_{\mathcal{L}}\partial_{n}uQ^{2}\partial_{n}^{2} \overline{u}\\ =-\int_{\partial\mathcal{L}}\partial_{n}u\,Q^{2}\partial_{n} \overline{u}\,d\sigma-\int_{\mathcal{L}}\partial_{n}u[\partial_{n},\,Q^{2}] \partial_{n}\overline{u}-\int_{\mathcal{L}}Q^{2}\partial_{n}u\partial_{n}^{2} \overline{u}+\int_{\mathcal{L}}(Q^{2}-Q^{*2})\partial_{n}u\partial_{n}^{2} \overline{u}\\ =-\int_{\partial\mathcal{L}}\partial_{n}u\,Q^{2}\partial_{n} \overline{u}\,d\sigma-\int_{\mathcal{L}}\partial_{n}u[\partial_{n},\,Q^{2}] \partial_{n}\overline{u}-\int_{\mathcal{L}}Q^{2}\partial_{n}u\partial_{n}^{2} \overline{u}\\ -\int_{\mathcal{L}}(Q^{2}-Q^{*2})\partial_{n}uR\overline{u}-\int_{ \mathcal{L}}M_{0}(Q^{2}-Q^{*2})\partial_{n}u\partial_{n}\overline{u}-\int_{ \mathcal{L}}(Q^{2}-Q^{*2})\partial_{n}uM_{1}\overline{u}\end{array}\right. \tag{4.15}\]
\[I_{2}=\int_{\mathcal{L}}Ru\,Q^{2}\partial_{n}\overline{u}=\int_{\mathcal{L}}Ru\,[Q ^{2},\partial_{n}]\overline{u}+\int_{\mathcal{L}}Ru\,\partial_{n}Q^{2}\overline{u} \tag{4.16}\]
\[=-\int_{\partial\mathcal{L}}Ru\,Q^{2}\overline{u}d\sigma-\int_{\mathcal{L}}( \partial_{n}R)u\,Q^{2}\overline{u}-\int_{\mathcal{L}}\partial_{n}u\,R^{*}Q^{2} \overline{u}-\int_{\mathcal{L}}Ru\,[\partial_{n},\,Q^{2}]\overline{u}\]
\[=-\int_{\partial\mathcal{L}}Ru\,Q^{2}\overline{u}d\sigma-\int_{\mathcal{L}}( \partial_{n}R)u\,Q^{2}\overline{u}-\int_{\mathcal{L}}\partial_{n}u\,[R^{*},Q^{ 2}]\overline{u}-\int_{\mathcal{L}}Ru\,[\partial_{n},\,Q^{2}]\overline{u}\]
\[=-\int_{\partial\mathcal{L}}Ru\,Q^{2}\overline{u}d\sigma-\int_{\mathcal{L}}( \partial_{n}R)u\,Q^{2}\overline{u}-\int_{\mathcal{L}}\partial_{n}u\,[R^{*},Q^ {2}]\overline{u}-\int_{\mathcal{L}}Q^{2}\partial_{n}u\,R\overline{u}\]
\[-\int_{\mathcal{L}}(Q^{*2}-Q^{2})\partial_{n}u\,R\overline{u}-\int_{\mathcal{L}}\partial_{n}u\,Q^{2}(R^{*}-R)\overline{u}-\int_{\mathcal{L}}Ru\,[\partial_{n},\,Q^{2}]\overline{u}.\]
Setting \(f=M_{0}(x)\partial_{n}u+M_{1}(x,\partial_{x^{\prime}})u\) and summarizing all the computations above, we obtain
\[\int_{\partial\mathcal{L}}\partial_{n}uQ^{2}\partial_{n}\overline{u}\,d\sigma +\int_{\partial\mathcal{L}}Ru\,Q^{2}\,\overline{u}\,d\sigma+\int_{\mathcal{L}} (\partial_{n}R)u\,Q^{2}\,\,\overline{u}=2\,\mathrm{Re}\int_{\mathcal{L}}fQ^{2 }\partial_{n}\overline{u}-\sum_{j=1}^{8}A_{j}. \tag{4.17}\]
We have \(\int_{\mathcal{L}}fQ^{2}\partial_{n}\overline{u}=\int_{\mathcal{L}}M_{0} \partial_{n}uQ^{2}\partial_{n}\overline{u}+\int_{\mathcal{L}}M_{1}uQ^{2} \partial_{n}\overline{u}\). The first term of the sum reads
\[\int_{\mathcal{L}}M_{0}\partial_{n}uQ^{2}\partial_{n}\overline{u}=-\int_{\partial\mathcal{L}}M_{0}uQ^{2}\partial_{n}\overline{u}\,d\sigma-\int_{\mathcal{L}}(\partial_{n}M_{0})uQ^{2}\partial_{n}\overline{u}-\int_{\mathcal{L}}M_{0}u[\partial_{n},Q^{2}]\partial_{n}\overline{u}-\int_{\mathcal{L}}M_{0}uQ^{2}\partial_{n}^{2}\overline{u} \tag{4.18}\]
\[=-\int_{\partial\mathcal{L}}M_{0}uQ^{2}\partial_{n}\overline{u}\,d\sigma-\int _{\mathcal{L}}(\partial_{n}M_{0})uQ^{2}\partial_{n}\overline{u}-\int_{\mathcal{ L}}M_{0}u[\partial_{n},Q^{2}]\partial_{n}\overline{u}+\int_{\mathcal{L}}M_{0}uQ^{2}R \overline{u}+\int_{\mathcal{L}}M_{0}uQ^{2}\overline{f}.\]
Finally we obtain
\[\int_{\partial\mathcal{L}}\partial_{n}uQ^{2}\partial_{n}\overline{u}\,d\sigma +\int_{\partial\mathcal{L}}Ru\,Q^{2}\,\overline{u}\,d\sigma+\int_{\mathcal{L }}u\,Q^{2}\,(\partial_{n}R)\,\overline{u}=\sum_{j=1}^{14}A_{j} \tag{4.19}\]
**Remark 4.2**.: _In fact, we will see later that the remaining terms \(A_{j}\) for \(j=1,...,14\), as described below, do not play a role in our arguments, see Corollary 5.7 and Lemma 5.12._
\[\left\{\begin{array}{l}A_{1}=\int_{\mathcal{L}}\partial_{n}u[\partial_{n}, \,Q^{2}]\partial_{n}\overline{u},\quad A_{2}=-\int_{\mathcal{L}}\partial_{n}u (Q^{*2}-Q^{2})R\overline{u},\quad A_{3}=\int_{\mathcal{L}}(Q^{2}-Q^{*2}) \partial_{n}uM_{0}\partial_{n}u,\\ A_{4}=\int_{\mathcal{L}}(Q^{2}-Q^{*2})\partial_{n}uM_{1}u,\quad A_{5}=\int_{ \mathcal{L}}\partial_{n}u\,[R^{*},Q^{2}]\overline{u},\quad A_{6}=\int_{ \mathcal{L}}(Q^{*2}-Q^{2})\partial_{n}u\,R\overline{u}\\ A_{7}=\int_{\mathcal{L}}\partial_{n}u\,Q^{2}(R^{*}-R)\overline{u},\quad A_{8}=2 \,\mathrm{Re}\int_{\mathcal{L}}(\partial_{n}M_{0})uQ^{2}\partial_{n} \overline{u},\quad A_{9}=2\,\mathrm{Re}\int_{\partial\mathcal{L}}M_{0}uQ^{2} \partial_{n}\overline{u}\,d\sigma,\\ A_{10}=2\,\mathrm{Re}\int_{\mathcal{L}}M_{0}u[\partial_{n},Q^{2}]\partial_{n} \overline{u},\quad A_{11}=-2\,\mathrm{Re}\int_{\mathcal{L}}M_{0}uQ^{2}R \overline{u},\quad A_{12}=-2\,\mathrm{Re}\int_{\mathcal{L}}M_{0}uQ^{2} \overline{f},\\ A_{13}=-2\,\mathrm{Re}\int_{\mathcal{L}}M_{1}uQ^{2}\partial_{n}\overline{u},\quad A _{14}=\int_{\mathcal{L}}Ru\,[\partial_{n},\,Q^{2}]\overline{u}\end{array}\right. \tag{4.20}\]
## 5. Proof of Theorem 2.3
The proof relies on a classical strategy. We first establish a relaxed observability estimate, then we drop the compact term with the help of a unique continuation argument.
### Relaxed observation and unique continuation
**Proposition 5.1**.: _Under assumptions A1, A2 and A3, for every \(T>T_{0}\), there exists \(c>0\) such that for every \(g\in H^{1}(\partial\mathcal{L})\), \(supp(g)\subset\overline{\Gamma}_{M}\), the solution \(u\) of (1.1), satisfies the observability estimate_
\[\|g\|_{H^{1}(\Gamma_{M})}\leq c\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}( \Gamma_{M+T}^{\prime})}+c\|g\|_{L^{2}(\Gamma_{M})}. \tag{5.1}\]
Also, we will need the following uniqueness result.
**Lemma 5.2**.: _Assume that estimate (5.1) holds true for all \(T>T_{0}\). Then for \(g\in H^{1}(\partial\mathcal{L})\) with \(supp(g)\subset\overline{\Gamma}_{M}\), if the solution \(u\) to system (1.1) satisfies \(\partial_{n}u_{|\partial\Omega}\equiv 0\) on \(\Gamma_{M+T}^{\prime}\), then \(u\) vanishes identically. In particular, \(g\equiv 0\)._
The proof of Lemma 5.2 is given at the end of this section and the proof of Proposition 5.1 will be the purpose of Section 5.2. Here, we first show how we can conclude the proof of Theorem 2.3 using these results.
For this, we use a contradiction argument. Assume that estimate (2.5) is false and consider a sequence of boundary data \((g_{k})\subset H^{1}(\partial\mathcal{L})\), \(supp(g_{k})\subset\overline{\Gamma}_{M}\), and \((u_{k})\) the sequence of associated solutions, with
\[\|\partial_{n}u_{k|\partial\Omega}\|_{L^{2}(\Gamma_{M+T}^{\prime})}<\frac{1}{ k}\|g_{k}\|_{H^{1}(\Gamma)}. \tag{5.2}\]
The sequence \(v_{k}=\|g_{k}\|_{H^{1}(\Gamma)}^{-1}u_{k}\) then satisfies
\[\left\{\begin{array}{c}P_{A}v_{k}=0,\quad v_{k|\Gamma^{\prime}}=0,\quad\|v_{ k|\partial\Omega}\|_{H^{1}(\Gamma)}=1,\text{and}\quad\|\partial_{n}v_{k|\partial \Omega}\|_{L^{2}(\Gamma_{M+T}^{\prime})}<\frac{1}{k}\,.\end{array}\right. \tag{5.3}\]
The sequence \((v_{k})\) is bounded in the energy space \(C^{0}((0,M+T),H^{1}(\Omega))\cap C^{1}((0,M+T),L^{2}(\Omega))\) according to (1.2), thus we may assume that it converges weakly in the cylinder \(\mathcal{L}_{M+T}\) to some function \(v\in H^{1}(\mathcal{L}_{M+T})\).
In the same way, we assume that the sequence \(\tilde{g}_{k}=v_{k|\partial\Omega}\) weakly converges to some \(\tilde{g}\) in \(H^{1}(\Gamma)\), with \(supp(\tilde{g})\subset\overline{\Gamma}_{M}\). Passing then to the limit \(k\to\infty\) in (5.3), we obtain
\[P_{A}v=0,\quad v_{|\partial\Omega}=\tilde{g},\quad\text{and}\quad\partial_{n} v_{|\partial\Omega}=0\quad\text{on}\,\,\Gamma_{M+T}^{\prime}. \tag{5.4}\]
The unique continuation result of Lemma 5.2 then gives that the weak limits \(v\) and \(\tilde{g}\) vanish identically. Coming back to Proposition 5.1 and plugging \(v_{k}\) and \(\tilde{g}_{k}\) into estimate (5.1), we get the contradiction
\[1\leq c\|\tilde{g}_{k}\|_{L^{2}(\Gamma_{M})}\longrightarrow 0\quad\text{as} \quad k\to\infty\]
thanks to the compact imbedding of \(H^{1}(\Gamma_{M})\) into \(L^{2}(\Gamma_{M})\).
Proof of the unique continuation.: The proof is based on a classical argument of functional analysis. For \(a\geq 0\) and \(g\in H^{1}(\partial\mathcal{L})\) with \(supp(g)\subset\overline{\Gamma}_{M}^{a}=:[-a,M]\times\overline{O}\), consider the system
\[\left\{\begin{array}{c}P_{A}u=\partial_{t}^{2}u-\sum_{i,j=1}^{n}\partial_{x_ {j}}(a_{ij}(x)\partial_{x_{i}}u)=0\quad\text{in}\,\,\mathcal{L}\\ u(t,.)=g(t,.)\quad\text{on}\,\,\partial\mathcal{L}\\ u(-a,.)=\partial_{t}u(-a,.)=0\quad\text{in}\,\,\Omega.\end{array}\right. \tag{5.5}\]
Clearly, the solutions of (5.5) satisfy a relaxed observability estimate similar to (5.1), namely
\[\|g\|_{H^{1}(\Gamma_{M}^{a})}\leq c\|\partial_{n}u_{|\partial\Omega}\|_{L^{2}( \Gamma_{M+T}^{\prime a})}+c\|g\|_{L^{2}(\Gamma_{M}^{a})}. \tag{5.6}\]
for any \(T>T_{0}\) and some \(c>0\). Here we have denoted \(\Gamma_{M}^{a}=(-a,M)\times O\) and \(\Gamma_{M+T}^{\prime a}=(-a,M+T)\times O^{\prime}\).
Let us introduce the set
\[\mathcal{N}_{a}(T)=\Big{\{}g\in H^{1}(\partial\mathcal{L}),\ supp(g)\subset\overline{\Gamma}_{M}^{a},\ u=u(g)\ \text{solves (5.5) and}\ \partial_{n}u_{|\partial\Omega}\equiv 0\ \text{on}\ \Gamma_{M+T}^{\prime a}\Big{\}}. \tag{5.7}\]
\((v_{k})\) is bounded in \(H^{1}(\mathcal{L}_{T})\) and \((v_{k|\partial\Omega})\) is bounded in \(H^{1}(\Gamma_{M})\). Therefore we may assume that \((v_{k})\) weakly converges to some \(v\) in \(H^{1}(\mathcal{L}_{T})\) and \((v_{k|\partial\Omega})\) weakly converges to some \(\tilde{g}\) in \(H^{1}(\Gamma_{M})\). Equations (5.9) then provides
\[P_{A}v=0,\quad v_{|\partial\Omega}=\tilde{g},\quad\text{and}\quad\partial_{n}v _{|\partial\Omega}=0, \tag{5.10}\]
and Lemma 5.2 implies that \(v\) and \(v_{|\partial\Omega}=\tilde{g}\) vanish identically. Thus, the weak limits are both equal to \(0\).
Our goal will be to prove that, in the contradiction setting assumed above, the sequence \((v_{k|\partial\Omega})\) strongly converges to \(0\) in \(H^{1}(\Gamma)\), which is impossible since \(\|v_{k|\partial\Omega}\|_{H^{1}(\Gamma_{M})}=1\) according to (5.9).
For this purpose, we make use of a classical strategy. Following Burq-Lebeau [8], and coming back to the notation \(u_{k}\) instead of \(v_{k}\), we attach to \((u_{k})\) a microlocal defect measure in \(H^{1}(\mathcal{L}_{M+T})\) denoted by \(\mu\).
Also, we attach to \((g_{k})\) a microlocal defect measure on the boundary, in \(H^{1}(\partial\mathcal{L})\), denoted by \(\tilde{\mu}\). Finally, the sequence \(\partial_{n}u_{k|\partial\Omega}\) weakly converges to \(0\) in \(L^{2}_{loc}(\partial\mathcal{L})\). So we attach to it a microlocal defect measure in \(L^{2}_{loc}(\partial\mathcal{L})\) denoted by \(\nu\).
Notice that in the contradiction setting of (5.9), the measure \(\nu\) vanishes identically over \(\Gamma^{\prime}_{M+T}\).
Finally, we will prove in several steps that, in the contradiction setting assumed above, the measure \(\tilde{\mu}\) vanishes identically on \(\Gamma_{M}\). Notice that the intermediate results proved below are all established within this contradiction setting, without explicitly referring to it.
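As a reminder (up to normalization, and this is the convention we assume throughout), the defining property of a microlocal defect measure in the \(L^{2}\) setting is the following: if \((w_{k})\) is bounded in \(L^{2}_{loc}\) and \(w_{k}\rightharpoonup 0\), then after extraction of a subsequence there exists a positive Radon measure \(\nu\) on the cosphere bundle such that

\[\lim_{k\to\infty}\Big{(}Op(c)\,w_{k}\,|\,w_{k}\Big{)}_{L^{2}}=\Big{\langle}\nu,\,c\Big{\rangle}\]

for every \(0\)-order polyhomogeneous symbol \(c\) with compact support; this is how \(\nu\) is attached to \(\partial_{n}u_{k|\partial\Omega}\). The measures \(\mu\) and \(\tilde{\mu}\) are the \(H^{1}\) analogues: they pair order-two test operators against their principal symbol divided by the natural weight (\(\lambda^{2}\) in the interior, \(|(\tau,\xi^{\prime})|^{2}\) on the boundary), as made precise in Lemma 5.9 below.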
### Properties of the measures
In the sequel we consider \(W\) an interior neighborhood of the boundary \(\overline{\Gamma}\) as introduced in Section 4.1. We recall that \(W=\mathbb{R}\times(V\cap\Omega)=(\mathbb{R}\times V)\cap\mathcal{L}\) where \(V\) is an open subset of \(\mathbb{R}^{n}\), neighborhood of the spatial boundary \(O\subset\partial\Omega\). We set
\[W^{\partial}=(\mathbb{R}\times V)\cap\partial\mathcal{L}. \tag{5.11}\]
In addition, for \(J\) an open interval of \(\mathbb{R}\) such that \([0,M]\subset J\), we denote
\[W_{J}=\{(t,x)\in W,\ t\in J\}\quad\text{and}\quad W_{J}^{\partial}=\{(t,x) \in W^{\partial},\ t\in J\}. \tag{5.12}\]
The neighborhood \(W\) and the interval \(J\) will be fixed in the next Proposition.
**Proposition 5.3**.: _Under assumptions A1 and A2, for every \(T>T_{0}\), there exist \(W\) and \(J\) as above such that the measure \(\mu\) vanishes identically near any interior point of \(W_{J}\)._
Proof.: Consider \(T>T_{0}\). We take the interior neighborhood \(W\) of \(\Gamma\) satisfying the conclusion of Lemma 4.1 with \(\frac{T+T_{0}}{2}\). In addition, we choose \(J=]-\alpha,M+\alpha[\), where \(0<\alpha<\frac{T-T_{0}}{2}\). We then prove that \(\rho\notin\operatorname{supp}(\mu)\) for all \(\rho\in T^{*}W_{J}\). This fact is obvious if \(\rho\) is an elliptic point, thanks to the classical property of microlocal elliptic regularity. If \(\rho\in Char(P_{A})\), let \(\gamma=\gamma(s)\) be the generalized half bicharacteristic starting at \(\rho\) and satisfying (SGCC). We know that for some \(s_{0}\) (say \(0<s_{0}<\frac{T+T_{0}}{2}\)), \(\gamma(s_{0})=(t_{0},x_{0},\tau_{0},\xi_{0})\) is a strictly gliding point of the boundary \(\Gamma^{\prime}_{M+T}\). Consider \(U_{0}\) a small neighborhood of \((t_{0},x_{0})\) in \(\mathbb{R}^{n+1}\) and denote by \(\underline{u}_{k}\) the canonical extension of \(u_{k}\) to \(\mathbb{R}^{n+1}\), i.e \(\underline{u}_{k}=u_{k}\) in \(\mathcal{L}\) and \(\underline{u}_{k}=0\) elsewhere. We have
\[\left\{\begin{array}{c}\underline{u}_{k}\rightharpoonup 0\quad\text{in }H^{1}(U_{0}) \quad\text{weakly}\\ \\ u_{k|\partial\Omega}=0\quad\text{on }U_{0}\cap\partial\mathcal{L}\quad\text{and }\partial_{n}u_{k|\partial\Omega}\longrightarrow 0\quad\text{on }U_{0}\cap\partial\mathcal{L}\quad\text{strongly }.\end{array}\right. \tag{5.13}\]
According to the lifting lemma of Bardos, Lebeau and Rauch [3, Theorem 2.2] or Burq [5, Lemme 2.2], we know that \(\underline{u}_{k}\) converges strongly to \(0\) in \(H^{1}\) microlocally at \(\gamma(s_{0})\). We then deduce that \(\gamma(s_{0})\notin supp(\mu)\) thanks to the work of Aloui [2, Lemme 3.1]. Now, according to (SGCC), for \(0\leq s\leq s_{0}\) the bicharacteristic \(\gamma(s)\) does not intersect the boundary \(\Gamma\). It may only intersect \(\partial\mathcal{L}\setminus\overline{\Gamma}\), on which we have the homogeneous Dirichlet condition \(u_{k|\partial\Omega}=0\). Consequently, the measure propagation result of Lebeau [15] or Burq-Lebeau [8] applies. Starting backward from \(\gamma(s_{0})\) and using the propagation of the measure \(\mu\), we obtain that \(\rho\notin\mathrm{supp}(\mu)\). Finally, the case \(s_{0}<0,\,0<|s_{0}|<\frac{T+T_{0}}{2}\), can be treated in a similar way.
**Remark 5.4**.: _In the rest of the proof, the neighborhood \(W\) and the interval \(J\) are fixed as in the proof of Proposition 5.3 above._
**Proposition 5.5**.: _Under assumptions A1 and A2, the measures \(\mu,\nu\) and \(\tilde{\mu}\) vanish on the hyperbolic set of the boundary \(W^{\partial}_{J}\)._
Proof.: The fact that \(\mu\mathbf{1}_{\mathcal{H}}=0\) is proved in the Burq-Lebeau paper (see [8, Lemma 2.6]) and is independent of the boundary condition; it only requires the weak convergence of the sequence \((u_{k})\) to \(0\) in \(H^{1}_{loc}(\mathcal{L})\). On the other hand, since \(\mu=0\) in the interior of \(W_{J}\) thanks to Proposition 5.3, the two hyperbolic fibers incoming to and outgoing from any hyperbolic point \(\rho_{0}\) of the boundary \(W^{\partial}_{J}\) are not charged, i.e they do not intersect \(supp(\mu)\). Therefore, the Taylor pseudo-differential factorization (see for instance Burq-Lebeau [8, Appendix]) shows that, microlocally near \(\rho_{0}\), \(g_{k}=u_{k|\partial\Omega}\to 0\) in \(H^{1}\) and \(\partial_{n}u_{k|\partial\Omega}\to 0\) in \(L^{2}\) strongly. As a by-product, we get that \(\rho_{0}\) lies neither in \(\mathrm{supp}\ \tilde{\mu}\) nor in \(\mathrm{supp}\ \nu\).
At this step, we can already conclude the proof of Theorem 2.3 under assumption A3.a.
**Corollary 5.6**.: _Under assumptions A1, A2 and A3.a, the measure \(\tilde{\mu}\) identically vanishes on the boundary \(W^{\partial}_{J}\)._
Proof.: This result is a byproduct of Proposition 5.5 and we develop it for the convenience of the reader. First we recall a classical property of microlocal defect measures, namely the microlocal elliptic regularity. Let \(\chi=\chi(t,x^{\prime},\tau,\xi^{\prime})\) and \(\psi=\psi(t,x^{\prime},\tau,\xi^{\prime})\) be two \(0\)-order pseudo-differential symbols supported in \(T^{*}(\partial\mathcal{L})_{|W^{\partial}_{J}}\setminus CharB_{\alpha}\), such that \(\chi\equiv 1\) on \(supp(\psi).\) It's classical that one can find a pseudo-differential operator \(B_{-\alpha}\), of order \((-\alpha)\) on \(\partial\mathcal{L}\), such that
\[B_{-\alpha}B_{\alpha}\chi(t,x^{\prime},D_{t},D_{x^{\prime}})=\psi(t,x^{\prime },D_{t},D_{x^{\prime}})+R_{-\infty} \tag{5.14}\]
where \(R_{-\infty}\) is infinitely smoothing. Consequently, we can write the elliptic estimate
\[\|\psi(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\|_{H^{1}(\partial\mathcal{L})} \leq c_{0}\|B_{\alpha}\chi(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\|_{H^{1- \alpha}(\partial\mathcal{L})}+c_{1}\|g_{k}\|_{L^{2}(\partial\mathcal{L})} \tag{5.15}\]
for some constants \(c_{0},c_{1}>0\). Therefore
\[\|\psi(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\|_{H^{1}(\partial\mathcal{L})} \leq c_{0}\|[B_{\alpha},\chi(t,x^{\prime},D_{t},D_{x^{\prime}})]g_{k}\|_{H^{1- \alpha}(\partial\mathcal{L})}+c_{1}\|g_{k}\|_{L^{2}(\partial\mathcal{L})}\leq c _{2}\|g_{k}\|_{L^{2}(\partial\mathcal{L})} \tag{5.16}\]
for some \(c_{2}>0\). We then deduce that \(\psi(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\to 0\) strongly in \(H^{1}(\partial\mathcal{L})\), which expresses that \(supp(\tilde{\mu})\subset CharB_{\alpha}\). Now, \(CharB_{\alpha}\subset\mathcal{H}\) thanks to assumption A3.a, and \(\tilde{\mu}\equiv 0\) on \(\mathcal{H}\) accordingly to Proposition 5.5. Therefore, \(\tilde{\mu}\) vanishes identically.
The proof of Theorem 2.3 under assumption A3.a is complete.
Let us now continue the proof of Theorem 2.3 under assumption A3.b.
Denote by \(A_{j}^{k}\) the terms of (4.20) where we set \(u_{k}\) instead of \(u\), and consider a pseudo-differential symbol \(q=\sigma(Q)\in\mathcal{A}^{0}\) ( see Section 3.3), chosen as in Section 4.2.
**Corollary 5.7**.: _Under assumptions A1 and A2, if \(q=\sigma(Q)\) is compactly supported in \(W_{J}\), we have_
\[\lim_{k\to\infty}A_{j}^{k}=0,\qquad\forall j\in\{1,8,9,10,12,14\}. \tag{5.17}\]
Proof.: We recall that the symbol \(q=\sigma(Q)\) is independent of \(x_{n}\) in a strip \(\{|x_{n}|<\beta\}\), \(\beta>0\) small. More precisely, we take \(q\) in the form \(q(x_{n},x^{\prime},t,\xi^{\prime},\tau)=\varphi(x_{n})\tilde{q}(x^{\prime},t, \xi^{\prime},\tau)\), with \(\varphi\in\mathcal{C}_{0}^{\infty}(\mathbb{R})\), equal to \(1\) near \(x_{n}=0\). Therefore, if we choose \(\beta\) small enough, and assume that \(\tilde{q}\) is supported in time in the interval \(J\), the symbol of the bracket operator \([\partial_{n},Q^{2}]\) is of order \(0\) and compactly supported in the interior of \(W_{J}\). Thus, \(\lim_{k\to\infty}A_{j}^{k}=0\) for \(j\in\{1,10,14\}\) thanks to Proposition 5.3. The terms \(A_{j}^{k},\ j=8,9,12\) are trivial.
**Remark 5.8**.: _In the rest of the proof we will henceforth work with this choice of symbol \(q\), and we will successively choose the localization of its support._
Now, for the convenience of the reader, we recall the following result due to Burq-Lebeau [8].
In the system of geodesic coordinates introduced above, consider the function \(\theta\) defined \(\mu\)-almost everywhere on \(S\hat{Z}\)
\[\theta=\frac{\xi_{n}}{|(\tau,\xi^{\prime})|}\ \ \text{in}\quad x_{n}>0,\qquad \qquad\theta=i\frac{\sqrt{-r_{0}}}{|(\tau,\xi^{\prime})|}\ \ \text{in}\quad\mathcal{E}\cup\mathcal{G}. \tag{5.18}\]
**Lemma 5.9**.: _[_8_, Lemma 2.7]_ _Let \(Q_{j}\in\mathcal{A}^{j}\), \(j=1,2\), be tangential pseudo-differential operators with principal symbols \(\sigma(Q_{j})=q_{j}\). Then we have, with \(\lambda^{2}=|(\tau,\xi^{\prime})|^{2}(1+|\theta|^{2})\),_
\[lim_{k\to\infty}\Big{(}(Q_{2}-iQ_{1}\partial_{n})u_{k}\,|\,u_{k}\Big{)}_{L^{2 }(\mathcal{L})}=\Big{\langle}\mu,\lambda^{-2}(q_{2}+q_{1}\theta|(\tau,\xi^{ \prime})|)\Big{\rangle} \tag{5.19}\]
**Proposition 5.10**.: _The measure \(\mu\) vanishes on the elliptic set of the boundary \(W_{J}^{\partial}\)._
Proof.: The elliptic microlocal regularity for measures or wave fronts is classical for elliptic interior points \(\rho\in T^{*}W_{J}\). In what concerns the elliptic set of the boundary, we will invoke a result of Burq-Lebeau ([15, Lemma 2.6] ), and we have to introduce some additional notations. In the framework above, they define a boundary measure \(\mu_{\partial}^{0}\) given by
\[\forall Q\in\mathcal{A}^{0},\qquad lim_{k}\int_{\partial\mathcal{L}}Qu_{k}\, \partial_{n}\overline{u}_{k}d\sigma=\Big{\langle}\mu_{\partial}^{0},\sigma(Q) _{|x_{n}=0}\Big{\rangle} \tag{5.20}\]
Moreover, they provide the following link between the two measures \(\mu\) and \(\mu_{\partial}^{0}\) :
\[\mu_{\partial}^{0}=-2\frac{|\theta|^{2}}{1+|\theta|^{2}}\,\mu\mathbf{1}_{|x_{n }=0}. \tag{5.21}\]
Therefore, we get
\[\mu_{\partial}^{0}=\frac{2r_{0}(x^{\prime};\tau,\xi^{\prime})}{|(\tau,\xi^{ \prime})|^{2}-r_{0}(x^{\prime};\tau,\xi^{\prime})}\mu\,\mathbf{1}_{|x_{n}=0} \quad\text{on}\quad\mathcal{E}\cup\mathcal{G}\]
But, since \(u_{k|\partial\mathcal{L}}=g_{k}\to 0\) in \(L^{2}_{loc}(\partial\mathcal{L})\) strongly and \(\partial_{n}u_{k|\partial\mathcal{L}}\) is bounded in \(L^{2}_{loc}(\partial\mathcal{L})\), we easily get that \(\mu^{0}_{\partial}\equiv 0\). Consequently, we obtain \(\mu\equiv 0\) on \(\mathcal{E}\), since \(r_{0}<0\) on this set.
**Remark 5.11**.:
1. _Notice that for this proposition, we have used none of the assumptions_ \(A_{j}\)_,_ \(j=1,2,3\)_. We have only used the weak convergence_ \(g_{k}\rightharpoonup 0\) _in_ \(H^{1}(\partial\mathcal{L})\) _and subsequently_ \(u_{k}\rightharpoonup 0\) _in_ \(H^{1}(\mathcal{L})\)_._
2. _One should be careful that this proposition does not give any information about the behavior of the boundary data_ \(g_{k}\) _on_ \(\mathcal{E}\cup\mathcal{G}\)_. In other words, we do not yet have any information about_ \(\tilde{\mu}\mathbf{1}_{|\mathcal{E}\cup\mathcal{G}}\)_._
3. _Up to now, we have proved that the measure_ \(\mu\) _vanishes in_ \(T^{*}(W_{J})\) _, i.e on interior points, and on the subset_ \(\mathcal{H}\cup\mathcal{E}\) _of_ \(T^{*}(W_{J}^{\partial})\) _. Therefore,_ \(\mu\) _is supported in the glancing set, that is_ \(\mu=\mu\mathbf{1}_{\mathcal{G}}\)_._
**Lemma 5.12**.: _Under assumptions A1 and A2, and with a suitable choice of the pseudo-differential symbol \(q=\sigma(Q)\), we have_
\[\lim_{k\to\infty}A_{j}^{k}=0,\qquad\forall j\in\{2,3,4,5,6,7,11,13\}. \tag{5.22}\]
_Together with (5.17), this implies that the right hand side of (4.19) tends to \(0\) as \(k\to\infty\)._
Proof.: The proof essentially relies on the calculus Lemma 5.9. Detailing the limit (5.19), and according to Propositions 5.3, 5.5 and 5.10, we can write
\[\left\{\begin{aligned} lim_{k\to\infty}\Big{(}Q_{2}u\,|\,u\Big{)}_{ L^{2}(\mathcal{L})}=\Big{\langle}\mu\mathbf{1}_{\mathcal{G}},\lambda^{-2}q_{2} \Big{\rangle}\\ lim_{k\to\infty}\Big{(}-iQ_{1}\partial_{n}u\,|\,u\Big{)}_{L^{2}( \mathcal{L})}=\Big{\langle}\mu\mathbf{1}_{\mathcal{G}},\lambda^{-2}q_{1} \theta|(\tau,\xi^{\prime})|\Big{\rangle}=\Big{\langle}\mu\mathbf{1}_{ \mathcal{G}},i\lambda^{-2}q_{1}\sqrt{-r_{0}}\Big{\rangle}=0\end{aligned}\right. \tag{5.23}\]
since \(r_{0}\equiv 0\) on the glancing set \(\mathcal{G}\).
First, we take the pseudo-differential symbol \(q=\sigma(Q)\) as in the proof of Corollary 5.7. With this choice, the terms \(A_{2}^{k},A_{4}^{k},A_{5}^{k},A_{6}^{k},A_{7}^{k}\) and \(A_{13}^{k}\) can be treated with the second limit of (5.23), since the pseudo-differential operators involved, \((Q^{2}-Q^{*2})\), resp. \((R-R^{*})\), are of order \(\leq-1\), resp. \(1\).
On the other hand, the term \(A_{11}^{k}\) tends to \(0\) thanks to the first limit of (5.23), since its interior symbol contains the factor \(r\), which vanishes on \(\mathcal{G}\). Finally, for the term \(A_{3}^{k}\), we just have to notice that \(\partial_{n}u_{k}\) is bounded in \(L^{2}_{x_{n}}(L^{2}_{t,x^{\prime}})\) and converges weakly to \(0\) in this space, and use again the fact that \((Q^{2}-Q^{*2})\) is of order \(\leq(-1)\).
As a by-product, we have obtained the following corollary. We denote by \(q=\sigma(Q)\) the symbol of the pseudo-differential operator \(Q\in\mathcal{A}^{0}\).
**Corollary 5.13**.: _Under assumptions A1 and A2, the measures \(\mu,\tilde{\mu}\) and \(\nu\) satisfy the following identity_
\[\Big{\langle}\nu,q^{2}\Big{\rangle}+\Big{\langle}\tilde{\mu},|(\tau,\xi^{ \prime})|^{-2}q^{2}r_{0}\Big{\rangle}=-\Big{\langle}\mu\mathbf{1}_{\mathcal{G} },|(\tau,\xi^{\prime})|^{-2}\,q^{2}(\partial_{n}r)\Big{\rangle}, \tag{5.24}\]
_for every \(0\)-order symbol \(q\) supported in \(W_{J}\)._
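For orientation, here is a brief sketch of where the three brackets in (5.24) come from, assuming the normalizations of Lemma 5.9 and of the boundary measures recalled above: passing to the limit in (4.19) written for \(u_{k}\), the right hand side tends to \(0\) by Corollary 5.7 and Lemma 5.12, while on the left hand side

\[\int_{\partial\mathcal{L}}\partial_{n}u_{k}\,Q^{2}\partial_{n}\overline{u}_{k}\,d\sigma\longrightarrow\Big{\langle}\nu,q^{2}\Big{\rangle},\qquad\int_{\partial\mathcal{L}}Ru_{k}\,Q^{2}\,\overline{u}_{k}\,d\sigma\longrightarrow\Big{\langle}\tilde{\mu},|(\tau,\xi^{\prime})|^{-2}q^{2}r_{0}\Big{\rangle},\]
\[\int_{\mathcal{L}}(\partial_{n}R)u_{k}\,Q^{2}\,\overline{u}_{k}\longrightarrow\Big{\langle}\mu\mathbf{1}_{\mathcal{G}},\lambda^{-2}q^{2}(\partial_{n}r)\Big{\rangle}=\Big{\langle}\mu\mathbf{1}_{\mathcal{G}},|(\tau,\xi^{\prime})|^{-2}q^{2}(\partial_{n}r)\Big{\rangle},\]

the last equality because \(\theta=0\), hence \(\lambda^{2}=|(\tau,\xi^{\prime})|^{2}\), on \(\mathcal{G}\). This is exactly identity (5.24).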
Now, we can conclude the study of the measure \(\mu\).
**Proposition 5.14**.: _The measure \(\mu\) vanishes identically over \(T^{*}(W^{\partial}_{J})\)._
_In particular, \(u_{k}\to 0\) strongly in \(H^{1}(W_{J})\) up to the boundary._
Proof.: The proof relies on a specific choice of the symbol \(q\). First, we recall the notation
\(r_{0}(x^{\prime},\tau,\xi^{\prime})=\tau^{2}-\sum_{1\leq i,j\leq n-1}a_{ij}(x^ {\prime},0)\xi_{i}\xi_{j}\), see Section 3.1. In addition, it's clear that in formula (5.24), \(q=q_{|x_{n}=0}\). Let us then consider a function \(q_{0}\in\mathcal{C}^{\infty}_{0}(\mathbb{R})\), supported in \([-1,1]\), such that \(q_{0}(s)=1\) for \(s\in[-1/2,1/2]\). We set for \(\varepsilon>0\)
\[q_{\varepsilon}(t,x^{\prime},\tau,\xi)=q_{0}\Big{(}\frac{r_{0}(x^{\prime}, \tau,\xi)}{\varepsilon\sum_{1\leq i,j\leq n-1}a_{ij}(x^{\prime},0)\xi_{i}\xi_{ j}}\Big{)} \tag{5.25}\]
Plugging \(q_{\varepsilon}\) into (5.24) and letting \(\varepsilon\to 0^{+}\), we get by Lebesgue dominated convergence
\[\Big{\langle}\nu,\mathbf{1}_{\mathcal{G}}\Big{\rangle}=-\Big{\langle}\mu \mathbf{1}_{\mathcal{G}},|(\tau,\xi^{\prime})|^{-2}\left(\partial_{n}r\right) \Big{\rangle} \tag{5.26}\]
All points of the glancing set \(\mathcal{G}=\mathcal{G}_{d}\) are strictly diffractive (see (3.3)), which gives \(\partial_{n}r_{|\mathcal{G}}>0\). Therefore the two members of this identity are of opposite sign and thus both are equal to zero. Consequently, the measure \(\mu\) vanishes identically.
**Remark 5.15**.:
1. _Finally, summarizing the previous results, we obtain that the measure identity (_5.24_) reads as follows:_ (5.27) \[\Big{\langle}\nu\mathbf{1}_{\mathcal{E}\cup\mathcal{G}},q^{2}\Big{\rangle}+\Big{\langle}\tilde{\mu}\mathbf{1}_{\mathcal{E}\cup\mathcal{G}},|(\tau,\xi^{\prime})|^{-2}q^{2}r_{0}\Big{\rangle}=0\] _for every 0-order symbol_ \(q\) _supported in_ \(W_{J}\)_._
2. _Roughly speaking, this formula tells us that we have two ways to prove that_ \(\tilde{\mu}\equiv 0\)_. Either we set a condition on the data_ \(g\) _itself (in other words, we make use of assumption_ A3_.a or_ A3_.b), or we use a condition linking the two boundary data_ \(\partial_{n}u_{|\partial\mathcal{L}}\) _and_ \(u_{|\partial\mathcal{L}}=g\)_, which is assumption_ A3_.c._
### End of the proof of Theorem 2.3
Here we have reached the point where, for the first time, we make use of assumptions A3.b or A3.c.
**Proposition 5.16**.: _Under assumptions A1, A2 and A3.b, the measures \(\tilde{\mu}\) and \(\nu\) vanish identically on the set \(\mathcal{E}\cup\mathcal{G}\) and hence on the boundary \(\partial\mathcal{L}\)._
Proof.: In the setting of assumption A3.b, for every \(t\in J\) we can write the classical elliptic estimate
\[\|g_{k}(t,.)\|_{H^{1}(\partial\Omega)}\leq c_{0}\|c(t,x^{\prime},D_{x^{\prime }})g_{k}(t,.)\|_{H^{1-\alpha}(\partial\Omega)}+c_{1}\|g_{k}(t,.)\|_{L^{2}( \partial\Omega)}=c_{1}\|g_{k}(t,.)\|_{L^{2}(\partial\Omega)} \tag{5.28}\]
for some constants \(c_{0},c_{1}>0\) independent of \(t\in J\). We deduce that uniformly with respect to \(t\in J\),
\[\|D_{x^{\prime}_{j}}g_{k}(t,.)\|_{L^{2}(\partial\Omega)}\to 0\quad\text{for} \quad k\to\infty\]
Therefore, integrating on \(t\) and taking the limit \(k\to\infty\), we can write
\[\Big{\langle}\tilde{\mu},|(\tau,\xi^{\prime})|^{-2}|\xi^{\prime}|^{2}\Big{\rangle}=0 \tag{5.29}\]
and this yields
\[\Big{\langle}\tilde{\mu},|(\tau,\xi^{\prime})|^{-2}q^{2}\tau^{2}\Big{\rangle}= \Big{\langle}\tilde{\mu}\mathbf{1}_{\mathcal{E}\cup\mathcal{G}},|(\tau,\xi^{ \prime})|^{-2}q^{2}\tau^{2}\Big{\rangle}=0 \tag{5.30}\]
since \(\tau^{2}\leq c|\xi^{\prime}|^{2}\) in \(\mathcal{E}\cup\mathcal{G}\). Together with the result of Proposition 5.5, this gives \(\tilde{\mu}\equiv 0\) and \(\nu\equiv 0\) according to (5.27).
This completes the proof of Theorem 2.3 under assumption A3.b.
**Proposition 5.17**.: _Under assumptions A1, A2 and A3.c, the measures \(\tilde{\mu}\) and \(\nu\) vanish identically on the set \(\mathcal{E}\cup\mathcal{G}\) and hence on the boundary \(\partial\mathcal{L}\)._
Proof.: All identities we will handle in this proof take place on the boundary \(\partial\mathcal{L}\). Therefore, we will simply write \(\partial_{n}u_{k}\) (resp. \(u_{k}\)) instead of \(\partial_{n}u_{k|\partial\mathcal{L}}\) (resp. \(u_{k|\partial\mathcal{L}}\)). In addition, without loss of generality, we may assume that \(\mathcal{U}_{M}\subset W_{J}\). Denote \(F_{k}=\partial_{n}u_{k}+\partial_{t}u_{k}\). Clearly, \(F_{k}\rightharpoonup 0\) weakly in \(L^{2}(\partial\mathcal{L})\). In addition, thanks to condition A3.c, \(F_{k}\) is bounded in \(H^{\alpha}(\mathcal{U}_{M})\), with \(\alpha>0\). Therefore we may assume that
\[\partial_{n}u_{k}+\partial_{t}u_{k}=F_{k}\to 0\quad\text{strongly in}\quad L^{2}( \mathcal{U}_{M}). \tag{5.31}\]
Consider an elliptic point \(\rho_{0}\in T^{*}(\mathcal{U}_{M})\). A classical analysis at elliptic points of the boundary, see for instance [8, Appendix], shows that microlocally near \(\rho_{0}\), we have
\[\partial_{n}u_{k}-Op(\sqrt{-r_{0}(x^{\prime},t,\tau,\xi^{\prime})})u_{k}=o(1) \quad\text{in}\quad H^{1/2},\quad\text{for}\quad k\to\infty \tag{5.32}\]
Together with (5.31), this yields
\[\partial_{t}u_{k}+Op(\sqrt{-r_{0}(x^{\prime},t,\tau,\xi^{\prime})})u_{k}=o(1) \quad\text{in}\quad L^{2},\quad\text{for}\quad k\to\infty \tag{5.33}\]
Therefore \(u_{k|\partial\mathcal{L}}=g_{k}\to 0\) strongly in \(H^{1}\) near \(\rho_{0}\) since the symbol \(i\tau+\sqrt{-r_{0}(x^{\prime},t,\tau,\xi^{\prime})}\) is elliptic near this point. Consequently \(\rho_{0}\notin supp(\tilde{\mu})\) and, using again (5.27), \(\rho_{0}\notin supp(\nu)\).
On the other hand, if \(Q\) is a \(0\)-order polyhomogeneous pseudo-differential operator on \(\partial\mathcal{L}\), with symbol \(q\), real valued and supported in \(\mathcal{U}_{M}\), we have
\[\Big{(}Q^{2}\partial_{n}u_{k}\,|\,\partial_{n}u_{k}\Big{)}_{L^{2}(\mathcal{U }_{M})}=\Big{(}Q^{2}\partial_{t}u_{k}\,|\,\partial_{t}u_{k}\Big{)}_{L^{2}( \mathcal{U}_{M})}+\Big{(}Q^{2}F_{k}\,|\,F_{k}\Big{)}_{L^{2}(\mathcal{U}_{M})}- 2Re\Big{(}Q^{2}F_{k}\,|\,\partial_{t}u_{k}\Big{)}_{L^{2}(\mathcal{U}_{M})} \tag{5.34}\]
Passing to the limit in \(k\) and taking into account (5.31), we obtain
\[\Big{\langle}\nu\mathbf{1}_{\mathcal{E}\cup\mathcal{G}},q^{2}\Big{\rangle}= \Big{\langle}\tilde{\mu}\mathbf{1}_{\mathcal{E}\cup\mathcal{G}},|(\tau,\xi^{ \prime})|^{-2}q^{2}\tau^{2}\Big{\rangle} \tag{5.35}\]
Using then the fact that \(\tilde{\mu}=\tilde{\mu}\mathbf{1}_{\mathcal{G}}\) and plugging into (5.27), we get
\[\Big{\langle}\tilde{\mu}\mathbf{1}_{\mathcal{G}},|(\tau,\xi^{\prime})|^{-2}q^ {2}(r_{0}+\tau^{2})\Big{\rangle}=\Big{\langle}\tilde{\mu}\mathbf{1}_{ \mathcal{G}},|(\tau,\xi^{\prime})|^{-2}q^{2}\tau^{2}\Big{\rangle}=0 \tag{5.36}\]
for every symbol \(q\). This gives \(\tilde{\mu}\equiv 0\) since \(\tau\neq 0\) near \(\mathcal{G}\).
This completes the proof of Theorem 2.3 under assumption A3.c.
## 6. Proof of Theorem 2.5
The proof is based on the wave front propagation theorem of Melrose-Sjöstrand, see [19]. We start with a general remark about solutions of system (1.1). Consider \(g\in H^{1}(\partial\mathcal{L})\) with support in \(\overline{\Gamma}_{M}=[0,M]\times O\), and assume in addition that \(WF(g)\), the \(\mathcal{C}^{\infty}\)-wave front set of \(g\), is contained in the elliptic set \(\mathcal{E}\). First, we recall that the corresponding solution \(u\) vanishes identically for \(t\leq 0\). Therefore \(u\) is of class \(\mathcal{C}^{\infty}\) up to the boundary \(\partial\mathcal{L}\), outside \(\overline{\Gamma}_{M}\). Indeed, consider \(\rho\in T^{*}_{b}(\mathcal{L})\), \(\rho\notin T^{*}(\Gamma_{M})\), and denote by \(\gamma_{\rho}\) the generalized bicharacteristic curve issued from \(\rho\). Following this curve backward in time, one enters the region \(\{t<0\}\), say at some point \(\gamma_{\rho}(-t_{0}),\,t_{0}>0\), where \(u\) is smooth. According to the description of a generalized bicharacteristic curve given in Section 3.2, for each \(s_{0}\in[-t_{0},0]\) one of the following holds:
* \(\gamma_{\rho}(s_{0})\) is an interior point, i.e lies in the characteristic set \(Char(P_{A})\cap T^{*}(\mathcal{L})\),
* \(\gamma_{\rho}\) hits the boundary at a hyperbolic point at \(s=s_{0}\),
* \(\gamma_{\rho}(s_{0})\) is a glancing point, i.e \(\gamma_{\rho}(s_{0})\in\mathcal{G}\).
In all cases, \(\gamma_{\rho}(s)\) never intersects the closed set \(WF(g)\subset\mathcal{E}\). Hence by regularity propagation (see [19]), \(\rho\notin WF(u)\). Moreover, this propagation property yields that the \(H^{\alpha}\) norm of \(u\) is microlocally bounded near \(\rho\), for every \(\alpha\geq 1\).
In the sequel we use this property to prove that estimate (2.5) fails in general.
Take \(s<0,\,\alpha\in]1,2[\), and \(F\) a closed conical subset of \(T^{*}(\Gamma_{M})\), \(F\subset\mathcal{E}\). Also, consider a symbol \(a(t,x^{\prime},\tau,\xi^{\prime})\) of order \(0\), supported in \(T^{*}(\Gamma_{M})\cap\mathcal{E}\) and equal to \(1\) on \(F\). Denoting \(A=a(t,x^{\prime},D_{t},D_{x^{\prime}})\) the corresponding pseudo-differential operator, it's classical that one can construct a sequence of smooth functions \((f_{k})\subset H^{s}(\partial\mathcal{L})\), compactly supported in \(\Gamma_{M}\), satisfying
\[\|f_{k}\|_{H^{s}}=1\quad\text{and}\quad f_{k}\rightharpoonup 0\quad\text{weakly in}\quad H^{s}(\Gamma_{M}), \tag{6.1}\]
and
\[\|Af_{k}\|_{H^{s}}\to 1\quad\text{for}\quad k\to\infty. \tag{6.2}\]
This simply means that the lack of compactness of \((f_{k})\) is located in \(supp(a)\subset\mathcal{E}\).
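A minimal example of such a construction (the point \((y_{*},\eta_{*})\), the cut-off \(\varphi\) and the normalization below are introduced only for illustration): pick \((y_{*},\eta_{*})\in F\) with \(|\eta_{*}|=1\) and \(\varphi\in\mathcal{C}_{0}^{\infty}(\Gamma_{M})\), \(\|\varphi\|_{L^{2}}=1\), supported near \(y_{*}\), assuming in addition that \(a\equiv 1\) on a conical neighborhood of \(supp(\varphi)\times\{\eta_{*}\}\), and set

\[f_{k}(y)=c_{k}\,k^{-s}\,e^{ik\,y\cdot\eta_{*}}\,\varphi(y),\qquad y=(t,x^{\prime}),\]

with \(c_{k}\to 1\) chosen so that \(\|f_{k}\|_{H^{s}}=1\). Indeed \(\widehat{f_{k}}(\eta)=c_{k}k^{-s}\widehat{\varphi}(\eta-k\eta_{*})\) concentrates where \(\langle\eta\rangle\approx k\), so that \(\|f_{k}\|_{H^{s}}\to 1\); the rapid oscillation gives the weak convergence in (6.1); and since the symbol of \(Id-A\) vanishes on a conical neighborhood of the concentration set, \(\|(Id-A)f_{k}\|_{H^{s}}\to 0\), whence (6.2).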
Finally consider a pseudo-differential operator on \(\partial\mathcal{L}\), \(B_{s-\alpha}=b_{s-\alpha}(t,x^{\prime},D_{t},D_{x^{\prime}})\) of order \(s-\alpha\), with \(b_{s-\alpha}\) supported in \(T^{*}(\Gamma_{M})\). The following sequence \(g_{k}\) will be the key of our counter-example.
\[g_{k}=Af_{k}+B_{s-\alpha}(Id-A)f_{k}. \tag{6.3}\]
First, the second term of the RHS of (6.3) is clearly bounded in \(H^{\alpha}(\Gamma_{M})\); precisely, we have for some \(c>0\), \(\|B_{s-\alpha}(Id-A)f_{k}\|_{H^{\alpha}}\leq c\|f_{k}\|_{H^{s}}=c\). Therefore, according to (6.1) and the compactness of the embedding \(H^{\alpha}(\Gamma_{M})\hookrightarrow H^{s}(\Gamma_{M})\), we deduce that \(\|B_{s-\alpha}(Id-A)f_{k}\|_{H^{s}}\to 0\). This yields \(\|g_{k}\|_{H^{s}}\to 1\) for \(k\to\infty\).
Secondly, it's classical that \(\|q(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\|_{H^{\alpha}}\) is uniformly bounded by \(\|f_{k}\|_{H^{s}}\), for any pseudo-differential symbol \(q\) of order \(0\) supported in \((\mathcal{H}\cup\mathcal{G})_{|\Gamma_{M}}\). Indeed, in this case, the symbols \(q\) and \(a\) have disjoint supports and the composition \(Op(q)A\) is infinitely smoothing. Using then (6.3), we get for some constant \(c>0\)
\[\|q(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\|_{H^{\alpha}}\leq c\|f_{k}\|_{H^ {s}}=c \tag{6.4}\]
Moreover, according to (6.1), we obtain that \(q(t,x^{\prime},D_{t},D_{x^{\prime}})g_{k}\to 0\) strongly in \(H^{\alpha^{\prime}}(\Gamma_{M})\) for all \(\alpha^{\prime}<\alpha\).
Let us now analyze the sequence \((u_{k})\) of solutions to the wave system (1.1) with \((g_{k})\) as boundary data. We split it in the following form \(u_{k}=v_{k}+w_{k}\) where
\[\left\{\begin{array}{c}P_{A}v_{k}=0\quad\text{in}\quad\mathcal{L},\quad v_{ k|\partial\mathcal{L}}=Af_{k}\\ \\ P_{A}w_{k}=0\quad\text{in}\quad\mathcal{L},\quad w_{k|\partial\mathcal{L}}=B_{s -\alpha}(Id-A)f_{k}\\ \\ v_{k}(0)=\partial_{t}v_{k}(0)=w_{k}(0)=\partial_{t}w_{k}(0)=0.\end{array}\right. \tag{6.5}\]
First, as a consequence of the well posedness of system (1.1) ( see [14] ), it's clear that the sequence \(w_{k}\) is bounded in \(H^{\alpha}(\mathcal{L}_{M+T})\) and thus \(w_{k}\to 0\) strongly in \(H^{\alpha^{\prime}}(\mathcal{L}_{M+T})\) for all \(\alpha^{\prime}<\alpha\). In particular,
\[\|\partial_{n}w_{k|\partial\Omega}\|_{L^{2}(\Gamma_{M+T}^{\prime})}\to 0\quad \text{strongly}. \tag{6.6}\]
Next, to study the sequence \((v_{k})\), we need the following Lemma.
**Lemma 6.1**.: _Consider \(s<0\) and for \(c>0\) denote \(\mathcal{E}_{c}=\{(t,x;\tau,\xi)\in T^{*}(\mathbb{R}^{n}),\ |\tau|\leq c|\xi|\}\). Then on the space \(\{h\in H^{s}(\mathbb{R}^{n}),supp(\hat{h})\subset\mathcal{E}_{c}\}\), \(\|.\|_{L^{2}(\mathbb{R};H^{s}(\mathbb{R}^{n-1}))}\) is a norm, equivalent to its natural norm \(\|.\|_{H^{s}(\mathbb{R}^{n})}\)._
_As a consequence, we deduce that on the space \(\{h\in H^{s}(\Gamma_{M}),supp(\hat{h})\subset\mathcal{E}\}\), \(\|.\|_{L^{2}(0,M;H^{s}(O))}\) is a norm, equivalent to its natural norm \(\|.\|_{H^{s}(\Gamma_{M})}\)._
The proof is straightforward and left to the reader.
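For completeness, a minimal sketch: writing the dual variables as \((\tau,\xi)\in\mathbb{R}\times\mathbb{R}^{n-1}\), the support condition \(|\tau|\leq c|\xi|\) gives \(\langle(\tau,\xi)\rangle^{2s}\approx\langle\xi\rangle^{2s}\) on \(supp(\hat{h})\), with constants depending only on \(c\) and \(s\); hence, by Plancherel,

\[\|h\|_{H^{s}(\mathbb{R}^{n})}^{2}\approx\int\langle(\tau,\xi)\rangle^{2s}|\hat{h}(\tau,\xi)|^{2}\,d\tau\,d\xi\ \approx\ \int\langle\xi\rangle^{2s}|\hat{h}(\tau,\xi)|^{2}\,d\tau\,d\xi\approx\|h\|_{L^{2}(\mathbb{R};H^{s}(\mathbb{R}^{n-1}))}^{2}.\]

The statement on \(\Gamma_{M}\) then follows by localization in coordinate charts of \(O\).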
The sequence \((Af_{k})\) is bounded in \(L^{2}(0,M+T;H^{s}(O))\). Therefore \((v_{k})\) is bounded in \(L^{2}(0,M+T;H^{s}(\Omega))\) (see [14, Th. 2.7]), and thus in \(H^{s}(\mathcal{L}_{M+T})\). Using the propagation argument developed at the beginning of this section, we see that \((v_{k})\), and thus \((u_{k})\), is bounded in \(H^{\alpha}(\mathcal{L}_{M+T})\) up to the boundary, except on the closed subset \(F\subset\mathcal{E}\). In particular, this sequence is bounded in \(H^{\alpha}(\mathcal{U})\) for any interior neighborhood \(\mathcal{U}\) of the boundary observation region \(\Gamma^{\prime}_{M+T}=(0,M+T)\times O^{\prime}\), i.e.
\[\|u_{k}\|_{H^{\alpha}(\mathcal{U})}\leq c\quad\text{for some}\quad c>0. \tag{6.7}\]
Finally, since \(u_{k}\rightharpoonup 0\) weakly in \(H^{s}(\mathcal{L})\) thanks to (6.1), we obtain that \(u_{k}\to 0\) strongly in \(H^{\alpha^{\prime}}(\mathcal{U})\) for any \(\alpha^{\prime}\in[1,\alpha[\), and this gives
\[\|\partial_{n}u_{k|\partial\Omega}\|_{L^{2}(\Gamma^{\prime}_{M+T})}\to 0\]
This concludes the proof of Theorem 2.5.
## 7. Appendix
This section is devoted to our second negative result where we analyze the wave system (1.1) with data microlocally concentrated near a glancing point of \(T^{*}(\partial\mathcal{L})_{|\Gamma_{M}}\). In this case we show that at least \(3\) derivatives are lost in the sidewise observation.
With the notations of Section 1.1, the following holds.
**Theorem 7.1**.: _There exists a sequence of functions \((g_{k})_{k}\subset H^{1}(\partial\mathcal{L})\) supported in \(\overline{\Gamma}_{M}\), and microlocally concentrated in the glancing set, such that_
\[\frac{\|\partial_{n}u_{k_{|\partial\Omega}}\|_{L^{2}(\Gamma^{\prime}_{M+T})}} {\|g_{k}\|_{H^{s}(\Gamma_{M})}}\quad\longrightarrow 0\quad\text{for}\quad k \longrightarrow\infty, \tag{7.1}\]
_for every \(T>0\) and every \(s>-2\)._
In this section we present the proof of Theorem 7.1, and we start with a short description of the general strategy. First, for an elliptic point \(\omega\in T^{*}(\Gamma_{M})\), we construct a family of solutions \(u_{\varepsilon}\) of the wave system (1.1) with smooth traces \(g_{\varepsilon}\) microlocally concentrated at \(\omega\), and for \(s\leq 1\), we compare the norms \(\|\partial_{n}u_{\varepsilon_{|\partial\Omega}}\|_{L^{2}(\Gamma^{\prime}_{M+T })}\) and \(\|u_{\varepsilon_{|\partial\Omega}}\|_{H^{s}(\Gamma_{M})}\). The idea is then to use a suitable sequence of elliptic points \(\omega_{\nu}\) of \(T^{*}(\Gamma_{M})\) converging to a glancing point \(\omega_{0}\), and to perform the same task near each \(\omega_{\nu}\) with a rigorous control of the ellipticity constant. Letting then \(\nu\to 0\) provides the result.
### Microlocal preparation
The key point is a microlocal factorization of the wave symbol near elliptic points and the smoothing property of some parabolic operator (see M. Taylor [23]).
We recall that in the setting of Section 1.1, \(\Omega\) is a bounded open and connected subset of \(\mathbb{R}^{n}\) with boundary \(\partial\Omega\) of class \(\mathcal{C}^{\infty}\), and \(O\), \(O^{\prime}\) are two nonempty open subsets of \(\partial\Omega\) such that \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\). We denote by \(m_{0}=(t_{0},x_{0})\), \(t_{0}>0\), a point of \(\Gamma=\mathbb{R}\times O\), and using a local geodesic coordinate system, we assume that near \(m_{0}\), \(\Omega=\{(x^{\prime},x_{n}),\,x_{n}>0\}\) and \(\partial\Omega=\{(x^{\prime},0)\}\).
We recall also that in this special system of coordinates, near \(m_{0}\), the principal symbol of the wave operator takes the particular form stated in Section 3.1
\[\sigma(P_{A})=-\xi_{n}^{2}+\Big{(}\tau^{2}-\sum_{1\leq i,j\leq n-1}a_{ij}(x) \xi_{i}\xi_{j}\Big{)}=-\xi_{n}^{2}+r(x,\tau,\xi^{\prime}), \tag{7.2}\]
and we set \(r_{0}(x^{\prime},\tau,\xi^{\prime})=r(x^{\prime},0,\tau,\xi^{\prime})\). Extending the metric \((a_{ij}(x))_{i,j}\) near \(m_{0}\), in a smooth way outside the domain \(\Omega\), we may assume that the symbol representation (7.2) holds for \(|x_{n}|\leq b\) where \(b>0\) is small enough. Assume now that \(\omega_{0}=(t_{0},x^{\prime}_{0},\tau_{0},\xi^{\prime}_{0})\), \(t_{0}>0\), is an elliptic point of \(T^{*}(\partial\mathcal{L})\), that is \(r_{0}(x^{\prime}_{0},\tau_{0},\xi^{\prime}_{0})<0\), and consider in addition \(V_{\omega_{0}}\) a conical neighborhood of \(\omega_{0}\) in \(\mathbb{R}^{n}\times\mathbb{R}^{n}\) and \(0<a<b\) such that
\[-r(x,\tau,\xi^{\prime})=-r(x^{\prime},x_{n},\tau,\xi^{\prime})\geq C_{\omega_ {0}}^{2}(\tau^{2}+|\xi^{\prime}|^{2}),\quad\forall x_{n}\in[-a,a],\quad \forall(t,x^{\prime};\tau,\xi^{\prime})\in V_{\omega_{0}}. \tag{7.3}\]
Also, consider \(V^{\prime}_{\omega_{0}}\) another conical neighborhood of \(\omega_{0}\) in \(\mathbb{R}^{2n}\), \(\overline{V^{\prime}}_{\omega_{0}}\subset V_{\omega_{0}}\) and a symbol \(\Lambda=\Lambda(t,x^{\prime};\tau,\xi^{\prime})\in S^{0}_{1,0}(\mathbb{R}^{n} \times\mathbb{R}^{n})\), homogeneous of order \(0\), \(0\leq\Lambda\leq 1\), equal to \(1\) on \(V^{\prime}_{\omega_{0}}\) and supported in \(V_{\omega_{0}}\). Finally, we take a function \(m\in\mathcal{C}^{\infty}(\mathbb{R},\mathbb{R}_{+})\), \(m(s)=1\) for \(|s|\leq a/2\) and \(m(s)=0\) for \(|s|\geq 3a/4\) and we define the symbol
\[\chi(x_{n},t,x^{\prime};\tau,\xi^{\prime})=m(x_{n})\Lambda(t,x^{\prime};\tau, \xi^{\prime}) \tag{7.4}\]
In view of this, it's clear that for some \(C>0\) large enough, the tangential pseudo-differential symbol of order \(2\)
\[K(x_{n},t,x^{\prime},\tau,\xi^{\prime})=-r(x,\tau,\xi^{\prime})\chi(x_{n},t,x ^{\prime};\tau,\xi^{\prime})+C(\tau^{2}+|\xi^{\prime}|^{2})(1-\chi(x_{n},t,x ^{\prime};\tau,\xi^{\prime})) \tag{7.5}\]
is globally elliptic in the half-space \((x_{n},t,x^{\prime})\in[-a,+\infty[\times\mathbb{R}^{n}\), uniformly with respect to \(x_{n}\geq-a\).
**Remark 7.2**.:
1. _In the sequel, we will set_ \((y,\eta)=(t,x^{\prime},\tau,\xi^{\prime})\in\mathbb{R}^{2n}\)_._
2. _Actually, one can see that_ \(K(x_{n},y,\eta)\) _is a global tangential symbol, homogeneous of order_ \(2\)_, and lies in the class_ \(\mathcal{C}^{\infty}([-a,+\infty[;S^{2}_{1,0}(\mathbb{R}^{2n}))\)_. More precisely, one has_ (7.6) \[K(x_{n},y,\eta)\geq C^{2}_{\omega_{0}}|\eta|^{2}\quad\forall(x_{n},y,\eta)\in[-a,+\infty[\times\mathbb{R}^{2n}.\]
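A minimal check of (7.6), assuming the constant \(C\) in (7.5) is chosen with \(C\geq C_{\omega_{0}}^{2}\) (which the choice of \(C\) large enough permits) and using \(0\leq\chi\leq 1\): since \(supp(\chi)\subset\{|x_{n}|\leq 3a/4\}\times V_{\omega_{0}}\), estimate (7.3) gives \(-r\geq C_{\omega_{0}}^{2}(\tau^{2}+|\xi^{\prime}|^{2})\) wherever \(\chi>0\), so that

\[K=(-r)\,\chi+C|\eta|^{2}(1-\chi)\ \geq\ C_{\omega_{0}}^{2}|\eta|^{2}\,\chi+C|\eta|^{2}(1-\chi)\ \geq\ C_{\omega_{0}}^{2}|\eta|^{2},\qquad|\eta|^{2}=\tau^{2}+|\xi^{\prime}|^{2}.\]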
We devote the next section to the study of a global pseudo-differential system.
### A global pseudo-differential system
**Proposition 7.3**.: _There exists a family of elliptic symbols \(R(x_{n},y,\eta)\in\mathcal{C}^{\infty}([-a,+\infty[;S^{1}_{1,0}(\mathbb{R}^{ 2n}))\) satisfying in the sense of operators_
\[\partial^{2}_{x_{n}}-K=(\partial_{x_{n}}-R)(\partial_{x_{n}}+R)+R_{-\infty} \tag{7.7}\]
_with_
\[R(x_{n},y,\eta)\gtrsim C_{\omega_{0}}|\eta|,\qquad(x_{n},y,\eta)\in[-a,+ \infty[\times\mathbb{R}^{2n} \tag{7.8}\]
_for \(|\eta|\geq A\), \(A\) large enough._
_In addition, \(R_{-\infty}\) is a tangential pseudo-differential operator infinitely smoothing, with symbol \(r_{-\infty}\in\mathcal{C}^{\infty}([-a,+\infty[;S^{-\infty}_{1,0})\)._
Proof.: \((\partial_{x_{n}}-R)(\partial_{x_{n}}+R)=\partial^{2}_{x_{n}}-R\circ R+[ \partial_{x_{n}},R]\), therefore we have to solve
\[R\circ R-[\partial_{x_{n}},R]=K\mod\ \ Op(S^{-\infty}).\]
A classical symbolic calculus then gives
\[R\#R-\partial R/\partial x_{n}=K\mod S^{-\infty}. \tag{7.9}\]
The symbol \(K\) introduced in (7.5) is homogeneous of order 2. Therefore we will look for a classical symbol \(R\), i.e as an asymptotic sum \(R\sim\sum_{j\geq 0}r_{(1-j)}\), where \(r_{(1-j)}=r_{(1-j)}(x_{n},y,\eta)\) is homogeneous of order \(1-j\) and smooth with respect to \(x_{n}\).
We recall that if \(a_{1},a_{2}\) are two symbols belonging respectively to \(S^{m_{1}}_{1,0}\) and \(S^{m_{2}}_{1,0}\), then one has
\[a_{1}\#a_{2}\sim\sum_{\alpha}\frac{1}{\alpha!}\partial^{\alpha}_{\eta}a_{1}D^ {\alpha}_{y}a_{2}.\]
Consequently, equation (7.9) yields, at order 2, 1 and 0 respectively,
\[\left\{\begin{array}{c}r_{1}^{2}=K\\ \\ 2r_{0}r_{1}+\sum_{|\alpha|=1}\partial^{\alpha}_{\eta}r_{1}D^{\alpha}_{y}r_{1}- \partial r_{1}/\partial x_{n}=0\\ \\ 2r_{-1}r_{1}+\sum_{|\alpha|=2}\frac{1}{\alpha!}\partial^{\alpha}_{\eta}r_{1}D^{ \alpha}_{y}r_{1}+\sum_{|\alpha|=1}\partial^{\alpha}_{\eta}r_{1}D^{\alpha}_{y}r_ {0}-\partial r_{0}/\partial x_{n}=0,\end{array}\right.\]
and more generally, for \(j\geq 1\)
\[2r_{1-j}r_{1}-F_{j}(r_{1},r_{0},....,r_{1-(j-1)})=0,\]
where \(F_{j}\) is an homogeneous symbol of order \(2-j\), depending on \(r_{k},\ k\in\{2-j,...,0,1\}\).
We choose
\[r_{1}=K^{1/2}\quad\mbox{for}\quad|\eta|\geq 1\]
and for \(j\geq 1\)
\[r_{1-j}=\frac{1}{2}r_{1}^{-1}F_{j}(r_{1},r_{0},....,r_{1-(j-1)})\quad\mbox{ for}\quad|\eta|\geq 1\]
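For instance, for \(j=1\) the recursion is just a rewriting of the order-one equation above and gives the first corrector explicitly:

\[r_{0}=\frac{1}{2r_{1}}\Big{(}\partial_{x_{n}}r_{1}-\sum_{|\alpha|=1}\partial_{\eta}^{\alpha}r_{1}\,D_{y}^{\alpha}r_{1}\Big{)}\quad\mbox{for}\quad|\eta|\geq 1.\]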
It's classical that the asymptotic sum \(\sum_{j\geq 0}r_{(1-j)}\) provides the answer (see Alinhac-Gerard [1, Chapter 1]). In addition, one can check that \(R(x_{n},y,\eta)\in\mathcal{C}^{\infty}([-a,+\infty[;S^{1}_{1,0}(\mathbb{R}^{2n}))\). In particular, notice that \(R(x_{n},y,\eta)\approx|\eta|\) for \(x_{n}\geq a\) and \(|\eta|\geq 1\).
We study now a pseudo-differential initial value system generated by this symbol \(R\).
We recall that the symbol \(R(x_{n},y,\eta)\) is uniformly elliptic of order one, see (7.8).
**Proposition 7.4**.: _Assume that \(R(x_{n},y,\eta)\gtrsim C_{\omega_{0}}|\eta|\), \(|\eta|>A\). Then there exists a tangential pseudo-differential operator \(R_{-\infty}\in Op(S^{-\infty}(\mathbb{R}^{n}))\) such that for every \(v_{0}\in L^{2}(\mathbb{R}^{n})\), the system_
\[\left\{\begin{array}{c}\frac{\partial v}{\partial x_{n}}+Rv=R_{-\infty}v \qquad\mbox{in}\quad\{x_{n}>0\}\\ \\ v(0,y)=v_{0}\end{array}\right. \tag{7.10}\]
_admits a unique solution \(v(x_{n},.)\in\mathcal{C}^{0}(\mathbb{R}^{+},L^{2}(\mathbb{R}^{n}))\). In addition, for every \(B>0\) we have:_
\[C_{\omega_{0}}\int_{0}^{B}\|v(x_{n},.)\|^{2}_{H^{1/2}(\mathbb{R}^{n})}dx_{n} \lesssim\|v_{0}\|^{2}_{L^{2}(\mathbb{R}^{n})}. \tag{7.11}\]
Proof.: The existence of a solution in \(L^{2}\) is classical. Choose \(R_{-\infty}=R_{-\infty}(D_{y})\) with positive symbol \(r_{-\infty}(\eta)\in\mathcal{C}_{0}^{\infty}(\mathbb{R}^{n})\), equal to \(1\) on \(\{|\eta|\leq A\}\). The operator \(R+R_{-\infty}\) is then uniformly elliptic and one can use for instance classical results of [20]. To prove the smoothing property of (7.11), it suffices to work with functions of \(\mathscr{S}(\mathbb{R}^{n+1})\). Pick \(\varphi\in\mathcal{C}^{\infty}(\mathbb{R}_{+},\mathbb{R}_{+})\) a decreasing function such that \(\varphi(0)=1\). Multiplying the equation by \(\varphi(x_{n})\overline{v}\) and integrating, we get for \(B>0\)
\[\varphi(B)\|v(B,.)\|^{2}_{L^{2}}+\int_{0}^{B}\mathrm{Re}\left((2\varphi R- \varphi^{\prime}+R_{-\infty})v,\overline{v}\right)_{L^{2}}\!dx_{n}=\|v_{0}\|^{ 2}_{L^{2}}.\]
This yields the desired result thanks to the Gårding inequality (see [1, Chapter I]), by taking \((-\varphi^{\prime})\) large enough.
**Proposition 7.5**.: _For every \(s\in\mathbb{R}\) and \(v_{0}\in H^{s}(\mathbb{R}^{n})\), system (7.10) admits a unique solution \(v(x_{n},.)\in\mathcal{C}^{0}(\mathbb{R}^{+},H^{s}(\mathbb{R}^{n}))\). In addition, for \(B>0\) we have :_
\[C_{\omega_{0}}\int_{0}^{B}\|v(x_{n},.)\|^{2}_{H^{s+1/2}(\mathbb{R}^{n})}dx_{n} \lesssim\|v_{0}\|^{2}_{H^{s}(\mathbb{R}^{n})}. \tag{7.12}\]
Proof.: Consider the symbol \(K_{s}(\eta)=(1+|\eta|^{2})^{s/2}\) and denote by \(K_{s}(D_{y})\) the corresponding tangential pseudo-differential operator. One has
\[\partial_{x_{n}}K_{s}v+RK_{s}v=[K_{s},R]v+R_{-\infty}v=M_{s}v,\]
where \(M_{s}\) is a tangential pseudo-differential operator of order \(\leq s\). Multiplying this equation by \(\varphi(x_{n})K_{s}\overline{v}\) and integrating, we get
\[\varphi(B)\|K_{s}v(B,.)\|^{2}_{L^{2}}+\int_{0}^{B}\mathrm{Re}\left((2\varphi R -\varphi^{\prime}-2\varphi M_{s}K_{-s})K_{s}v,K_{s}\overline{v}\right)_{L^{2} }\!dx_{n}=\|K_{s}v_{0}\|^{2}_{L^{2}}.\]
The end of the proof is then similar to the previous one.
In the following lemma, we study the behavior of solutions to system (7.10) under the action of a \(0\)-order tangential pseudo-differential operator.
**Lemma 7.6**.: _Consider a smooth family of tangential pseudo-differential operators \(M(x_{n},y,D_{y})\), of order \(0\). Then for every \(v_{0}\in H^{s}(\mathbb{R}^{n})\), the solution \(v\) of system (7.10) satisfies for \(B>0\)_
\[C_{\omega_{0}}^{3}\int_{0}^{B}\|Mv(x_{n},.)\|^{2}_{H^{s+1/2}(\mathbb{R}^{n})}dx _{n}\lesssim C_{\omega_{0}}^{2}\|M_{0}v_{0}\|^{2}_{H^{s}(\mathbb{R}^{n})}+\|v _{0}\|^{2}_{H^{s-1}(\mathbb{R}^{n})}. \tag{7.13}\]
_Here we denoted \(M_{0}=M(0,y,D_{y})\)._
Proof.: If \(v\) is a solution of system (7.10), \(M(x_{n},y,D_{y})v\) then satisfies
\[\partial_{x_{n}}(Mv)+RMv=[R,M]v-[\partial_{x_{n}},M]v+MR_{-\infty}v=\tilde{M}v \qquad\text{in}\quad\{x_{n}>0\}\]
where \(\tilde{M}\) is a tangential pseudo-differential operator of order \(0\). Arguing then as in the proof of Proposition 7.4, we obtain
\[\left\{\begin{aligned} C_{\omega_{0}}\int_{0}^{B}\|Mv(x_{n},.)\|_{H^{s+ 1/2}(\mathbb{R}^{n})}^{2}dx_{n}\lesssim\|M_{0}v_{0}\|_{H^{s}(\mathbb{R}^{n})}^{ 2}\\ +\int_{0}^{B}\|\tilde{M}v(x_{n},.)\|_{H^{s-1/2}(\mathbb{R}^{n})} \|Mv(x_{n},.)\|_{H^{s+1/2}(\mathbb{R}^{n})}dx_{n}\\ \lesssim\|M_{0}v_{0}\|_{H^{s}(\mathbb{R}^{n})}^{2}+2C_{\omega_{0} }^{-1}\int_{0}^{B}\|\tilde{M}v(x_{n},.)\|_{H^{s-1/2}(\mathbb{R}^{n})}^{2}dx_{n }+1/2\,C_{\omega_{0}}\int_{0}^{B}\|Mv(x_{n},.)\|_{H^{s+1/2}(\mathbb{R}^{n})}^{ 2}dx_{n}\end{aligned}\right. \tag{7.14}\]
Therefore
\[\left\{\begin{aligned} C_{\omega_{0}}\int_{0}^{B}\|Mv(x_{n},.)\|_{H ^{s+1/2}(\mathbb{R}^{n})}^{2}dx_{n}\lesssim\|M_{0}v_{0}\|_{H^{s}(\mathbb{R}^ {n})}^{2}+C_{\omega_{0}}^{-1}\int_{0}^{B}\|v(x_{n},.)\|_{H^{s-1/2}(\mathbb{R}^ {n})}^{2}dx_{n}\\ \lesssim\|M_{0}v_{0}\|_{H^{s}(\mathbb{R}^{n})}^{2}+C_{\omega_{0} }^{-2}\|v_{0}\|_{H^{s-1}(\mathbb{R}^{n})}^{2}.\end{aligned}\right. \tag{7.15}\]
according to (7.12). This completes the proof of Lemma 7.6.
To end this section, we apply these results to our initial problem, making the link between the solutions of the global pseudo-differential system (7.10) and those of the wave system (1.1).
Remind that \(\omega_{0}=(y_{0},\eta_{0})=(t_{0},x_{0}^{\prime},\tau_{0},\xi_{0}^{\prime})\), \(t_{0}>0\), is an elliptic point of \(T^{*}(\partial\mathcal{L})\). First, we consider a family of tangential symbols
\[\psi(x_{n},y,\eta)=\psi_{0}(x_{n})\lambda_{0}(y,\eta)\in\mathcal{C}^{\infty}(\mathbb{R}_{+};S^{0}_{1,0}(\mathbb{R}^{2n})) \tag{7.16}\]
such that \(\psi_{0}=1\) near \(0\), \(\lambda_{0}\equiv 1\) microlocally near \(\omega_{0}\), and \(\operatorname{supp}(\psi)\subset\{\chi=1\}\) where \(\chi\) is the symbol introduced in (7.4).
**Lemma 7.7**.: _For \(v_{0}\in L^{2}(\mathbb{R}^{n})\), let \(v(x_{n},.)\) be the associated solution of system (7.10). Then the function \(w=\psi(x_{n},y;D_{y})v\) satisfies the wave equation_
\[P_{A}w=[\partial_{x_{n}}^{2}-K,\psi]v+R_{-\infty}v\quad\text{in}\quad\{x_{n}>0\} \tag{7.17}\]
_where \(R_{-\infty}\) is a smooth family of tangential pseudo-differential operators, infinitely smoothing._
Proof.: According to the factorization of Proposition 7.3 and to (7.10), we have
\[(\partial_{x_{n}}^{2}-K)w=[\partial_{x_{n}}^{2}-K,\psi]v+\psi(\partial_{x_{n}} ^{2}-K)v=[\partial_{x_{n}}^{2}-K,\psi]v+R_{-\infty}v. \tag{7.18}\]
Moreover, thanks to the design of the symbols \(\chi\) and \(\psi\), the symbolic calculus gives
\[\chi(x_{n},y;D_{y})\circ\psi(x_{n},y;D_{y})=\psi(x_{n},y;D_{y})+R_{-\infty},\]
hence
\[\Big{(}1-\chi(x_{n},y;D_{y})\Big{)}\circ\psi(x_{n},y;D_{y})=R_{-\infty}.\]
We then deduce that,
\[(\partial_{x_{n}}^{2}-K)w=\Big{(}\partial_{x_{n}}^{2}+r\chi-C(D_{t}^{2}+D_{x^ {\prime}}^{2})(1-\chi)\Big{)}\psi v=P_{A}\psi v+R_{-\infty}v. \tag{7.19}\]
Therefore
\[P_{A}w=[\partial_{x_{n}}^{2}-K,\psi]v+R_{-\infty}v\quad\text{in}\quad\{x_{n}> 0\}. \tag{7.20}\]
### A family of concentrated data
Consider \(\omega_{0}=(y_{0},\eta_{0})=(t_{0},x^{\prime}_{0},\tau_{0},\xi^{\prime}_{0})\) an elliptic point of \(T^{*}(\mathbb{R}^{n})\). For \(\varepsilon>0\), take a solution \(v\) of system (7.10) with boundary data \(v_{0\varepsilon}\) given by
\[v_{0\varepsilon}(y)=\varepsilon^{-n/4}exp\Big{(}\frac{i}{\varepsilon}\Big{[}(y -y_{0}).\eta_{0}\Big{]}\Big{)}exp\Big{(}-\frac{|y-y_{0}|^{2}}{\varepsilon} \Big{)} \tag{7.21}\]
**Lemma 7.8**.: _For \(v_{0\varepsilon}\) given above, we have_
\[\|v_{0\varepsilon}\|_{H^{s}}\sim\varepsilon^{-s},\qquad\text{for}\quad \varepsilon\to 0^{+}\quad\text{and}\quad\ s\in\mathbb{R}. \tag{7.22}\]
_In addition, if \(\lambda=\lambda(y;\eta)\in S^{k}_{1,0}(\mathbb{R}^{2n}),k\in\mathbb{R}\), is a tangential pseudo-differential symbol such that \(\omega_{0}=(y_{0},\eta_{0})\notin\text{supp}(\lambda)\), we have for every \(0\leq s\leq s^{\prime}\)_
\[\|\lambda(y;D_{y})v_{0\varepsilon}\|_{H^{s}}=o(\varepsilon^{s^{\prime}}) \quad\text{for}\quad\varepsilon\to 0^{+}. \tag{7.23}\]
**Remark 7.9**.: _Actually, the sequence \((v_{0\varepsilon})_{\varepsilon}\) weakly converges to \(0\) in \(L^{2}(\mathbb{R}^{n})\). Moreover, we can see that it admits a microlocal defect measure given by \(\mu(v_{0\varepsilon})=\delta_{(y_{0},\eta_{0}/|\eta_{0}|)}\)._
Proof.: For the sake of simplicity, we will work in \(\mathbb{R}^{n}\) equipped with its usual Euclidean coordinate system, and assume that \(y_{0}=0\). More precisely, for a given \(\xi_{0}\in\mathbb{R}^{n}\setminus 0\), we set
\[f_{\varepsilon}(x)=\varepsilon^{-n/4}exp\Big{(}\frac{i}{\varepsilon}x.\xi_{0} \Big{)}exp\Big{(}-\frac{|x|^{2}}{\varepsilon}\Big{)}.\]
Estimate (7.22) is obvious by direct computation. As for (7.23), it is a classical fact of basic microlocal analysis, and we detail this point for the convenience of the reader. First, we notice that it's enough to prove the result for \(k=0\). Also, without loss of generality, we may assume that the pseudo-differential symbol is of the form \(\lambda(x,\xi)=\psi(\xi)\varphi(x)\), where \(\psi(\xi)\) is homogeneous of order \(0\) for \(|\xi|\geq 1\) and supported outside a small conical neighborhood of \(\xi_{0}\). Moreover, we take \(\varphi\in\mathcal{C}_{0}^{\infty}(\mathbb{R}^{n})\), supported near the origin. In this setting, the Fourier transform of \(g_{\varepsilon}=\psi(D)\varphi f_{\varepsilon}\) reads as follows
\[\left\{\begin{array}{c}\mathcal{F}g_{\varepsilon}(\xi)=\varepsilon^{-n/4} \psi(\xi)\int exp\Big{(}-ix.(\xi-\varepsilon^{-1}\xi_{0})\Big{)}\varphi(x) exp\Big{(}-\varepsilon^{-1}|x|^{2}\Big{)}dx\\ =\varepsilon^{-n/4}\psi(\xi)\Big{(}\mathcal{F}(\varphi)*\mathcal{F}(exp(- \varepsilon^{-1}|\,.\,|^{2})\Big{)}(\xi-\varepsilon^{-1}\xi_{0})\\ =\pi^{n/2}\varepsilon^{n/4}\psi(\xi)\Big{(}\mathcal{F}(\varphi)*( exp(-\frac{\varepsilon}{4}|\,.\,|^{2})\Big{)}(\xi-\varepsilon^{-1}\xi_{0})=\pi^{n/2} \varepsilon^{n/4}\psi(\xi)(I_{1}+I_{2})(z)\end{array}\right. \tag{7.24}\]
where we denoted \(z=\xi-\varepsilon^{-1}\xi_{0}\), and
\[I_{1}=\int_{|\eta|\leq|z|/2}\mathcal{F}(\varphi)(\eta)exp(-\frac{\varepsilon} {4}|z-\eta|^{2})d\eta,\qquad I_{2}=\int_{|\eta|\geq|z|/2}\mathcal{F}(\varphi)( \eta)exp(-\frac{\varepsilon}{4}|z-\eta|^{2})d\eta.\]
In \(I_{1}\), \(|z-\eta|\geq|z|/2\geq c(|\xi|+\varepsilon^{-1}|\xi_{0}|)\) according to the support condition of the symbol \(\psi\).
Therefore \(|z-\eta|\geq c|\xi|^{1/4}\varepsilon^{-3/4}|\xi_{0}|^{3/4}\), which yields
\[|I_{1}|\leq c\,exp\Big{(}-c\,\varepsilon^{-1/2}|\xi_{0}|^{3/2}|\xi|^{1/2}\Big{)} \int|\mathcal{F}(\varphi)(\eta)|d\eta\leq C_{s}\varepsilon^{s}\langle\xi \rangle^{-s} \tag{7.25}\]
for every \(s>0\), \(|\xi|\geq 1\).
For \(I_{2}\), we write
\[\left\{\begin{array}{l}|I_{2}|\leq\int_{|\eta|\geq|z|/2}|\mathcal{F}(\varphi)( \eta)|d\eta\\ \\ \leq c_{k}(1+|z|)^{-k}\int_{|\eta|\geq|z|/2}|\mathcal{F}(\varphi)(\eta)|(1+|\eta |)^{k}d\eta\leq c_{k}(1+|z|)^{-k}\end{array}\right. \tag{7.26}\]
since \(\mathcal{F}(\varphi)\) lies in \(\mathcal{S}(\mathbb{R}^{n})\). Arguing then as above, we obtain for \(I_{2}\) an estimate similar to (7.25), which yields in turn
\[|\mathcal{F}g_{\varepsilon}(\xi)|\leq C_{s}^{\prime}\,|\psi(\xi)|\,\varepsilon^{s+n/4}\langle\xi\rangle^{-s} \tag{7.27}\]
for every \(s>0\), \(|\xi|\geq 1\).
Finally, we replace in this last estimate \(s\) by \(s^{\prime}+n\) with \(s^{\prime}\geq s\). We then get
\[\langle\xi\rangle^{s}|\mathcal{F}g_{\varepsilon}(\xi)|\leq C_{s^{\prime}}\,|\psi(\xi)|\,\varepsilon^{s^{\prime}+5n/4}\,\langle\xi\rangle^{s-s^{\prime}-n},\]
and this gives the desired estimate.
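For completeness, here is a brief sketch of the direct computation behind (7.22), up to dimensional constants and with the same Fourier conventions as in (7.24): one has
\[\mathcal{F}f_{\varepsilon}(\xi)=\pi^{n/2}\varepsilon^{n/4}exp\Big{(}-\frac{\varepsilon}{4}|\xi-\varepsilon^{-1}\xi_{0}|^{2}\Big{)},\]
so that, substituting \(\xi=\varepsilon^{-1}\xi_{0}+\varepsilon^{-1/2}\eta\),
\[\|f_{\varepsilon}\|_{H^{s}}^{2}\approx\varepsilon^{n/2}\int\langle\xi\rangle^{2s}exp\Big{(}-\frac{\varepsilon}{2}|\xi-\varepsilon^{-1}\xi_{0}|^{2}\Big{)}d\xi=\int\langle\varepsilon^{-1}\xi_{0}+\varepsilon^{-1/2}\eta\rangle^{2s}exp\Big{(}-\frac{|\eta|^{2}}{2}\Big{)}d\eta\sim\varepsilon^{-2s},\]
since \(\langle\varepsilon^{-1}\xi_{0}+\varepsilon^{-1/2}\eta\rangle\approx\varepsilon^{-1}|\xi_{0}|\) on the region where the Gaussian weight is not negligible.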
In the sequel, without loss of generality, we assume that the ellipticity constant \(C_{\omega_{0}}\) of the pseudo-differential operator \(R\) introduced in Proposition 7.4 satisfies \(C_{\omega_{0}}\leq 1\).
**Corollary 7.10**.: _The function \(F_{\varepsilon}=[\partial_{x_{n}}^{2}-K,\psi]v_{\varepsilon}+R_{-\infty}v_{\varepsilon}\), i.e., the right-hand side of (7.20), satisfies for \(B>0\) and \(\varepsilon\) small enough_
\[\int_{0}^{B}\|F_{\varepsilon}(x_{n},.)\|_{L^{2}(\mathbb{R}^{n})}^{2}dx_{n} \lesssim C_{\omega_{0}}^{-3}\varepsilon\quad\text{for }\quad\varepsilon\to 0^{+}. \tag{7.28}\]
Proof.: We compute
\[\left\{\begin{array}{l}F_{\varepsilon}(x_{n},.)=(\partial_{x_{n}}^{2}\psi)v_{\varepsilon}+2(\partial_{x_{n}}\psi)\partial_{x_{n}}v_{\varepsilon}-[K,\psi]v_{\varepsilon}+R_{-\infty}v_{\varepsilon}\\ \\ =\Big{(}(\partial_{x_{n}}^{2}\psi)-2(\partial_{x_{n}}\psi)R\Big{)}v_{\varepsilon}-\psi_{0}(x_{n})[K,\lambda_{0}]v_{\varepsilon}+R_{-\infty}v_{\varepsilon}\\ \\ =M_{1}v_{\varepsilon}+M_{2}v_{\varepsilon}+R_{-\infty}v_{\varepsilon}\end{array}\right. \tag{7.29}\]
Notice that \(M_{1}\) is a tangential pseudo-differential operator of order \(1\) whose symbol vanishes near \(x_{n}=0\), and in \(M_{2}\), the symbol \(\sigma([K,\lambda_{0}])\) is of order one and vanishes near \(\omega_{0}\).
First, according to (7.8), (7.12) and (7.22), we can write
\[\int_{0}^{B}\|R_{-\infty}v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}\lesssim \int_{0}^{B}\|v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}\lesssim C_{\omega_{ 0}}^{-1}\|v_{0\varepsilon}\|_{H^{-1/2}}^{2}\lesssim C_{\omega_{0}}^{-1}\varepsilon \tag{7.30}\]
Secondly, \(M_{1}=(1+|D_{y}|)\Big{(}(1+|D_{y}|)^{-1}M_{1}\Big{)}\). Applying then (7.13) to \((1+|D_{y}|)^{-1}M_{1}\) with \(s=1/2\), we get
\[\int_{0}^{B}\|M_{1}v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}=\int_{0}^{B}\|(1+|D_{y}|)^{-1}M_{1}v_{\varepsilon}(x_{n},.)\|_{H^{1}}^{2}dx_{n}\lesssim C_{\omega_{0}}^{-3}\|v_{0\varepsilon}\|_{H^{-1/2}}^{2} \tag{7.31}\]
since \(M_{1}\) vanishes near \(\{x_{n}=0\}\). Therefore, taking into account (7.22), we get
\[\int_{0}^{B}\|M_{1}v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}\lesssim C_{ \omega_{0}}^{-3}\varepsilon \tag{7.32}\]
Finally, we use the same argument for the last term \(M_{2}v_{\varepsilon}\).
\[\int_{0}^{B}\|M_{2}v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}=\int_{0}^{B}\|(1 +|D_{y}|)^{-1}M_{2}v_{\varepsilon}(x_{n},.)\|_{H^{1}}^{2}dx_{n} \tag{7.33}\]
\[\lesssim C_{\omega_{0}}^{-1}\|(1+|D_{y}|)^{-1}M_{2}v_{0\varepsilon}\|_{H^{1/2}}^{2}+C_{\omega_{0}}^{-3}\|v_{0\varepsilon}\|_{H^{-1/2}}^{2} \tag{7.34}\]
Recalling that \(M_{2}\) vanishes near \(\omega_{0}\), estimate (7.23) yields for all \(s^{\prime}>0\)
\[\int_{0}^{B}\|M_{2}v_{\varepsilon}(x_{n},.)\|_{L^{2}}^{2}dx_{n}\lesssim C_{ \omega_{0}}^{-1}\varepsilon^{s^{\prime}}+C_{\omega_{0}}^{-3}\varepsilon \tag{7.35}\]
Taking then \(s^{\prime}=1\) and using (7.30), (7.32) and (7.35), and the fact that \(C_{\omega_{0}}\leq 1\), we get the result.
### Application to the lack of observability
We recall the notation \(\Gamma_{M}=(0,M)\times O\) and \(\Gamma^{\prime}_{M+T}=(0,M+T)\times O^{\prime}\), where \(O\) and \(O^{\prime}\) are two nonempty open subsets of \(\partial\Omega\) such that \(\overline{O}\cap\overline{O^{\prime}}=\emptyset\). Let \(m_{0}\in\Gamma_{M}\) and let \(\omega_{0}\in T_{m_{0}}^{*}\partial\mathcal{L}\) be an elliptic point in the sense of (3.2). Let us take a family of tangential pseudo-differential symbols \(\psi\), as introduced for Lemma 7.7, supported near \(\omega_{0}\), and with small space-time compact support near \(m_{0}\). More precisely, if \(m_{0}=(t_{0}>0,x_{0})\), we assume \(supp_{(t,x)}(\psi)\subset]t_{0}-\rho,t_{0}+\rho[\times U_{x_{0}}\), with \(\rho>0\) small and \(U_{x_{0}}\) a small neighborhood of \(x_{0}\) in \(\mathbb{R}^{n}\).
Now, in a local system of geodesic coordinates near \(m_{0}\), we have \(\Omega\cap U_{x_{0}}=\{x,\ x_{n}>0\}\), and in addition, the support property of \(\psi\) can be interpreted in the following sense
\[supp_{(t,x)}(\psi)\subset\{(t,x^{\prime},x_{n}),\ x_{n}\leq\alpha\}:=U_{m_{0}} ^{\alpha} \tag{7.36}\]
for some \(\alpha\) small enough. In particular, if \(v\) is a solution of system (7.4), it is defined on the whole half-space \(\{(t,x)=(t,x^{\prime},x_{n}),\ x_{n}\geq 0\}\) and, in geodesic coordinates, the function \(w=\psi v\) satisfies
\[supp(w)\cap\mathcal{L}\subset U_{m_{0}}^{\alpha}\cap\mathcal{L}. \tag{7.37}\]
In addition, we notice that \(\Gamma^{\prime}_{M+T}\subset\overline{\mathcal{L}_{M+T}\setminus U_{m_{0}}^{\alpha}}\).
Finally, we consider the family of data \(v_{0\varepsilon}\) introduced in (7.21), with \(v_{\varepsilon}\) the associated solution, and we consider the wave system
\[\left\{\begin{array}{c}P_{A}h_{\varepsilon}=P_{A}w_{\varepsilon}=F_{ \varepsilon}\quad\text{in }\mathcal{L},\\ h_{\varepsilon}(t,.)=0\quad\text{on }\partial\mathcal{L},\\ h_{\varepsilon}(0,.)=\partial_{t}h_{\varepsilon}(0,.)=0\quad\text{in } \Omega,\end{array}\right. \tag{7.38}\]
where \(w_{\varepsilon}=\psi v_{\varepsilon}\) is the function introduced in Lemma 7.7. We set \(u_{\varepsilon}=h_{\varepsilon}-w_{\varepsilon}=h_{\varepsilon}-\psi v_{\varepsilon}\). Recalling that the symbol of the pseudo-differential operator \(\psi\) is supported in space-time near \(m_{0}=(t_{0}>0,x_{0})\), we have
\[\left\{\begin{array}{c}P_{A}u_{\varepsilon}=0\quad\text{in }\mathcal{L},\\ u_{\varepsilon}(t,.)=-\psi v_{0\varepsilon}(t,.)\quad\text{on }\partial \mathcal{L},\\ u_{\varepsilon}(0,.)=\partial_{t}u_{\varepsilon}(0,.)=0\quad\text{in }\Omega.\end{array}\right. \tag{7.39}\]
Notice in particular that \(u_{\varepsilon|\Gamma}=-\psi v_{\varepsilon|\Gamma}\) and \(u_{\varepsilon|(\partial\mathcal{L}\setminus\Gamma)}=0\). Using now the classical multiplier method of J.-L. Lions for system (7.39) and the hyperbolic energy estimate for system (7.38), we derive
\[\left\{\begin{array}{l}\|\partial_{n}u_{\varepsilon}\|^{2}_{L^{2}(\Gamma^{\prime}_{M+T})}\leq C\int_{\mathcal{L}_{M+T}\setminus U^{\alpha}_{m_{0}}}|\nabla_{t,x}u_{\varepsilon}|^{2}dxdt\\ \leq C\int_{\mathcal{L}_{M+T}\setminus U^{\alpha}_{m_{0}}}|\nabla_{t,x}h_{\varepsilon}|^{2}dxdt+C\int_{\mathcal{L}_{M+T}\setminus U^{\alpha}_{m_{0}}}|\nabla_{t,x}w_{\varepsilon}|^{2}dxdt\leq C\int_{\mathcal{L}_{M+T}}|\nabla_{t,x}h_{\varepsilon}|^{2}dxdt\end{array}\right. \tag{7.40}\]
thanks to the support condition (7.36). Therefore, according to the hyperbolic energy estimate,
\[\|\partial_{n}u_{\varepsilon}\|^{2}_{L^{2}(\Gamma^{\prime}_{M+T})}\leq C\|F_{\varepsilon}\|^{2}_{L^{2}((0,M+T)\times\Omega)}.\]
Thus, using (7.28) with \(B=M+T\), we obtain
\[\|\partial_{n}u_{\varepsilon}\|^{2}_{L^{2}(\Gamma^{\prime}_{M+T})}\lesssim C^{-3}_{\omega_{0}}\varepsilon. \tag{7.41}\]
### End of the proof of Theorem 7.1
Here we continue with the notation of Section 7.1. Let \(\omega_{0}=(t_{0},x^{\prime}_{0},\tau_{0},\xi^{\prime}_{0})\), \(t_{0}>0\), be a glancing point of \(T^{*}(\partial\mathcal{L})\), that is, \(r_{0}(x^{\prime}_{0},\tau_{0},\xi^{\prime}_{0})=0\). For \(\nu\in]0,1/2[\), consider the sequence \(\omega_{\nu}=(t_{0},x^{\prime}_{0},\tau_{\nu},\xi^{\prime}_{\nu})=(t_{0},x^{\prime}_{0},(1-\nu)\tau_{0},(1+\nu)\xi^{\prime}_{0})\). We have
\[-r_{0}(\omega_{\nu})=2\nu\Big{(}\tau_{0}^{2}+\sum_{1\leq i,j\leq n-1}a_{ij}(x^{ \prime}_{0},0)\xi^{\prime}_{0i}\xi^{\prime}_{0j}\Big{)}\geq c\nu(\tau_{\nu}^{ 2}+|\xi^{\prime}_{\nu}|^{2}) \tag{7.42}\]
where the constant \(c>0\) depends only on \(\omega_{0}\) and the metric \((a_{ij}(x))\). In particular, as \(\nu\to 0\), \((\omega_{\nu})\) is a sequence of elliptic points in \(T^{*}(\partial\mathcal{L})\) converging to the glancing point \(\omega_{0}\). Now, for fixed \(\nu\in]0,1/2[\), we follow all the arguments developed in Sections 7.1 to 7.4 above: factorization of the wave symbol in a microlocal neighborhood of \(\omega_{\nu}\), resolution of a global pseudo-differential system of order 1, and so on. We can then construct a sequence of solutions \(u^{\nu}_{\varepsilon}\) to the wave equation
\[\left\{\begin{array}{l}P_{A}u^{\nu}_{\varepsilon}=0\quad\text{in }\mathcal{L},\\ u^{\nu}_{\varepsilon}(t,.)=-\psi v^{\nu}_{0\varepsilon}(t,.)\quad\text{on } \partial\mathcal{L},\\ u^{\nu}_{\varepsilon}(0,.)=\partial_{t}u^{\nu}_{\varepsilon}(0,.)=0\quad \text{in }\Omega.\end{array}\right. \tag{7.43}\]
Obviously, Lemma 7.8 still reads
\[\|v^{\nu}_{0\varepsilon}\|_{H^{s}}\sim\varepsilon^{-s},\qquad\text{for}\quad \varepsilon\to 0^{+},\quad\text{and}\quad\ s\in\mathbb{R}, \tag{7.44}\]
uniformly with respect to \(\nu\), and the ellipticity constant \(C_{\omega_{\nu}}\) is now given by
\[C_{\omega_{\nu}}\approx\nu^{1/2}, \tag{7.45}\]
thanks to (7.42). Let us choose \(\nu=\varepsilon^{s}\). Thus we get a sequence of data \((v^{\varepsilon^{s}}_{0\varepsilon})\) converging weakly to \(0\) in \(L^{2}(\mathbb{R}^{n})\), of norm 1, and with a microlocal defect measure given by \(\mu(v^{\varepsilon^{s}}_{0\varepsilon})=\delta_{(y_{0},\eta_{0}/|\eta_{0}|)}\), which is precisely the Dirac mass at the limit glancing point.
Furthermore, estimate (7.41) now takes the following form
\[\|\partial_{n}(u^{\varepsilon^{s}}_{\varepsilon})\|^{2}_{L^{2}(\Gamma^{\prime}_{M+T})}\lesssim C^{-3}_{\omega_{\nu}}\varepsilon\lesssim\varepsilon^{1-3s/2}. \tag{7.46}\]
Comparing then with \(\|v^{\varepsilon^{s}}_{0\varepsilon}\|_{H^{s}}\sim\varepsilon^{-s}\), we obtain a contradiction for \(s>-2\).
The proof of Theorem 7.1 is complete. |
2302.00316 | Accelerated First-Order Optimization under Nonlinear Constraints | We exploit analogies between first-order algorithms for constrained
optimization and non-smooth dynamical systems to design a new class of
accelerated first-order algorithms for constrained optimization. Unlike
Frank-Wolfe or projected gradients, these algorithms avoid optimization over
the entire feasible set at each iteration. We prove convergence to stationary
points even in a nonconvex setting and we derive accelerated rates for the
convex setting both in continuous time, as well as in discrete time. An
important property of these algorithms is that constraints are expressed in
terms of velocities instead of positions, which naturally leads to sparse,
local and convex approximations of the feasible set (even if the feasible set
is nonconvex). Thus, the complexity tends to grow mildly in the number of
decision variables and in the number of constraints, which makes the algorithms
suitable for machine learning applications. We apply our algorithms to a
compressed sensing and a sparse regression problem, showing that we can treat
nonconvex $\ell^p$ constraints ($p<1$) efficiently, while recovering
state-of-the-art performance for $p=1$. | Michael Muehlebach, Michael I. Jordan | 2023-02-01T08:50:48Z | http://arxiv.org/abs/2302.00316v2 | # Accelerated First-Order Optimization under Nonlinear Constraints
###### Abstract
We exploit analogies between first-order algorithms for constrained optimization and nonsmooth dynamical systems to design a new class of accelerated first-order algorithms for constrained optimization. Unlike Frank-Wolfe or projected gradients, these algorithms avoid optimization over the entire feasible set at each iteration. We prove convergence to stationary points even in a nonconvex setting and we derive rates for the convex setting. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions, which naturally leads to sparse, local and convex approximations of the feasible set (even if the feasible set is nonconvex). Thus, the complexity tends to grow mildly in the number of decision variables and in the number of constraints, which makes the algorithms suitable for machine learning applications. We apply our algorithms to a compressed sensing and a sparse regression problem, showing that we can treat nonconvex \(\ell^{p}\) constraints (\(p<1\)) efficiently, while recovering state-of-the-art performance for \(p=1\).
## 1 Introduction
Optimization plays an essential role in machine learning by providing a theoretical foundation on which algorithms, systems, and datasets can be brought together at unprecedented scales. The focus in recent years has been on unconstrained optimization and first-order algorithms, as this has sufficed for many applications in pattern recognition. In particular, theoretical work on rates, lower bounds, and choice of step sizes has focused on the unconstrained setting. This is despite the important role that constraints play in applications; indeed, emerging problems in machine learning involve decision-making in the real world, which often includes safety constraints, economic constraints, and constraints arising from the presence of multiple decision-makers. Similarly, control-theoretic problems often involve interactions with physical, biological, and social systems, whose laws are generally expressed in terms of fundamental constraints.
In practice, constraints can sometimes be treated via reparametrizations, which transform the constrained problem into an unconstrained one. Unfortunately, such a reparameterization affects the conditioning and thereby the convergence rates of algorithms, and it might be difficult to find reparameterizations that are computationally efficient. This motivates a nascent trend to focus directly on constrained optimization while retaining the advantages of first-order algorithms for machine learning.
The most prominent first-order methods that treat constraints are projected gradient algorithms and the Frank-Wolfe method. Both involve an inner loop inside of an overall procedure that optimizes over the entire feasible set. While this enables a relatively straightforward convergence analysis that parallels the unconstrained case, the procedure is only efficient if the feasible set has a simple structure, such as a norm ball, a low-dimensional hyperplane, or a probability simplex. If the feasible set fails to enable closed-form projections or closed-form Frank-Wolfe updates, algorithm designers often turn to interior point or sequential quadratic programming methods. These are significantly more complex, rely on second-order information, and their iteration complexity scales less favorably with the problem size.
Our goal in the current paper is to address the need for learning-friendly first-order methods that can handle constraints. We present a new class of first-order methods that are applicable to a wide range of problems in machine learning. An important simplification, compared to Frank-Wolfe or projected gradients, is that these methods rely exclusively on local approximations of the feasible set. While the entire feasible set might be described with a very large (or even infinite) number of _nonlinear_ constraints, these local approximations, which are well-defined for feasible and infeasible points, typically consist of a small number of _linear_ constraints. This substantially reduces the amount of computation required for a single iteration, and results in an expanded range of possible applications in machine learning. We highlight the efficiency of our methods by
including numerical results from a compressed sensing and a sparse regression problem, which include nonconvex \(\ell^{p}\) regularization with \(p<1\). A detailed summary that connects the proposed methods with the literature can be found in App. A.
**Notation and outline:** We consider the following problem:
\[\min_{x\in\mathbb{R}^{n}}f(x),\quad\text{s.t.}\quad g(x)\geq 0, \tag{1}\]
where the function \(f:\mathbb{R}^{n}\to\mathbb{R}\) defines the objective, the function \(g:\mathbb{R}^{n}\to\mathbb{R}^{n_{\text{g}}}\) the constraints, and where \(n\) and \(n_{\text{g}}\) are positive integers. The set of all real numbers is denoted by \(\mathbb{R}\) and the set of all integers by \(\mathbb{Z}\). In order to simplify our exposition, we do not explicitly include equality constraints--these can be treated in a similar way. The function \(f\) is assumed to be such that \(f(x)\to\infty\) for \(|x|\to\infty\), and the set of all \(x\in\mathbb{R}^{n}\) that satisfy \(g(x)\geq 0\) is denoted by \(C\), assumed non-empty and bounded. The boundedness of \(C\) simplifies the exposition; however, if \(C\) were unbounded, the coercivity of \(f\) could be used to add the additional constraint \(f(x)\leq f(x_{0})\), where \(x_{0}\) is a feasible initial condition, at which point \(C\) would be bounded again. The functions \(f\) and \(g\) are continuously differentiable and have a Lipschitz continuous gradient. Combined with the properties of \(C\), this guarantees that the minimum in (1) is attained. Moreover, the indicator function of a closed convex set \(A\subset\mathbb{R}^{n}\) is denoted by \(\psi_{A}\), and the subdifferential of the indicator function at \(x\in A\) is denoted by \(\partial\psi_{A}(x)\).
We note that non-smooth constraints can in many cases be reformulated (or approximated) such that the above assumptions on \(g\) are met. In case of an \(\ell^{p}\)-norm constraint, this leads for example to
\[\sum_{i=1}^{n}|x_{i}|^{p}\leq 1\ \ \Leftrightarrow\ \sum_{i=1}^{n}\bar{x}_{i}^{p}\leq 1,\ \ -\bar{x}_{i}\leq x_{i}\leq\bar{x}_{i},\\ i=1,\ldots,n, \tag{2}\]
where \(\bar{x}_{i}\in\mathbb{R}\), \(i=1,\ldots,n\) are additional decision variables.
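As a small illustration (not taken from the paper), the lifted constraint functions in (2) can be evaluated as follows in Python/NumPy; the helper name and the stacking order of the constraints are our own choices:

```python
import numpy as np

def lp_constraints(x, x_bar, p):
    """Evaluate the smooth reformulation (2) of the l^p-ball constraint.

    Returns the stacked values g(x, x_bar) >= 0:
      1 - sum_i x_bar_i**p   (lifted norm constraint)
      x_bar_i - x_i          (upper envelope)
      x_bar_i + x_i          (lower envelope)
    """
    g_norm = 1.0 - np.sum(x_bar ** p)
    return np.concatenate(([g_norm], x_bar - x, x_bar + x))
```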
We will mostly frame optimization problems in terms of continuous-time dynamical systems, where the equilibria of the dynamics correspond to the stationary points of (1). The continuous-time point of view often provides important (qualitative) intuition, simplifies convergence arguments, and exposes important links to dynamical and mechanical systems. Indeed, the recent line of work pursued by Su et al. (2016), Wibisono et al. (2016), Franca et al. (2020) and others has shown that continuous-time models provide not only a means to understand and derive upper bounds on the iteration complexity of algorithms but also lower bounds (cf. Muehlebach & Jordan, 2020).
The paper is structured in the following way: Sec. 2 summarizes earlier work of Muehlebach & Jordan (2022), which covers gradient descent and sets the stage for discussing momentum-based algorithms in Sec. 3. A variety of convergence results that capture both discrete-time and continuous-time models are presented in Sec. 4. In the nonconvex regime we establish convergence to stationary points and we derive accelerated rates in the convex regime. Sec. 5 presents numerical experiments, which include nonconvex sparse regression and compressed sensing problems. The paper concludes with a short discussion in Sec. 6.
## 2 Constrained Gradient Flow
One of the main ideas in Muehlebach & Jordan (2022) is to express constraints in terms of velocities instead of positions, which naturally leads to local, sparse and convex approximations of the feasible set. We begin with a brief review of this work.
Let us model an optimization algorithm as a continuous-time or discrete-time dynamical system, whose equilibria correspond to the stationary points of (1). In continuous time, the configuration of the system will be denoted by \(x:[0,\infty)\to\mathbb{R}^{n}\), which is assumed to be absolutely continuous. A fundamental observation, lying at the heart of the current research, is that the constraint \(x(t)\in C\), for all \(t\geq 0\), is equivalent to the constraint \(\dot{x}(t)^{+}\in T_{C}(x(t))\), for all \(t\geq 0,x(0)\in C\), where \(T_{C}(x(t))\) denotes the tangent cone (in the sense of Clarke) of the set \(C\) at \(x(t)\in\mathbb{R}^{n}\), and \(\dot{x}(t)^{+}\) denotes the forward velocity: \(\dot{x}(t)^{+}:=\lim_{\mathrm{d}t\downarrow 0}(x(t+\mathrm{d}t)-x(t))/ \mathrm{d}t\). The tangent cone \(T_{C}(x)\) is defined as the set of all vectors \(v\) such that there exist two sequences \(x_{k}\in C\) and \(t_{k}\geq 0\) with \(x_{k}\to x\), \(t_{k}\to 0\) and \((x_{k}-x)/t_{k}\to v\). Provided that a constraint qualification holds (for example Mangasarian-Fromovitz or Abadie constraint qualification), the tangent cone can be expressed as
\[T_{C}(x)=\{v\in\mathbb{R}^{n}\mid\nabla g_{i}(x)^{\mathsf{T}}v\geq 0,\ \ \forall i\in I_{x}\},\]
where \(I_{x}\) denotes the set of active inequality constraints at \(x\); that is, \(i\in I_{x}\) if \(g_{i}(x)\leq 0\).
We therefore conclude that the constraint \(x(t)\in C\), which constrains the position \(x\), is equivalent to a constraint on the forward velocity \(\dot{x}^{+}\). We note that the velocity \(\dot{x}\) is allowed to be discontinuous and may not exist for every \(t\geq 0\).1 For example, if the trajectory \(x\) reaches the boundary of the feasible set, an instantaneous jump of the velocity might be required to ensure that \(x\) remains in \(C\).
Footnote 1: We assume that \(\dot{x}\) is of locally bounded variation, which means that on any compact interval \(\dot{x}\) has countably many discontinuity points, where left and right limits exist.
In discrete time, however, this equivalence between position and velocity constraints no longer holds, since \(T_{C}(x)\) is
only a first-order approximation of the feasible set. Thus, implementing \((x_{k+1}-x_{k})/T\in T_{C}(x_{k})\) may lead to infeasible iterates. Muehlebach & Jordan (2022) therefore suggest to introduce the velocity constraint \(V_{\alpha}(x)\), which is defined as
\[V_{\alpha}(x):=\{v\in\mathbb{R}^{n}\mid\nabla g_{i}(x)^{\mathsf{T}}v+\alpha g _{i}(x)\geq 0,\forall i\in I_{x}\}, \tag{3}\]
and includes the restitution coefficient \(\alpha>0\). The following remarks motivate (3):
1. For \(x\in C\), the set \(V_{\alpha}(x)\) reduces to the tangent cone \(T_{C}(x)\) (assuming constraint qualification).
2. For a fixed \(x\in\mathbb{R}^{n}\), \(V_{\alpha}(x)\) is a convex polyhedral set involving only the active constraints \(I_{x}\). The set \(V_{\alpha}(x)\) therefore amounts to a sparse and linear approximation of the feasible set \(C\), even if \(C\) is nonconvex.
3. In continuous time, the constraint \(\dot{x}(t)^{+}\in V_{\alpha}(x(t))\) for all \(t\geq 0\) implies \[g_{i}(x(t))\geq\min\{g_{i}(x(0))e^{-\alpha t},0\},\] (4) for all \(t\geq 0\) and all \(i\in\{1,\ldots,n_{\texttt{g}}\}\), which can be verified with Gronwall's inequality. This means that potential constraint violations decrease at rate \(\alpha\).
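To make the local approximation (3) concrete, the following minimal sketch (our own, not from the paper) assembles \(V_{\alpha}(x)\) as linear data \(Av\geq b\) from user-supplied callables for \(g\) and its Jacobian:

```python
import numpy as np

def velocity_constraint(x, g, jac_g, alpha):
    """Return (A, b) such that V_alpha(x) = {v : A v >= b}, cf. (3).

    Only the active constraints I_x = {i : g_i(x) <= 0} enter, so the
    approximation is sparse and convex even if the feasible set is not.
    `g` and `jac_g` are assumed to return arrays of shape (n_g,) and (n_g, n).
    """
    gx = g(x)
    Jx = jac_g(x)
    active = gx <= 0.0           # index set I_x
    A = Jx[active]               # rows nabla g_i(x)^T, i in I_x
    b = -alpha * gx[active]      # nabla g_i(x)^T v + alpha g_i(x) >= 0
    return A, b
```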
The continuous-time gradient flow dynamics that were studied in Muehlebach & Jordan (2022) arise from the following conditions:
\[\dot{x}(t)^{+}+\nabla f(x(t))=R(t),\quad-R(t)\in N_{V_{\alpha}(x(t))}(\dot{x} (t)^{+}), \tag{5}\]
for all \(t\geq 0\), where \(N_{V_{\alpha}(x(t))}(\dot{x}(t)^{+})\) denotes the normal cone of the set \(V_{\alpha}(x(t))\) at \(\dot{x}(t)^{+}\). Thus, the variable \(R(t)\) can be regarded as a constraint force that imposes the constraint \(\dot{x}(t)^{+}\in V_{\alpha}(x(t))\). Moreover, for each fixed \(t\geq 0\), we can eliminate \(R(t)\) in (5) and interpret the resulting expression as a stationarity condition with respect to \(\dot{x}(t)^{+}\), which means that (5) is equivalent to
\[\dot{x}(t)^{+}:=\operatorname*{argmin}_{v\in V_{\alpha}(x(t))}\frac{1}{2}|v+ \nabla f(x(t))|^{2}. \tag{6}\]
As long as \(V_{\alpha}(x(t))\) is nonempty, this guarantees the uniqueness of \(\dot{x}(t)^{+}\) for every \(t\geq 0\). It also provides the following natural interpretation: the forward velocity \(\dot{x}(t)^{+}\) is chosen to match the unconstrained gradient flow equation \(\dot{x}(t)^{+}+\nabla f(x(t))=0\) as close as possible, subject to the velocity constraint \(\dot{x}(t)^{+}\in V_{\alpha}(x(t))\).
_Remark 2.1_.: Nonemptiness of \(V_{\alpha}(x)\): If \(C\) is convex, \(V_{\alpha}(x)\) is guaranteed to be nonempty for all \(x\in\mathbb{R}^{n}\). If \(C\) is nonconvex, nonemptiness of \(V_{\alpha}(x)\) for all \(x\) in a neighborhood of \(C\) is guaranteed if the Mangasarian-Fromovitz constraint qualification holds for all \(x\in C\). We note that the Mangasarian-Fromovitz constraint qualification is generic in the following sense: Provided that \(g\) is semi-algebraic or definable (these cases include all the usual functions used in optimization) there exists \(\epsilon_{0}\in\mathbb{R}^{m},\epsilon_{0}>0\), such that the set \(C_{\epsilon}:=\{x\in\mathbb{R}^{n}\mid g(x)\geq-\epsilon\}\) satisfies the Mangasarian-Fromovitz constraint qualification for all \(x\in C_{\epsilon}\) and for all \(\epsilon\in(0,\epsilon_{0})\); see Bolte et al. (2018).
In discrete time, it suffices to replace \(\dot{x}(t)^{+}\) by \((x_{k+1}-x_{k})/T\) and \(x(t)\) by \(x_{k}\) in order to obtain the corresponding constrained gradient-descent dynamics. The condition (6) can be interpreted as a modified projected gradient scheme, where projections over the entire feasible set \(C\) are replaced with optimizations over the sparse and convex approximation \(V_{\alpha}(x_{k})\). The remark about the nonemptiness of \(V_{\alpha}(x_{k})\) applies in the same way.
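For illustration, one discrete step of the resulting constrained gradient scheme can be sketched as below (reusing the `velocity_constraint` helper above); solving the small quadratic program with SciPy's SLSQP is our choice here, and any QP solver would do:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_gradient_step(x, grad_f, g, jac_g, alpha, T):
    """Discrete version of (6): v = argmin 0.5*|v + grad f(x)|^2 over
    V_alpha(x), followed by x_next = x + T * v."""
    A, b = velocity_constraint(x, g, jac_g, alpha)
    grad = grad_f(x)
    cons = []
    if A.shape[0] > 0:
        cons = [{"type": "ineq", "fun": lambda v: A @ v - b,
                 "jac": lambda v: A}]
    res = minimize(lambda v: 0.5 * np.sum((v + grad) ** 2), -grad,
                   jac=lambda v: v + grad, constraints=cons, method="SLSQP")
    v = res.x
    return x + T * v, v
```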
The results from Muehlebach & Jordan (2022) establish convergence of (5) (and/or (6)) both in continuous and discrete time. In continuous time, it was shown that even when \(f\) and \(C\) are nonconvex, the trajectories of (5) (and/or (6)) converge to the set of stationary points. Moreover, if \(f\) is strongly convex with strong convexity constant \(\mu\) and \(\alpha\) is set to \(2\mu\), the trajectories converge from any initial condition to the minimizer of (1) at an exponential rate:
\[|f(x(t))-f(x^{*})|\leq(|f(x(0))-f^{*}|+c_{1})e^{-2t/\kappa},\]
where \(\kappa\) is the condition number and \(c_{1}\geq 0\) is an explicit constant that captures whether the initial condition is feasible or not. Similarly, if \(f\) is strongly convex and \(C\) is convex, the trajectories in discrete time (where \(\dot{x}(t)^{+}\), \(x(t)\), and \(R(t)\) are replaced with \((x_{k+1}-x_{k})/T\), \(x_{k}\), and \(R_{k}\), respectively) are guaranteed to converge to the minimizer of (1) for \(T\leq 2/(L_{\mathsf{I}}+\mu)\), \(\alpha<\mu\), where \(L_{\mathsf{I}}\) refers to the smoothness constant of the corresponding Lagrangian. Convergence is approximately linear in this case; see Muehlebach & Jordan (2022) for the formal statement of the results.
This implies that (5) (and/or (6)) implement gradient-flow dynamics that can handle constraints and converge linearly with the typical \(1/\kappa\)-rate if the objective function \(f\) is smooth and strongly convex. The set \(V_{\alpha}(x)\) can be seen as a velocity constraint and provides a natural generalization of the tangent cone. It also reduces the computational cost for each iteration, since projections on the entire feasible set are avoided. In the next section, we generalize these ideas to algorithms that have momentum. This will naturally lead to accelerated algorithms that converge linearly at a rate of roughly \(1/\sqrt{\kappa}\) (if \(f\) is smooth and strongly convex) or at the sublinear rate \(1/t^{2}\) (if \(f\) smooth and convex), which is a significant speedup. We will also derive discrete-time convergence results even if \(f\) and \(C\) are nonconvex.
## 3 Accelerated Gradient Flow
We build upon the results summarized in Sec. 2 to derive momentum-based algorithms, beginning our presentation with a derivation in continuous time. The corresponding discrete-time algorithms will be stated subsequently. A natural starting point is the work of Polyak (1964), Su et al. (2016), and Muehlebach & Jordan (2019), for example, who argued that in the _unconstrained_ case, accelerated optimization algorithms can be viewed as dynamical systems described by second-order differential equations. A canonical example is the following:
\[\dot{u}(t)+2\delta u(t)+\nabla f(x(t)+\beta u(t))=0, \tag{7}\]
where we use the variable \(u(t)=\dot{x}(t)\) to denote the velocity (or momentum), and where \(\delta\geq 0\) and \(\beta\geq 0\) are damping parameters.1
Footnote 1: The variables \(\delta\), \(\beta\) may also depend on time. For ease of presentation we focus on the case where \(\delta\) and \(\beta\) are fixed, but also state corresponding results for time-varying parameters.
In the presence of constraints, \(u(t)\) is allowed to be discontinuous, which is in sharp contrast to (7). For example, if the trajectory \(x(t)\) approaches the boundary of the feasible set, an instantaneous jump in \(u(t)\) might be required to ensure that \(x(t)\) remains feasible. Thus, compared to (5) (or equivalently (6)), where the state \(x(t)\) is absolutely continuous, we are now in a position where we allow for the state \((x(t),u(t))\) (which includes the velocity \(u\)) to be discontinuous. This means that in addition to a differential equation of the type (5), which characterizes the smooth motion, we also prescribe how the discontinuities in \(u\) can arise. If we regard \((x(t),u(t))\) as the position and velocity of a mechanical system, discontinuities in \(u\) have a mechanical meaning as impacts, which are described by a corresponding impact law. The mathematical formalism, which enables discontinuities in \(u\), is summarized next.
We still regard the state \(z:=(x,u)\) to be the result of an integration process:
\[z(t)=z(t_{0})+\int_{t_{0}}^{t}\mathrm{d}z,\quad\forall t\geq t_{0}.\]
However, instead of the usual Lebesgue density \(\mathrm{d}z=\dot{z}(t)\mathrm{d}t\), \(\mathrm{d}z\) now represents a differential measure (Leine & van de Wouw, 2008a), and admits both a density with respect to the Lebesgue measure (denoted by \(\mathrm{d}t\)), as well as a density with respect to an atomic measure (denoted by \(\mathrm{d}\eta\)). As is common in non-smooth mechanics, we assume that \(z(t)\) is of locally bounded variation and does not contain any singular terms. This means that \(z(t)\) can be decomposed in an absolutely continuous function and piecewise constant step function (Leine & van de Wouw, 2008a). At every time \(t\), \(z(t)\) has well-defined left and right limits, \(z(t)^{-}\) and \(z(t)^{+}\), even though \(z(t)\) might not exist or might not be of interest. We can express the differential measure \(\mathrm{d}z\) as \(\mathrm{d}z=\dot{z}(t)\mathrm{d}t+(z(t)^{+}-z(t)^{-})\mathrm{d}\eta\), and the integration over an interval \([t_{0},t]\), which contains the time instants \(t_{\mathrm{di}}\), \(i=1,2,\ldots\), where \(z(t)\) is discontinuous, yields
\[z(t)^{+}=z(t_{0})^{-}+\int_{t_{0}}^{t}\dot{z}(t)\mathrm{d}t+\sum_{i\geq 1}z (t_{\mathrm{di}})^{+}-z(t_{\mathrm{di}})^{-}.\]
As a consequence of allowing the state to be discontinuous, we need to delineate both the density \(\dot{z}(t)\) with respect to the Lebesgue measure \(\mathrm{d}t\) (which describes the smooth part of the motion) as well as the density \(z(t)^{+}-z(t)^{-}\) (which describes the non-smooth motion) for fully determining the state trajectory \(z(t)\). By analogy to non-smooth mechanics (see, e.g., Studer, 2009), this can be achieved with the following measure-differential inclusion:
\[\mathrm{d}u+2\delta u\mathrm{d}t+\nabla f(x+\beta u)\mathrm{d}t=\sum_{i\in I_{x}}\nabla g_{i}(x)\mathrm{d}\lambda_{i},\\ \gamma_{i}^{+}+\epsilon\gamma_{i}^{-}\in N_{\mathbb{R}_{\leq 0}}(-\mathrm{d}\lambda_{i}),\quad i\in I_{x}, \tag{8}\]
where \(\epsilon\in[0,1)\) is a constant, \(\gamma_{i}\) is the velocity associated with the \(i\)th constraint and is defined as
\[\gamma_{i}(x,u):=\nabla g_{i}(x)^{\mathsf{T}}u+\alpha g_{i}(x),\]
and where we have omitted the dependence on \(t\).2 We note that the set \(I_{x}\) (or \(I_{x(t)}\) in full notation) is time dependent. The normal cone inclusion in (8) is illustrated with Fig. 1 and will be further discussed below. The constant \(\epsilon\) has the interpretation of a restitution coefficient, whereby \(\epsilon=0\) leads to inelastic collisions, and \(\epsilon=1\) yields elastic collisions. Measure-differential inclusions are common in non-smooth mechanics and the community has established various existence results for inclusions of the type (8); see, for example, Piazza et al. (2021); Leine & van de Wouw (2008b) and references therein.
Footnote 2: We will frequently do so in the following.
We note that if \(x\) is in the interior of the feasible set, \(I_{x}\) is empty, and therefore (8) reduces to (7). This means that (8) generalizes (7) from the unconstrained case to the constrained case by including the constraint \(\gamma_{i}^{+}+\epsilon\gamma_{i}^{-}\in N_{\mathbb{R}_{\leq 0}}(-\mathrm{d}\lambda_{i})\), which, as we will discuss below, describes the discontinuities of \(u\) (via Newton's impact law) and imposes the velocity constraint \(u(t)\in V_{\alpha}(x(t))\), whenever \(u(t)\) exists.
It is important to note that (8) is understood in the sense of integration: For any compact time interval \([t_{0},t_{1}]\), (8) defines the difference \(u(t_{1})^{+}-u(t_{0})^{-}\), which is obtained by integrating \(\mathrm{d}u\) from \(t_{0}\) to \(t_{1}\); similarly, the difference \(x(t_{1})^{+}-x(t_{0})^{-}\) is obtained by integrating \(u(t)\mathrm{d}t\). This means that (8) has a very natural discretization, which will
be discussed in the next paragraph. The last two paragraphs describe the meaning of (8). Formal convergence results in continuous and discrete time will be derived in Sec. 4.
Discretization of (8): The measure-differential inclusion (8) lends itself to the following discretization: \(\mathrm{d}u=u_{k+1}-u_{k}\), \(\mathrm{d}t=T_{k}\), \(\mathrm{d}\lambda_{i}=\Lambda_{ki}\), \(\gamma_{i}^{+}=\gamma_{i}(x_{k},u_{k+1})\), \(\gamma_{i}^{-}=\min\{0,\gamma_{i}(x_{k},u_{k})\}\), where \(T_{k}>0\) is the step size.1 This yields
Footnote 1: The min in \(\gamma_{i}^{-}\) ensures \(u_{k+1}\in V_{\alpha}(x_{k})\), which is not automatically satisfied in discrete time.
\[u_{k+1}\!-u_{k}\!+\!2\delta u_{k}T_{k}\!+\!\nabla f(x_{k}\!+\! \beta u_{k})T_{k}\!=\!\!\sum_{i\in I_{x_{k}}}\!\!\!\nabla g_{i}(x_{k})\Lambda_{ ki},\] \[\gamma_{i}(x_{k},u_{k+1})+\epsilon\min\{0,\gamma_{i}(x_{k},u_{k} )\}\in N_{\mathbb{R}_{\leq 0}}(-\Lambda_{ki}),\]
\(i\in I_{x_{k}}\). We use the newly computed momentum for updating the position \(x_{k}\): \(x_{k+1}=x_{k}+T_{k}u_{k+1}\), which is motivated by analogy to unconstrained optimization. (This discretization scheme is found to be superior compared to the standard Euler method; see Muehlebach & Jordan (2021).) The resulting update for \(u_{k+1}\) can be interpreted as a stationarity condition for \(u_{k+1}\), and as a result, the proposed algorithm can be summarized as follows:
\[\tilde{u}_{k+1} = \operatorname*{argmin}_{v\in\mathbb{R}^{n}}\frac{1}{2}|v\!-\!u_{ k}\!+\!2\delta u_{k}T_{k}\!+\!\nabla f(x_{k}+\beta u_{k})T_{k}|^{2},\] s.t. \[\gamma_{i}(x_{k},v)\geq-\epsilon\min\{\gamma_{i}(x_{k},u_{k}),0 \},\;\;i\in I_{x_{k}}\] \[x_{k+1} = x_{k}+T_{k}u_{k+1}. \tag{9}\]
Remark 2.1 applies here in the same way: If \(C\) is convex and \(\epsilon=0\), the feasible set in (9) is guaranteed to be nonempty, which means that \(u_{k+1}\) is well defined (existence and uniqueness). If \(C\) is nonconvex or \(\epsilon>0\), nonemptiness of the feasible set is guaranteed if constraint qualifications are satisfied (for example Mangasarian-Fromovitz). These constraint qualifications are generic, as discussed in Remark 2.1, which ensures that \(u_{k+1}\) is well defined as long as \(x_{k}\) stays in a neighborhood of the feasible set. As will be shown with our convergence analysis (see Sec. 4), we can indeed ensure that \(x_{k}\) remains in a neighborhood of \(C\), the size of which we can control by choosing \(T_{k}\) appropriately. The pseudo-code of the full algorithm is listed in App. B.
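To fix ideas, here is a rough Python sketch of a single iteration of (9) (it is not the pseudo-code of App. B); the use of SciPy's SLSQP for the small quadratic program and the helper signature are our own choices:

```python
import numpy as np
from scipy.optimize import minimize

def accelerated_step(x, u, grad_f, g, jac_g, alpha, delta, beta, eps, T):
    """One iteration of (9): constrained momentum update, then position update."""
    gx, Jx = g(x), jac_g(x)
    active = gx <= 0.0                                   # active set I_{x_k}
    A, g_act = Jx[active], gx[active]
    # unconstrained target u_k - 2*delta*u_k*T - grad f(x_k + beta*u_k)*T
    target = u - 2.0 * delta * u * T - grad_f(x + beta * u) * T
    cons = []
    if A.shape[0] > 0:
        # gamma_i(x_k, v) >= -eps * min{gamma_i(x_k, u_k), 0}
        b = -alpha * g_act - eps * np.minimum(A @ u + alpha * g_act, 0.0)
        cons = [{"type": "ineq", "fun": lambda v: A @ v - b,
                 "jac": lambda v: A}]
    res = minimize(lambda v: 0.5 * np.sum((v - target) ** 2), target,
                   jac=lambda v: v - target, constraints=cons, method="SLSQP")
    u_next = res.x
    return x + T * u_next, u_next
```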
The following remarks are important:
1. The update (9) has the interpretation of choosing \(u_{k+1}\) to be as close as possible to the update in the unconstrained case subject to the velocity constraint \(\gamma_{i}(x_{k},u_{k+1})\geq-\epsilon\min\{\gamma_{i}(x_{k},u_{k}),0\}\). As a result, in case \(I_{x_{k}}\) is empty, (9) reduces to a standard momentum-based method; if \(\beta=0\) we obtain the heavy-ball algorithm, if \(\beta\neq 0\) we obtain Nesterov's method.
2. The update (9) includes only the constraints \(I_{x_{k}}\) which are active at iteration \(k\). The constraint on \(v\) in (9) is guaranteed to be convex, even if the underlying feasible set is nonconvex. The constraints in (9) yield therefore a sparse, local and convex approximation of the feasible set. Instead of performing optimizations on the position level as is common with projected gradients or the Frank-Wolfe method, (9) suggests to constrain the velocities \(u_{k}\), \(k=1,2,\dots\).
We now proceed to give an interpretation and explanation of the continuous-time dynamics (8).
Smooth motion: If \(u(t)\) happens to be absolutely continuous in the interval \((t_{0},t_{1})\), its differential measure reduces to \(\dot{u}(t)\mathrm{d}t\). Similarly, the multipliers \(\mathrm{d}\lambda_{i}\) have only a density with respect to the Lebesgue measure \(\mathrm{d}t\), which we denote by \(\lambda_{i}(t)\), i.e., \(\mathrm{d}\lambda_{i}=\lambda_{i}(t)\mathrm{d}t\). As a result, (8) reduces to
\[\dot{u}+2\delta u+\nabla f(x+\beta u)=\sum_{i\in I_{x}}\nabla g_{i}(x)\lambda_ {i}, \tag{10}\]
for all \(t\in(t_{0},t_{1})\) (a.e.). Furthermore, absolute continuity of \(u(t)\) implies absolute continuity of \(\gamma_{i}\), i.e., \(\gamma_{i}^{+}=\gamma_{i}^{-}=\gamma_{i}\). In the limit \(\mathrm{d}t\downarrow 0\), the inclusion in (8) therefore reduces to
\[(1+\epsilon)\gamma_{i}\in N_{\mathbb{R}_{\leq 0}}(-\lambda_{i})\quad \Leftrightarrow\quad\gamma_{i}\in N_{\mathbb{R}_{\leq 0}}(-\lambda_{i}),\]
for all \(i\in I_{x(t)}\) and for all \(t\in(t_{0},t_{1})\) (a.e.). (\(N_{\mathbb{R}_{\leq 0}}\) is a cone; we can therefore divide by \(1+\epsilon>0\).) The normal cone inclusion prescribing the relationship between \(\gamma_{i}\) and \(\lambda_{i}\) is similar to Fig. 1.
From a physics perspective the normal cone inclusion \(\gamma_{i}\in N_{\mathbb{R}\leq 0}(-\lambda_{i})\) represents a force law, which by conic duality can also be expressed as (see again Fig. 1)
\[-\lambda_{i}\in N_{\mathbb{R}\geq 0}(\gamma_{i})=\partial\psi_{\mathbb{R}_{ \geq 0}}(\gamma_{i}).\]
The sum \(\nabla g_{i}(x(t))\lambda_{i}\) over \(i\in I_{x}\) on the right-hand side of (10) therefore has a physical interpretation as a constraint force:
\[-R\!=\!-\sum_{i\in I_{x}}\!\nabla g_{i}(x)\lambda_{i},\quad-R\in\partial\psi _{V_{\alpha}(x)}(u)=N_{V_{\alpha}(x)}(u),\]
which imposes the velocity constraint \(u(t)\in V_{\alpha}(x(t))\) for all \(t\in(t_{0},t_{1})\) (a.e.). By virtue of Gronwall's inequality this ensures (4).
We therefore conclude that in case of smooth motion, the measure-differential inclusion (8) generalizes the differential equation (7) from the unconstrained case to the constrained case, where the additional constraint force \(R(t)\) imposes the velocity constraint \(u(t)\in V_{\alpha}(x(t))\) (for almost all \(t\)). The introduction of the force \(R(t)\) is analogous to (5).
Since the motion is smooth for almost every \(t\), the normal cone inclusion in (8) guarantees the satisfaction of the velocity constraint \(u(t)\in V_{\alpha}(x(t))\) (or equivalently, \(\gamma_{i}(x(t),u(t))\geq 0\) for all \(i\in I_{x(t)}\)) for all \(t\geq 0\) (a.e.). However, when a new constraint arises at time \(t_{0}\), there might be a situation where \(\gamma_{i}^{-}(x(t_{0}),u(t_{0}))<0\). In such a case an impact will be required to ensure that \(\gamma_{i}^{+}(x(t_{0}),u(t_{0}))\geq 0\). This is the subject of the next paragraph.
Non-smooth motion: In order to derive the non-smooth motion we integrate (8) over a time instant \(\{t\}\), where \(u(t)\) is discontinuous; that is, \(u(t)^{-}\neq u(t)^{+}\). Due to the fact that the singleton \(\{t\}\) has zero Lebesgue measure, we are left with the atomic parts, leading to \(\mathrm{d}u=(u(t)^{+}-u(t)^{-})\mathrm{d}\eta\), \(\mathrm{d}\lambda_{i}:=\Lambda_{i}\mathrm{d}\eta\),
\[u(t)^{+}-u(t)^{-}=\sum_{i\in I_{x(t)}}\nabla g_{i}(x(t))\Lambda_ {i},\\ \gamma_{i}^{+}+\epsilon\gamma_{i}^{-}\in N_{\mathbb{R}\leq 0}(- \Lambda_{i}),\quad i\in I_{x(t)}. \tag{11}\]
The normal cone inclusion should be interpreted as a generalization of Newton's impact law. For \(\Lambda_{i}>0\), it implies \(\gamma_{i}^{+}+\epsilon\gamma_{i}^{-}=0\), meaning that the velocity associated to constraint \(i\) after impact, \(\gamma_{i}^{+}\), is \(-\epsilon\gamma_{i}^{-}\), where \(\gamma_{i}^{-}\) is the velocity associated to constraint \(i\) before impact. From the discussion of the smooth motion it follows \(\gamma_{i}(x(t_{0}),u(t_{0}))^{-}\) at time \(t_{0}\) can only be negative if the constraint \(i\) becomes active at time \(t_{0}\); that is, \(i\not\in I_{x(t)}\) for \(t<t_{0}\) and \(i\in I_{x(t)}\) for \(t=t_{0}\). This necessitates a discontinuity in \(u\) at time \(t_{0}\), which according to the above normal cone inclusion comes in two variants: i) \(\Lambda_{i}>0\), which implies \(\gamma_{i}^{+}=-\epsilon\gamma_{i}^{-}\) and ii) \(\Lambda_{i}=0\), which implies \(\gamma_{i}^{+}\geq-\epsilon\gamma_{i}^{-}\). In variant i), the impulsive force \(\Lambda_{i}\) contributes the component \(\Lambda_{i}\nabla g_{i}(x(t))\) (normal to constraint \(i\)) to the velocity jump \(u(t_{0})^{+}-u(t_{0})^{-}\), whereas in variant ii), there is no such contribution. Both variants ensure \(\gamma_{i}(x(t_{0}),u(t_{0}))^{+}\geq-\epsilon\gamma_{i}(x(t_{0}),u(t_{0}))^{-} \geq 0\).
The characterization of the non-smooth motion according to (11) can be interpreted as a stationarity condition for \(u(t)^{+}\), which yields
\[u(t)^{+}=\operatorname*{argmin}_{v\in\mathbb{R}^{n}}\frac{1}{2} |v-u(t)^{-}|^{2}\quad\text{s.t.}\\ \gamma_{i}(x(t),v)\geq-\epsilon\gamma_{i}(x(t),u(t)^{-}),\ \ \forall i\in I_{x(t)}. \tag{12}\]
The minimization in (12) has the following interpretation: for each \(u(t)^{-}\) there is a unique \(u(t)^{+}\), which is chosen to be as close as possible to \(u(t)^{-}\) subject to the impact law \(\gamma_{i}^{+}\geq-\epsilon\gamma_{i}^{-}\) for all \(i\in I_{x(t)}\).
Equilibria of (8): The equilibria of (8) are obtained from \(x(t)\equiv x_{0},\mathrm{d}\lambda_{i}\equiv\lambda_{0i}\mathrm{d}t,u(t)\equiv 0,\mathrm{d}u\equiv 0\), where \(x_{0}\in\mathbb{R}^{n}\) and the multipliers \(\lambda_{0i}\geq 0\), \(i\in I_{x_{0}}\) are constant. As a result, (8) reduces to
\[-\nabla f(x_{0})+\sum_{i\in I_{x_{0}}}\nabla g_{i}(x_{0})\lambda_ {i0}=0,\\ (1+\epsilon)\alpha g_{i}(x_{0})\in N_{\mathbb{R}_{\leq 0}}(- \lambda_{i0}),\ \ i\in I_{x_{0}}.\]
The normal cone inclusion can be simplified by dividing by \(\alpha(1+\epsilon)>0\) (the normal cone is a cone), which implies that \(g_{i}(x_{0})\) and \(\lambda_{i0}\) satisfy the complementarity conditions
\[g_{i}(x_{0})\geq 0,\quad\lambda_{i0}\geq 0,\quad\lambda_{i0}g_{i}(x_{0})=0, \quad\forall i\in I_{x_{0}}.\]
Hence, the equilibria of (8) satisfy the Karush-Kuhn-Tucker conditions of (1), which means that the stationary points of (1) are indeed equilibria.
## 4 Convergence Analysis
The following section discusses the convergence of trajectories of (8) and (9), and characterizes the rate of convergence. Without loss of generality we assume that \(f\) is normalized such that the Lipschitz constant of the gradient is unity.
**Proposition 4.1**.: _Let \((x(t)\), \(u(t))\) be a trajectory satisfying (8) (according to Sec. 3) with \(x(0)\in C\). Let \(f\) be \(1\)-smooth, let \(g\) satisfy the Mangasarian-Fromovitz constraint qualification, and let either \(f\) be convex or \(2\delta-\beta>0\). Then, \(x(t)\) converges to the set of stationary points, while \(u(t)\) converges to zero. Moreover, each isolated local minimum corresponds to an asymptotically stable equilibrium in the sense of Lyapunov._
The following proposition demonstrates that the use of momentum combined with well-chosen damping parameters indeed results in accelerated convergence rates (\(\mathcal{O}(1/t^{2})\) in the smooth and convex case, and \(e^{-\sqrt{\mu}t}\) in the smooth and strongly convex case).1
Footnote 1: Continuous-time rates are indeed meaningful in this context,
\begin{table}
\begin{tabular}{l|c|c|c|c} variant & \(\alpha\) & \(\delta\) & \(\beta\) & rate \(\rho\) \\ \hline h. b. & \(\sqrt{\mu}\) & \(\sqrt{\mu}\) & \(0\) & \(e^{-\sqrt{\mu}t}\) \\ N. c. p. & \(\sqrt{\mu}-\mu/2\) & \(\frac{\sqrt{\mu}}{1+\sqrt{\mu}}\) & \(\frac{1-\sqrt{\mu}}{1+\sqrt{\mu}}\) & \(e^{-(\sqrt{\mu}-\mu/2)t}\) \\ N. v. p. & \(\frac{2}{t+3}\) & \(\frac{3}{2(t+3)}\) & \(\frac{t}{t+3}\) & \(\frac{9}{(t+3)^{2}}\) \\ \end{tabular}
\end{table}
Table 1: The table summarizes convergence rates that arise from different choices of \(\alpha\), \(\beta\), and \(\delta\). The abbreviation h. b. stands for heavy ball, N. c. p. for Nesterov constant parameters, N. v. p. for Nesterov varying parameters.
**Proposition 4.2**.: _Let \(C\) be convex and \(f\) be \(1\)-smooth and either convex or strongly convex with strong convexity constant \(\mu>0\). Let the parameters \(\alpha\), \(\beta\), \(\delta\), and \(\rho\) be chosen according to Table 1 and assume that Slater's condition holds. Then, for any \(x(0)\in\mathbb{R}^{n}\), \(u(0)=0\), the following holds:_
\[\min \{0,g(x(0))\}^{\mathsf{T}}\lambda^{*}\rho(t)\leq f(x(t))-f(x^{*})\] \[\leq\left(\frac{\alpha^{2}}{2}|x(0)-x^{*}|^{2}+f(x(0))-f(x^{*}) \right)\rho(t),\]
_where \(x^{*}\) is the minimizer of (1) and \(\lambda^{*}\) is any multiplier that satisfies the Karush-Kuhn-Tucker conditions._
We also demonstrate convergence of the discrete algorithm (9) in a nonconvex and possibly stochastic setting. For simplicity we state and prove the deterministic result when \(\epsilon=0\) (as becomes apparent from the proof, the stochastic case with bounded zero-mean gradient perturbations follows from the same arguments).
**Proposition 4.3**.: _Let \(T_{k}=T_{0}/k^{s}\), \(k=1,2,\dots\), for some \(T_{0}>0\) and \(s\in(1/2,1)\), and let the function \(\min\{0,g_{1}(x)\}\) have compact level sets. Let \(f\) be 1-smooth and either convex or such that \(2\delta-\beta>0\), let \(x_{k},u_{k}\) be the iterates defined in (9) with \(\epsilon=0\) and arbitrary \((x_{0},u_{0})\in\mathbb{R}^{2n}\), and let \(g\) satisfy the Mangasarian-Fromovitz constraint qualification. If \(u_{k}\) is bounded and \(f\) has isolated stationary points, then \(x_{k}\) converges to a stationary point of (1), while \(u_{k}\) converges to zero._
We note that the restriction \(1/2<s<1\) can be loosened to \(1/2<s\leq 1\) if additional assumptions on the damping parameters \(\delta\) and \(\beta\) are satisfied (this requires a slightly more detailed proof). Similarly, the assumption that \(f\) has isolated stationary points is made for simplifying the presentation. The assumption that \(u_{k}\) is bounded can be enforced by a simple reset strategy: if the newly computed \(u_{k+1}\) exceeds a predefined threshold, we simply set \(u_{k+1}=0\), \(x_{k+1}=x_{k}\), and continue running the algorithm. This reset strategy reduces the total energy \(|u_{k}|^{2}/2+f(x_{k})\) from step \(k\) to step \(k+1\) by a fixed amount, which implies that the function \(V_{k}(x_{k},u_{k})\) (which lies at the heart of the convergence analysis) is also reduced from \(k\) to \(k+1\) for large enough \(k\); see App. F. Hence, the arguments used for showing convergence still apply.
We note that the behavior of algorithm (9) is complex, as it relies on a _local_ approximation of the feasible set, whereby multiple constraints can become active or inactive over the course of the optimization. Establishing Prop. 4.3 is therefore nontrivial (see the proof in the appendix) and requires blending different ideas from numerical analysis, optimization, and dynamical systems.
## 5 Numerical Examples
The following section is divided into two parts. The first part illustrates the dynamics of (8) and the discretization via (9) on a one-dimensional example and is intended to provide insights concerning the nonsmooth dynamics, as well as the discretization. The second part applies (9) to (nonconvex) compressed sensing and large-scale sparse regression problems. As we will see, our algorithm recovers state-of-the-art performance for convex relaxations, while also handling nonconvex sparsity constraints in a seamless manner (traditional projection-based methods cannot be easily extended to this setting). Further details and an additional numerical example are included in App. C.
### Illustrative Example
In order to plot trajectories in the phase space we choose \(f(x)=(x+2)^{2}/2\) and \(g(x)=(x,-x+2)\), where \(x\) is scalar. Each constraint \(g_{i}(x)\geq 0\) and its corresponding velocity constraint \(\gamma_{i}(x,u)\geq 0\) induces a region,
\[\mathcal{R}_{i}:=\{(x,u)\in\mathbb{R}^{2}\mid g_{i}(x)\leq 0,\gamma_{i}(x,u)\leq 0\},\]
in the phase space, \(i=1,2\), where trajectories are either non-smooth or slide along the boundary of \(\mathcal{R}_{i}\). Outside of \(\mathcal{R}_{i}\), the trajectories follow the smooth motion (7). Fig. 2 (left) shows the trajectories along with \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\). For a given \((x(t_{0}),u(t_{0})^{-})\) an impact happens if \(g_{i}(x(t_{0}))\leq 0\) and \(\gamma_{i}(x(t_{0}),u(t_{0}))^{-}<0\), which ensures that \(\gamma_{i}(x(t_{0}),u(t_{0}))^{+}\geq-\epsilon\gamma_{i}(x(t_{0}),u(t_{0}))^{ -}\). In our example only the case \(\gamma_{i}^{+}=-\epsilon\gamma_{i}^{-}\) occurs, as there are no impacts where more than one constraint participates (\(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) are disjoint). The coefficient of restitution \(\epsilon\) therefore determines the velocity after impact. For \(\epsilon=0\) trajectories end up at the boundary of the set \(\mathcal{R}_{i}\), whereas for \(\epsilon>0\) they will leave \(\mathcal{R}_{i}\) (in case of impact). If \(g_{i}(x(t_{0}))\leq 0\), \(\gamma_{i}(x(t_{0}),u(t_{0}))^{-}=0\), no impact happens, (\(u(t_{0})=u(t_{0})^{-}=u(t_{0})^{+}\)), and trajectories either leave \(\mathcal{R}_{i}\) or slide along its boundary. This depends on the contribution of the unconstrained dynamics, that is, on the vector \(v_{\text{uc}}(t_{0}):=(u(t_{0}),-2\delta u(t_{0})-\nabla f(x(t_{0})+\beta u(t_{0 }))).\) If \(v_{\text{uc}}(t_{0})\) points outwards, trajectories will leave \(\mathcal{R}_{i}\) and follow the unconstrained motion (\(\mathrm{d}\lambda_{i}=0\)). If \(v_{\text{uc}}(t_{0})\) points inwards, there will be a contribution from \(\mathrm{d}\lambda_{i}=\lambda_{i}(t_{0})\mathrm{d}t\), which ensures that trajectories slide along the boundary of \(\mathcal{R}_{i}\).
Fig. 2 (second panel) shows the trajectories resulting from a discretization of (9) with \(T_{k}=T=0.1\). We can clearly see the consequences of including constraints on the velocity
level: Trajectories may become infeasible, since constraints enter (9) only once they are violated. Nevertheless, even for large time steps \(T_{k}=T\) (up to \(T\approx 1.8\)), trajectories converge to the unique minimizer of our problem.
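As a usage sketch, this 1-D example can be run with the `accelerated_step` helper sketched in Section 3 (the parameter values are those of Fig. 2; the number of iterations and the starting point are our own choices):

```python
import numpy as np

grad_f = lambda x: x + 2.0                      # f(x) = (x+2)^2 / 2
g = lambda x: np.array([x[0], 2.0 - x[0]])      # g(x) = (x, -x+2) >= 0
jac_g = lambda x: np.array([[1.0], [-1.0]])

x, u = np.array([1.5]), np.array([0.0])
for _ in range(500):
    x, u = accelerated_step(x, u, grad_f, g, jac_g,
                            alpha=0.5, delta=0.1, beta=0.0, eps=0.0, T=0.1)
# x should settle near the constrained minimizer x* = 0
```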
### Compressed Sensing and Sparse Regression
We consider the following \(\ell^{p}\)-regularized inverse problem:
\[\min_{x\in\mathbb{R}^{n}}\frac{1}{2}|Ax-b|_{2}^{2}+\nu|x|_{p}^{p}, \tag{13}\]
where \(|x|_{p}\) refers to the \(\ell^{p}\) "norm" (we explicitly allow for \(0<p\leq 1\)). This has numerous applications in machine learning, statistics, and signal processing (see, e.g., Hastie et al., 2009). The traditional convex approach for solving such an inverse problem is to set \(p=1\) and to leverage the fact that projections onto the \(\ell^{1}\) ball have closed-form solutions. This yields, for example, the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA) (see Beck & Teboulle, 2009, Algorithm (3.1), Algorithm (4.1)-(4.3)), which are based on gradient descent and accelerated gradient descent, respectively. However, when \(p<1\), projections onto the \(\ell^{p}\) "norm" ball no longer have closed-form solutions and it is unclear how to generalize ISTA/FISTA to this setting. In the following, we will highlight that this case can be handled efficiently with our approach.
We treat the regularizer as shown in (2) and apply (9). The updates can be solved in closed form; see App. C, where we also include the pseudo-code of the resulting algorithm.
In the first example, each element of \(A\in\mathbb{R}^{100\times 1000}\) is sampled from a standard normal distribution. The vector \(b\) is set to \(Ax^{*}+n/2\), where the components of \(n\in\mathbb{R}^{100}\) are sampled from a standard normal and \(x^{*}\) is a vector that contains zeros everywhere except for 13 randomly chosen entries that are set to one. This gives rise to a challenging and ill-conditioned optimization problem that includes 1000 decision variables. Fig. 2 (third panel) compares the results computed by our Algorithm for \(p=1\) and \(p=0.7\), whereas the fourth panel (solid lines) compares our approach to ISTA and FISTA for \(p=1\). We note: i) the quality of the reconstruction for \(p=1\) is significantly worse compared to \(p=0.7\) (the parameter \(\nu\) was tuned with five-fold cross validation in both cases) and ii) our algorithm decreases the objective function at a similar rate as FISTA for \(p=1\). All algorithms require about the same execution time per iteration.
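For reference, the synthetic data described above can be generated along the following lines (a sketch; the random seed is arbitrary and not specified in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 1000))                      # measurement matrix
x_true = np.zeros(1000)
x_true[rng.choice(1000, size=13, replace=False)] = 1.0    # 13 entries set to one
b = A @ x_true + rng.standard_normal(100) / 2.0           # b = A x* + n/2
```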
The second example consists of an image reconstruction problem taken from Beck & Teboulle (2009), where \(A=RW\in\mathbb{R}^{n\times n},n=65536\), with \(R\) representing a Gaussian blur operator, \(W\) the inverse of a three stage Haar wavelet transform, and \(\nu=2\cdot 10^{-5}\). The problem is of considerable size and includes 65536 decision variables. Similar to the previous example, our approach is on par with the performance of FISTA for \(p=1\) (see Fig. 2, fourth panel), but is also able to solve problems with \(p<1\). Fig. 4 in App. C compares the resulting reconstruction of FISTA (\(p=1\)) compared to our reconstruction \(p=0.6\), whereby the latter has much fewer artifacts. Summarizing, our approach not only achieves similar quality as FISTA for \(p=1\) (clearly outperforming ISTA) but is also able to handle nonconvex relaxations (\(p<1\)). App. C contains further details about the implementation and also includes an additional numerical example.
## 6 Conclusion
We have introduced a new type of accelerated optimization algorithm for constrained optimization problems. By imposing constraints on velocities, rather than on positions, the algorithm avoids projections or optimizations over the entire feasible set at each iteration. This has not only the potential to reduce execution time compared to Frank-Wolfe or projected gradient schemes, but more importantly, expands the range of potential applications, as constraints are not necessarily required to be convex or to have a simple structure. We have highlighted important analogies to non-smooth dynamical systems, and characterized the algorithm's behavior in continuous and discrete time.
Figure 2: The first panel shows trajectories resulting from (8) (with parameters \(\alpha=0.5,\delta=0.1,\beta=0,\epsilon=0\)). The boundaries of \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) are highlighted in red. The second panel shows the results from the discretization (9) with \(T_{k}=T=0.1\). The third panel shows the solution vector of the compressed sensing problem with \(\ell^{1}\) and \(\ell^{0.7}\) regularization. The last panel shows the objective function value (normalized) for the different methods and the two instances of (13), where CS refers to βcompressed sensingβ and SR to βsparse regressionβ.
## Acknowledgements
We thank the German Research Foundation and the Branco Weiss Fellowship, administered by ETH Zurich, for the generous support.
|
2308.15026 | Subordinated Bessel heat kernels | We prove new bounds for Bessel heat kernels and Bessel heat kernels
subordinated by stable subordinators. In particular, we provide a 3G inequality
in the subordinated case. | Krzysztof Bogdan, Konstantin Merz | 2023-08-29T05:18:15Z | http://arxiv.org/abs/2308.15026v1 | # Subordinated Bessel heat kernels
###### Abstract.
We prove new bounds for Bessel heat kernels and Bessel heat kernels subordinated by stable subordinators. In particular, we provide a 3G inequality in the subordinated case.
Key words and phrases: Bessel heat kernel, stable subordinator, 3G inequality. K.B. was supported through the DFG-NCN Beethoven Classic 3 programme, contract no. 2018/31/G/ST1/02252 (National Science Center, Poland) and SCHI-419/11-1 (DFG, Germany). K.M. was supported through the PRIME programme of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF).
We now introduce our setting. For \(\zeta\in(-1/2,\infty)\), we define the Bessel heat kernel
\[p_{\zeta}^{(2)}(t,r,s):=\frac{(rs)^{1/2-\zeta}}{2t}\exp\left(-\frac{r^{2}+s^{2}}{ 4t}\right)I_{\zeta-1/2}\left(\frac{rs}{2t}\right),\quad r,s,t>0. \tag{1.1}\]
Here and below, for \(z\in\mathbb{C}\setminus(-\infty,0]\), \(I_{\nu}(z)\) denotes the modified Bessel function of the first kind of order \(\nu\in\mathbb{C}\) [DLMF, (10.25.2)].
The kernel \(p_{\zeta}^{(2)}(t,r,s)\) with the reference (speed) measure \(r^{2\zeta}dr\) on \(\mathbb{R}_{+}\) is the transition density of the Bessel process of order \(\zeta-1/2\)_reflected at the origin_. We remark that the Bessel process of order \(\zeta-1/2\)_killed at the origin_ has the transition density (1.1) with \(I_{\zeta-1/2}(\cdot)\) replaced with \(I_{|\zeta-1/2|}(\cdot)\), see Borodin and Salminen [2, Appendix 1.21, p. 133-134]. The distinction is superfluous when \(\zeta\geq 1/2\) because on the one hand \(|\zeta-1/2|=\zeta-1/2\) and on the other hand the Bessel process does not hit the origin, so no conditions (reflecting or killing) are to be imposed at the origin. See also Malecki, Serafin, and Zorawik [11]. Recall that \(p_{(d-1)/2}^{(2)}(t,r,s)\) is the transition density of the radial part of the Brownian motion in \(\mathbb{R}^{d}\) with the clock \(2t\). For further information on \(p_{\zeta}^{(2)}\), we refer, e.g., to the textbooks [2, Part I, Section IV.6 or Appendix 1.21] or [1, Chapter XI].
We now define the \(\frac{\alpha}{2}\)-subordinated Bessel heat kernels for \(\alpha\in(0,2)\). Recall that for \(\alpha\in(0,2)\) and \(t>0\), by Bernstein's theorem, the completely monotone function \([0,\infty)\ni\lambda\mapsto\mathrm{e}^{-t\lambda^{\alpha/2}}\) is the Laplace transform of a probability density function \(\mathbb{R}_{+}\ni\tau\mapsto\sigma_{t}^{(\alpha/2)}(\tau)\). Thus,
\[\mathrm{e}^{-t\lambda^{\alpha/2}}=\int_{0}^{\infty}\mathrm{e}^{-\tau\lambda} \,\sigma_{t}^{(\alpha/2)}(\tau)\,d\tau,\quad t>0,\,\lambda\geq 0, \tag{1.2}\]
see, e.g., Schilling, Song, and Vondracek [10, (1.4) and Chapter 5]. In [1, Appendix B] we list some useful properties of and sharp estimates for \(\sigma_{t}^{(\alpha/2)}(\tau)\) and references. We define the \(\frac{\alpha}{2}\)-subordinated Bessel heat kernel with the reference measure \(r^{2\zeta}dr\) on \(\mathbb{R}_{+}\) as
\[p_{\zeta}^{(\alpha)}(t,r,s):=\int_{0}^{\infty}p_{\zeta}^{(2)}(\tau,r,s)\, \sigma_{t}^{(\alpha/2)}(\tau)\,d\tau,\quad r,s,t>0. \tag{1.3}\]
We should note that more general subordination and references are discussed in Grzywny and Trojan [12] and [10], but our main motivation for this study is the fact that \(p_{\zeta}^{(\alpha)}\) arises when considering the \(d\)-dimensional fractional Laplacian \((-\Delta)^{\alpha/2}\) on the space of multiples of solid harmonics, i.e., functions of the form \([u]_{\ell,m}(x):=u(|x|)|x|^{\ell}Y_{\ell,m}(x/|x|)\), where \(Y_{\ell,m}\) is a \(L^{2}(\mathbb{S}^{d-1})\)-normalized spherical harmonic, \(u\in L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\), and \(\zeta=(d-1+2\ell)/2\). Namely,
\[\langle[u]_{\ell,m},\mathrm{e}^{-t(-\Delta)^{\alpha/2}}(t,\cdot,\cdot)[u]_{ \ell,m}\rangle_{L^{2}(\mathbb{R}^{d})}=\langle u,p_{(d-1+2\ell)/2}^{(\alpha)} (t,\cdot,\cdot)u\rangle_{L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)},\quad t>0. \tag{1.4}\]
Furthermore, the following equality holds pointwise,
\[p_{(d-1+2\ell)/2}^{(\alpha)}(t,r,s) \tag{1.5}\] \[\quad=\iint\limits_{\mathbb{S}^{d-1}\times\mathbb{S}^{d-1}}\overline {[1]_{\ell,m}(r\omega_{x})}[1]_{\ell,m}(s\omega_{y})\mathrm{e}^{-t(-\Delta)^{ \alpha/2}}(t,r\omega_{x},s\omega_{y})\,d\omega_{x}\,d\omega_{y},\quad r,s,t>0,\]
where \(\mathrm{e}^{-t(-\Delta)^{\alpha/2}}\) is the heat kernel of \((-\Delta)^{\alpha/2}\) on \(\mathbb{R}^{d}\). See [1] for details.
### Organization and notation
In Section 2, we recall and prove sharp upper and lower bounds for \(p_{\zeta}^{(\alpha)}(t,r,s)\) (Theorem 2.1), discuss \(p_{\zeta}^{(\alpha)}(t,r,s)\) as probability transition density and kernel of a strongly continuous contraction semigroup on \(L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\) (Proposition 2.2), and recall an explicit expression for \(p_{\zeta}^{(1)}(t,r,s)\) in (2.5). In Section 3, we prove a 3G inequality when \(\alpha\in(0,2)\) (Theorem 3.1). In Section 4, we prove further pointwise bounds, called comparability results (Theorem 4.1). The technical part of the proof of Theorem 2.1, when \(\alpha\in(0,2)\), is given in Appendix A.
Below we denote generic constants, i.e., numbers in \((0,\infty)\) by \(c\). The values of constants may change from place to place. We may mark the dependence of \(c\) on some parameter \(\tau\) by the notation \(c_{\tau}\) or \(c(\tau)\). For functions \(f,g\geq 0\), we write \(f\lesssim g\) to indicate that there is a constant \(c\) such that \(f\leq cg\). If \(c\) depends on \(\tau\), we may write \(f\lesssim_{\tau}g\). The notation \(f\sim g\) means that \(f\lesssim g\lesssim f\); we say \(f\)_is comparable to_\(g\). We abbreviate \(a\wedge b:=\min\{a,b\}\) and \(a\lor b:=\max\{a,b\}\). The regularized hypergeometric function [12, (15.2.1)] is denoted by \({}_{2}\tilde{F}_{1}(a,b;c;z):={}_{2}F_{1}(a,b;c;z)/\Gamma(c)\), with \(a,b,c\in\mathbb{C}\) and \(z\in\{w\in\mathbb{C}:\,|w|<1\}\). We introduce further notation as we proceed.
### Acknowledgments
We thank Volker Bach, Kamil Bogus, Jacek Dziubanski, Tomasz Grzywny, Jacek Malecki, Haruya Mizutani, Adam Nowak, Marcin Preisner, and Grzegorz Serafin for discussion and references.
## 2. Fundamental properties, bounds, and explicit expressions
We recall the following properties of \(p_{\zeta}^{(\alpha)}(t,r,s)\) for \(\zeta\in(-1/2,\infty)\), \(\alpha\in(0,2]\) proved in [1, Section 2]. For all \(t,t^{\prime},r,s>0\), we have \(p_{\zeta}^{(\alpha)}(t,r,s)=p_{\zeta}^{(\alpha)}(t,s,r)>0\),
\[\int_{0}^{\infty}p_{\zeta}^{(\alpha)}(t,r,s)s^{2\zeta}\,ds=1, \tag{2.1}\] \[\int_{0}^{\infty}p_{\zeta}^{(\alpha)}(t,r,z)p_{\zeta}^{(\alpha)}( t^{\prime},z,s)z^{2\zeta}\,dz=p_{\zeta}^{(\alpha)}(t+t^{\prime},r,s),\quad \text{and}\] (2.2) \[p_{\zeta}^{(\alpha)}(t,r,s)=t^{-\frac{2\zeta+1}{\alpha}}p_{\zeta }^{(\alpha)}\left(1,\frac{r}{t^{1/\alpha}},\frac{s}{t^{1/\alpha}}\right). \tag{2.3}\]
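These identities can be spot-checked numerically for \(\alpha=2\). A minimal sketch, assuming NumPy/SciPy (the sample values of \(\zeta,t,t^{\prime},r,s\) are ours, and \(I_{\nu}\) is evaluated via the exponentially scaled routine `ive` for stability), is:

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p2(t, r, s, zeta):
    # Bessel heat kernel (1.1); since ive(nu, x) = exp(-x) * I_nu(x), the factor
    # exp(-(r^2+s^2)/(4t)) I_{zeta-1/2}(rs/(2t)) equals exp(-(r-s)^2/(4t)) ive(...)
    return ((r * s) ** (0.5 - zeta) / (2 * t)
            * np.exp(-((r - s) ** 2) / (4 * t))
            * ive(zeta - 0.5, r * s / (2 * t)))

zeta, t, tp, r, s = 1.0, 0.4, 0.9, 1.3, 2.1

# normalization (2.1) with respect to the reference measure y^{2 zeta} dy
mass, _ = quad(lambda y: p2(t, r, y, zeta) * y ** (2 * zeta), 0.0, np.inf)

# Chapman-Kolmogorov identity (2.2)
ck, _ = quad(lambda z: p2(t, r, z, zeta) * p2(tp, z, s, zeta) * z ** (2 * zeta), 0.0, np.inf)

# scaling relation (2.3) with alpha = 2
scaled = t ** (-(2 * zeta + 1) / 2) * p2(1.0, r / np.sqrt(t), s / np.sqrt(t), zeta)

print(mass)                        # ~ 1
print(ck, p2(t + tp, r, s, zeta))  # the two values agree
print(p2(t, r, s, zeta), scaled)   # the two values agree
```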
Using the Levy distribution
\[\sigma_{t}^{(1/2)}(\tau)=\frac{1}{2\sqrt{\pi}}\cdot\frac{t}{\tau^{3/2}} \mathrm{e}^{-t^{2}/(4\tau)},\quad t,\tau>0, \tag{2.4}\]
see, e.g., Stein and Weiss [15, p. 6], it is possible to give an explicit expression for \(p_{\zeta}^{(\alpha)}(t,r,s)\) in the physically important case \(\alpha=1\). One obtains, for \(\zeta\in(-1/2,\infty)\) and \(r,s,t>0\),
\[\begin{split} p_{\zeta}^{(1)}(t,r,s)&=\frac{2\Gamma( \zeta+1)}{\sqrt{\pi}}\cdot\frac{t}{\left(r^{2}+s^{2}+t^{2}\right)^{\zeta+1}} \\ &\quad\times\,_{2}\tilde{F}_{1}\left(\frac{\zeta+1}{2},\frac{ \zeta+2}{2};\zeta+\frac{1}{2};\frac{4r^{2}s^{2}}{\left(r^{2}+s^{2}+t^{2} \right)^{2}}\right),\end{split} \tag{2.5}\]
see, e.g., Betancor, Harboure, Nowak, and Viviani [1, p. 136] for a computation. In particular,
\[p_{0}^{(1)}(t,r,s) =\frac{2t}{\pi\left(r^{2}+s^{2}+t^{2}\right)\left(1-4r^{2}s^{2} \cdot\left(r^{2}+s^{2}+t^{2}\right)^{-2}\right)}, \tag{2.6a}\] \[p_{1}^{(1)}(t,r,s) =\frac{4}{\pi}\,\frac{t}{(r^{2}-s^{2})^{2}+t^{2}(t^{2}+2r^{2}+2s^ {2})}. \tag{2.6b}\]
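The case \(\alpha=1\) also lends itself to a quick numerical cross-check (a sketch only, assuming NumPy/SciPy; the sample points are illustrative): the Levy density (2.4) should satisfy the Laplace-transform identity (1.2), and inserting (1.1) and (2.4) into the subordination formula (1.3) should reproduce the closed form (2.6b).

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

def p2(tau, r, s, zeta):
    # Bessel heat kernel (1.1), written with the exponentially scaled Bessel function
    return ((r * s) ** (0.5 - zeta) / (2 * tau)
            * np.exp(-((r - s) ** 2) / (4 * tau))
            * ive(zeta - 0.5, r * s / (2 * tau)))

def sigma_half(t, tau):
    # Levy density (2.4), i.e. the 1/2-stable subordination density
    return t / (2 * np.sqrt(np.pi) * tau ** 1.5) * np.exp(-t ** 2 / (4 * tau))

# Laplace-transform identity (1.2) for alpha = 1
t, lam = 1.3, 2.7
lhs, _ = quad(lambda tau: np.exp(-tau * lam) * sigma_half(t, tau), 0.0, np.inf)
print(lhs, np.exp(-t * np.sqrt(lam)))   # both approximately 0.118

# subordination formula (1.3) versus the closed form (2.6b) for zeta = 1
def p1_closed(t, r, s):
    return 4 / np.pi * t / ((r**2 - s**2) ** 2 + t**2 * (t**2 + 2 * r**2 + 2 * s**2))

t, r, s = 0.8, 1.2, 2.5
sub, _ = quad(lambda tau: p2(tau, r, s, 1.0) * sigma_half(t, tau), 0.0, np.inf)
print(sub, p1_closed(t, r, s))          # the two values agree
```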
Since explicit expressions for the \(\alpha/2\)-stable subordination density with rational \(\alpha/2\) (see, e.g., Penson and Gorska [11]) are available, one could also compute \(p_{\zeta}^{(\alpha)}(t,r,s)\) for such \(\alpha\), but the resulting expressions are rather involved when \(\alpha\neq 1\). Using bounds for hypergeometric functions, one obtains the following upper and lower bounds
\[p_{\zeta}^{(1)}(t,r,s)\sim_{\zeta}\frac{t}{(r^{2}+s^{2}+t^{2})^{\zeta}[(r-s)^ {2}+t^{2}]}, \tag{2.7}\]
see, e.g., [1, Proposition 6.1]. In the following theorem, we give sharp upper and lower bounds for \(p_{\zeta}^{(\alpha)}(t,r,s)\) for all \(\alpha\in(0,2)\). We remark that the case of \(\alpha=1\) in Theorem 2.1 is resolved in [1, Proposition 6.1]; see also Dziubanski and Preisner [11, Proposition 6] and Betancor, Castro, and Stinga [1]. The case of general \(\alpha\in(0,2)\) seems unknown, although similar estimates were obtained by analogous techniques in various settings; see, e.g., Bogdan, Stos, and Sztonyk [1, Theorem 3.1] for \(\mathbb{R}^{d}\) and unbounded fractals; see also remarks in the proof below.
**Theorem 2.1**.: _Let \(\zeta\in(-1/2,\infty)\). Then, there are \(c,c^{\prime}>0\) such that_
\[p_{\zeta}^{(2)}(t,r,s)\asymp_{\zeta}t^{-\frac{1}{2}}\frac{\exp \left(-\frac{(r-s)^{2}}{ct}\right)}{(rs+t)^{\zeta}} \tag{2.8a}\] \[\asymp_{\zeta}\left(1\wedge\frac{r}{t^{1/2}}\right)^{\zeta}\, \left(1\wedge\frac{s}{t^{1/2}}\right)^{\zeta}\,\left(\frac{1}{rs}\right)^{ \zeta}\cdot t^{-\frac{1}{2}}\cdot\exp\left(-\frac{(r-s)^{2}}{c^{\prime}t} \right), \tag{2.8b}\]
_for all \(r,s,t>0\). Moreover, for all \(\alpha\in(0,2)\) and all \(r,s,t>0\),_
\[p_{\zeta}^{(\alpha)}(t,r,s)\sim_{\zeta,\alpha}\frac{t}{|r-s|^{1+\alpha}(r+s) ^{2\zeta}+t^{\frac{1+\alpha}{\alpha}}(t^{\frac{1}{\alpha}}+r+s)^{2\zeta}}. \tag{2.9}\]
Here and below the notation \(\asymp_{\zeta}\) combines an upper bound and a lower bound similarly as \(\sim_{\zeta}\), but the displayed constants in exponential factors (i.e., the constant \(c\)
in (2.8a) and \(c^{\prime}\) in (2.8b)) may be different in the upper and the lower bounds. Furthermore, as suggested by the notation \(\sim_{\zeta}\), we allow the constants in the exponential factors to depend on \(\zeta\), too. Thus, for instance, (2.8a) is equivalent to the statement that there are \(c_{j,\zeta}\), \(j\in\{1,2,3,4\}\) such that
\[c_{1,\zeta}t^{-\frac{1}{2}}\frac{\exp\left(-\frac{(r-s)^{2}}{c_{2,\zeta}t} \right)}{(rs+t)^{\zeta}}\leq p_{\zeta}^{(2)}(t,r,s)\leq c_{3,\zeta}t^{-\frac{1 }{2}}\frac{\exp\left(-\frac{(r-s)^{2}}{c_{4,\zeta}t}\right)}{(rs+t)^{\zeta}}. \tag{2.10}\]
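For \(\alpha=1\) and \(\zeta=1\), where the closed form (2.6b) is available, the comparability in (2.9) can also be illustrated numerically; in the following sketch (assuming NumPy; the sampling ranges are arbitrary) the ratio of the kernel to the right-hand side of (2.9) stays within fixed positive bounds.

```python
import numpy as np

def p1_zeta1(t, r, s):
    # closed form (2.6b): alpha = 1, zeta = 1
    return 4 / np.pi * t / ((r**2 - s**2) ** 2 + t**2 * (t**2 + 2 * r**2 + 2 * s**2))

def rhs_2_9(t, r, s, zeta=1.0, alpha=1.0):
    # right-hand side of (2.9) without the implicit comparability constant
    return t / (np.abs(r - s) ** (1 + alpha) * (r + s) ** (2 * zeta)
                + t ** ((1 + alpha) / alpha) * (t ** (1 / alpha) + r + s) ** (2 * zeta))

rng = np.random.default_rng(1)
t, r, s = rng.uniform(1e-3, 1e3, size=(3, 200_000))
ratio = p1_zeta1(t, r, s) / rhs_2_9(t, r, s)
print(ratio.min(), ratio.max())   # bounded away from 0 and infinity, as (2.9) asserts
```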
Proof of Theorem 2.1 for \(\alpha=2\).: The two-sided estimates in (2.8a) follow from the following asymptotics [DLMF, (10.30.1), (10.30.4)],
\[I_{\rho}(z)\sim\frac{1}{\Gamma(\rho+1)}\cdot\left(\frac{z}{2}\right)^{\rho} \mathbf{1}_{z\leq 1}+\frac{\mathrm{e}^{z}}{\sqrt{2\pi z}}\mathbf{1}_{z\geq 1}, \quad z\geq 0,\,\rho\notin\{-1,-2,...\}. \tag{2.11}\]
The two-sided estimates in (2.8b) were proved, e.g., in Frank and Merz [13, Theorem 10].
The upper bound in (2.9) for \(\alpha<2\) can be deduced from [11, Corollary 3.8]. Private communication with the authors of that remarkable work indicates that their arguments should, however, be modified to prove the lower bound in (2.9). So, for the sake of completeness, in Appendix A below, we give a self-contained proof of the two-sided estimates in (2.9) for \(\alpha\in(0,2)\).
Using the pointwise bounds in Theorem 2.1, we show that \(p_{\zeta}^{(\alpha)}\) is a strongly continuous contraction semigroup on \(L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\).
**Proposition 2.2**.: _Let \(\zeta\in(-1/2,\infty)\) and \(\alpha\in(0,2]\). Then, \(\{p_{\zeta}^{(\alpha)}(t,\cdot,\cdot)\}_{t>0}\) is a strongly continuous contraction semigroup on \(L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\)._
Proof.: By symmetry, the normalization (2.1), and a Schur test, \(p_{\zeta}^{(\alpha)}(t,\cdot,\cdot)\) defines a contraction on \(L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\) for every \(t>0\). To prove the strong continuity of \(p_{\zeta}^{(\alpha)}(t,\cdot,\cdot)\), it suffices, by the density of \(C_{c}^{\infty}(\mathbb{R}_{+})\) in \(L^{2}(\mathbb{R}_{+},r^{2\zeta}dr)\), to show
\[\lim_{t\searrow 0}\int_{0}^{\infty}dr\,r^{2\zeta}\left|\int_{0}^{\infty}ds\,s^{2 \zeta}p_{\zeta}^{(\alpha)}(t,r,s)\varphi(s)-\varphi(r)\right|^{2}=0 \tag{2.12}\]
for every non-negative function \(\varphi\in C_{c}^{\infty}(\mathbb{R}_{+})\). Indeed, this follows from the semigroup property, \(|\varphi(r)|\lesssim_{\varphi}\mathbf{1}_{r<c}\) for some \(c=c(\varphi)>0\), the bound
\[\int_{0}^{\infty}dr\,r^{2\zeta}\left|\int_{0}^{\infty}ds\,s^{2\zeta}p_{\zeta} ^{(\alpha)}(t,r,s)\varphi(s)\right|^{2}\lesssim\int_{0}^{c}dr\int_{0}^{c}ds\, (rs)^{2\zeta}\,p_{\zeta}^{(\alpha)}(2t,r,s), \tag{2.13}\]
the bounds for \(p_{\zeta}^{(\alpha)}(t,r,s)\) in (2.8), and the dominated convergence theorem.
## 3. 3G inequality for \(\alpha\in(0,2)\)
In our forthcoming work [1], we use the following 3G inequality for \(p_{\zeta}^{(\alpha)}(t,r,s)\). The result is motivated by, and similar to, that of Bogdan and Jakubowski [1, (7)-(9)].
**Theorem 3.1**.: _Let \(\zeta\in[0,\infty)\), \(\alpha\in(0,2)\). Then, for all \(r,s,z,t,\tau>0\), we have_
\[\min\left\{\!\left(\!\frac{r+z}{r+s+z}\!\right)^{2\zeta}p_{\zeta}^{(\alpha)}(t,r,z),\left(\!\frac{s+z}{r+s+z}\!\right)^{2\zeta}p_{\zeta}^{(\alpha)}(\tau,z,s) \!\right\}\!\lesssim_{\zeta,\alpha}\!p_{\zeta}^{(\alpha)}(t+\tau,r,s), \tag{3.1}\]
_and_
\[p_{\zeta}^{(\alpha)}(t,r,z)\cdot p_{\zeta}^{(\alpha)}(\tau,z,s) \tag{3.2}\] \[\lesssim_{\zeta,\alpha}p_{\zeta}^{(\alpha)}(t+\tau,r,s)\left[ \!\left(\!\frac{r+s+z}{s+z}\!\right)^{2\zeta}p_{\zeta}^{(\alpha)}(t,r,z)+ \left(\!\frac{r+s+z}{r+z}\!\right)^{2\zeta}p_{\zeta}^{(\alpha)}(\tau,z,s)\! \right]\!.\]
_Remarks 3.2_.: (1) Weighted 3G inequalities for Green's functions (resolvent kernels) and their application to Schrodinger perturbations were studied, e.g., by Hansen [10], with quasi-metric interpretations.
(2) The weights in (3.1)-(3.2) are independent of \(t\) and \(\tau\), but they involve all three spatial variables \(r,s,z\), which is slightly incompatible with the setting of [10].
(3) As observed in [1, p. 182], there is no 3G inequality for the Brownian motion in \(\mathbb{R}^{d}\). However, there is a substitute, called 4G inequality, in Bogdan and Szczypkowski [1, Theorem 1.3].
(4) We do not know of "suitable" substitutes of the statements in Theorem 3.1 for \(\zeta\in(-1/2,0)\).
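Before turning to the proof, we note that (3.1) can be probed numerically in the explicit case \(\alpha=1\), \(\zeta=1\) via the closed form (2.6b); in the sketch below (assuming NumPy; the sampling ranges are arbitrary) the ratio of the left-hand side of (3.1) to \(p_{\zeta}^{(\alpha)}(t+\tau,r,s)\) remains bounded, as the theorem asserts.

```python
import numpy as np

def p1(t, r, s):
    # closed form (2.6b): alpha = 1, zeta = 1
    return 4 / np.pi * t / ((r**2 - s**2) ** 2 + t**2 * (t**2 + 2 * r**2 + 2 * s**2))

zeta = 1.0
rng = np.random.default_rng(2)
r, s, z, t, tau = rng.uniform(1e-2, 1e2, size=(5, 200_000))

# left-hand side of (3.1): minimum of the two weighted kernels
lhs = np.minimum(((r + z) / (r + s + z)) ** (2 * zeta) * p1(t, r, z),
                 ((s + z) / (r + s + z)) ** (2 * zeta) * p1(tau, z, s))
ratio = lhs / p1(t + tau, r, s)
print(ratio.max())   # stays below a fixed constant, consistent with (3.1)
```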
Proof of Theorem 3.1.: Define and observe
\[f(r,s,z):=\left(\frac{s+z}{r+s}\right)^{2\zeta}\mathbf{1}_{\{r>s\lor z\}}+ \mathbf{1}_{\{r<s\lor z\}}\sim\left(\frac{s+z}{r+s+z}\right)^{2\zeta}. \tag{3.3}\]
To prove (3.1), without loss of generality, we assume \(r>s\). Then, \(f(r,s,z)=\left(\frac{s+z}{r+s}\right)^{2\zeta}\mathbf{1}_{\{r>z\}}+\mathbf{1} _{\{r<z\}}\) and \(f(s,r,z)=1\). By (2.9),
\[p_{\zeta}^{(\alpha)}(t,r,z)\sim\min\left\{\frac{t}{|r-z|^{1+\alpha}(r+z)^{2 \zeta}},\frac{1}{t^{1/\alpha}(t^{1/\alpha}+r+z)^{2\zeta}}\right\}. \tag{3.4}\]
Thus, the left-hand side of (3.1) is
\[\begin{split}&\left(f(s,r,z)\cdot p_{\zeta}^{(\alpha)}(t,r,z) \right)\wedge\left(f(r,s,z)\cdot p_{\zeta}^{(\alpha)}(\tau,z,s)\right)\\ &\sim\min\left\{\!\frac{t}{|r-z|^{1+\alpha}(r+z)^{2\zeta}},\frac {\tau\cdot f(r,s,z)}{|s-z|^{1+\alpha}(s+z)^{2\zeta}},\frac{1}{t^{\frac{1}{ \alpha}}(t^{\frac{1}{\alpha}}+r+z)^{2\zeta}},\frac{f(r,s,z)}{\tau^{\frac{1}{ \alpha}}(\tau^{\frac{1}{\alpha}}+s+z)^{2\zeta}}\right\}.\end{split} \tag{3.5}\]
We begin with estimating the minimum of the first two terms on the right-hand side of (3.5). Since \((s+z)^{2\zeta}\mathbf{1}_{z>r>s}\geq r^{2\zeta}\mathbf{1}_{z>r>s}\), we get
\[\begin{split}&\min\left\{\frac{t}{|r-z|^{1+\alpha}(r+z)^{2 \zeta}},\frac{\tau f(r,s,z)}{|s-z|^{1+\alpha}(s+z)^{2\zeta}}\right\}\\ &\lesssim\frac{t+\tau}{r^{2\zeta}}\min\left\{\frac{1}{|r-z|^{1+ \alpha}},\frac{1}{|s-z|^{1+\alpha}}\right\}\\ &\lesssim\frac{t+\tau}{r^{2\zeta}\cdot|r-s|^{1+\alpha}}\sim\frac {t+\tau}{|r-s|^{1+\alpha}(r+s)^{2\zeta}},\end{split} \tag{3.6}\]
which is half of our desired estimate. We now consider the minimum of the last two terms in (3.5) and claim
\[\begin{split}\min&\left\{\frac{1}{t^{1/\alpha}(t^{1/ \alpha}+r)^{2\zeta}},\frac{f(r,s,z)}{\tau^{1/\alpha}(\tau^{1/\alpha}+s+z)^{2 \zeta}}\right\}\\ &\lesssim\frac{1}{t^{1/\alpha}(t^{1/\alpha}+r)^{2\zeta}+\tau^{1/ \alpha}(\tau^{1/\alpha}+r)^{2\zeta}}.\end{split} \tag{3.7}\]
Suppose (3.7) is true. Then, by
\[\begin{split}(t+\tau)^{1/\alpha}((t+\tau)^{1/\alpha}+r+s)^{2\zeta }&\sim(t+\tau)^{1/\alpha}((t+\tau)^{1/\alpha}+r)^{2\zeta}\\ &\lesssim(t+\tau)^{(1+2\zeta)/\alpha}+(t+\tau)^{1/\alpha}r^{2 \zeta}\\ &\lesssim t^{1/\alpha}(t^{2\zeta/\alpha}+r^{2\zeta})+\tau^{1/ \alpha}(\tau^{2\zeta/\alpha}+r^{2\zeta})\\ &\sim t^{1/\alpha}(t^{1/\alpha}+r)^{2\zeta}+\tau^{1/\alpha}(\tau ^{1/\alpha}+r)^{2\zeta},\end{split} \tag{3.8}\]
we obtain
\[\min\left\{\frac{1}{t^{\frac{1}{\alpha}}(t^{\frac{1}{\alpha}}+r+z)^{2\zeta}}, \frac{f(r,s,z)}{\tau^{\frac{1}{\alpha}}(\tau^{\frac{1}{\alpha}}+s+z)^{2\zeta} }\right\}\lesssim\frac{1}{(t+\tau)^{\frac{1}{\alpha}}((t+\tau)^{\frac{1}{ \alpha}}+r+s)^{2\zeta}}, \tag{3.9}\]
which, together with (3.6), completes the proof of (3.1). It remains to prove (3.7). We distinguish between \(z<r\) and \(z>r\). In the latter case, we have
\[\begin{split}\min&\left\{\frac{1}{t^{\frac{1}{ \alpha}}(t^{\frac{1}{\alpha}}+r+z)^{2\zeta}},\frac{1}{\tau^{\frac{1}{\alpha}}( \tau^{\frac{1}{\alpha}}+s+z)^{2\zeta}}\right\}\\ &\leq\min\left\{\frac{1}{t^{\frac{1}{\alpha}}(t^{\frac{1}{ \alpha}}+r)^{2\zeta}},\frac{1}{\tau^{\frac{1}{\alpha}}(\tau^{\frac{1}{\alpha}} +r)^{2\zeta}}\right\},\end{split} \tag{3.10}\]
as desired. On the other hand, if \(z<r\), then we have, using \(s\lor z<r\),
\[\frac{(s+z)(\tau^{1/\alpha}+r)}{(r+s)(\tau^{1/\alpha}+s+z)}\sim\frac{(s+z)\tau ^{1/\alpha}+r(s+z)}{r\tau^{1/\alpha}+r(s+z)}\leq 3,\]
and so
\[\begin{split}\min&\left\{\frac{1}{t^{1/\alpha}(t^{1/ \alpha}+r+z)^{2\zeta}},\frac{[(s+z)/(r+s)]^{2\zeta}}{\tau^{1/\alpha}(\tau^{1/ \alpha}+s+z)^{2\zeta}}\right\}\\ &\lesssim\min\left\{\frac{1}{t^{1/\alpha}(t^{1/\alpha}+r)^{2\zeta }},\frac{1}{\tau^{1/\alpha}(\tau^{1/\alpha}+r)^{2\zeta}}\right\}\\ &\sim\frac{1}{t^{1/\alpha}(t^{1/\alpha}+r)^{2\zeta}+\tau^{1/ \alpha}(\tau^{1/\alpha}+r)^{2\zeta}},\end{split} \tag{3.11}\]
which shows (3.7) for \(z<r\). Plugging (3.6) and (3.9) into (3.5) yields
\[\begin{split}&\Big{(}f(s,r,z)\cdot p_{\zeta}^{(\alpha)}(t,r,z) \Big{)}\wedge\Big{(}f(r,s,z)\cdot p_{\zeta}^{(\alpha)}(\tau,z,s)\Big{)}\\ &\lesssim\min\bigg{\{}\frac{t+\tau}{|r-s|^{1+\alpha}(r+s)^{2 \zeta}},\frac{1}{(t+\tau)^{1/\alpha}((t+\tau)^{1/\alpha}+r+s)^{2\zeta}}\bigg{\}} \\ &\sim p_{\zeta}^{(\alpha)}(t+\tau,r,s),\end{split} \tag{3.12}\]
as claimed. Estimate (3.2) follows from (3.1).
## 4. Comparability results for \(p_{\zeta}^{(\alpha)}\)
The following comparability results are crucial for our forthcoming work [1]. As we will see, they follow from the bounds in Theorem 2.1.
**Theorem 4.1**.: _Let \(\zeta\in(-1/2,\infty)\) and \(\alpha\in(0,2]\)._
1. _Let_ \(z,s>0\)_,_ \(0<C\leq 1\)_, and_ \(\tau\in[C,C^{-1}]\)_. Then, there is_ \(c_{j}=c_{j}(\zeta,C)\)_,_ \(j\in\{1,2\}\) _with_ \[\begin{split} p_{\zeta}^{(2)}(1,c_{1}z,c_{1}s)\lesssim_{C,\zeta }p_{\zeta}^{(2)}(\tau,z,s)\lesssim_{C,\zeta}p_{\zeta}^{(2)}(1,c_{2}z,c_{2}s), \\ p_{\zeta}^{(\alpha)}(\tau,z,s)\sim_{C,\zeta,\alpha}p_{\zeta}^{( \alpha)}(1,z,s),\quad\alpha<2.\end{split}\] (4.1) _In particular, for_ \(\alpha\in(0,2)\) _and_ \(\tau,z,s,c>0\)_, one has_ \[\begin{split} p_{\zeta}^{(\alpha)}(\tau,cz,cs)\sim_{\zeta, \alpha,c}p_{\zeta}^{(\alpha)}(\tau,z,s),\\ p_{\zeta}^{(\alpha)}(c\tau,z,s)\sim_{\zeta,\alpha,c}p_{\zeta}^{( \alpha)}(\tau,z,s),\quad\alpha<2.\end{split}\] (4.2)
2. _Let_ \(C,\tau>0\) _and_ \(0<z\leq s/2<\infty\)_. Then, there is_ \(c=c(\zeta,C)\) _with_ \[p_{\zeta}^{(\alpha)}(\tau,z,s)\lesssim_{C,\zeta,\alpha}p_{\zeta}^{(\alpha)}(\tau,c,cs)\mathbf{1}_{\{\tau>C\}}+\left(\tau^{-\frac{1}{2}}\frac{\mathrm{e}^{-cs^{2}/\tau}}{(\tau+s^{2})^{\zeta}}\mathbf{1}_{\alpha=2}+\frac{\tau}{s^{2\zeta+1+\alpha}+\tau^{\frac{2\zeta+1+\alpha}{\alpha}}}\mathbf{1}_{\alpha\in(0,2)}\right)\mathbf{1}_{\{\tau<C\}},\] (4.3a) \[p_{\zeta}^{(\alpha)}(\tau,z,s)\lesssim_{\zeta,\alpha}\tau^{-\frac{1}{2}}\frac{\mathrm{e}^{-cs^{2}/\tau}}{(\tau+s^{2})^{\zeta}}\mathbf{1}_{\alpha=2}+\frac{\tau}{s^{2\zeta+1+\alpha}+\tau^{(2\zeta+1+\alpha)/\alpha}}\mathbf{1}_{\alpha\in(0,2)},\] (4.3b) \[p_{\zeta}^{(\alpha)}(\tau,z,s)\lesssim_{\zeta,\alpha}s^{-(2\zeta+1)}.\] (4.3c)
3. _Let_ \(0<\tau\leq 1\)_,_ \(0<z\leq s/2\)_, and_ \(s\geq C>0\)_. Then, there is_ \(c=c(\zeta,C)\) _with_ \[p_{\zeta}^{(\alpha)}(\tau,z,s)\lesssim_{C,\zeta,\alpha}p_{\zeta}^{(\alpha)}(1, c,cs).\] (4.4)
4. _Let_ \(0<r\leq s\)_. Then, there is_ \(c=c(\zeta,C)\) _with_ \[p_{\zeta}^{(\alpha)}(1,1,s)\lesssim_{\zeta,\alpha}p_{\zeta}^{(\alpha)}(1,cr, cs).\] (4.5) _In particular, for all_ \(r,s>0\)_,_ \[\min\{p_{\zeta}^{(\alpha)}(1,1,r),p_{\zeta}^{(\alpha)}(1,1,s)\}\lesssim_{\zeta,\alpha}p_{\zeta}^{(\alpha)}(1,cr,cs).\] (4.6)
_._
5. _Let_ \(r,s,z,t>0\) _with_ \(|z-s|>|r-s|/2\)_. Then, there is_ \(c=c(\zeta,C)\) _with_ \[p_{\zeta}^{(\alpha)}(t,z,s)\lesssim_{\zeta,\alpha}p_{\zeta}^{(\alpha)}(t,cr, cs).\] (4.7)
The constants \(c\) in the arguments of functions in Theorem 4.1 may change from place to place. Note that in the above estimates the point \(z=1\) is a natural reference point for the heat kernel \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) when \(s\gg z\) and \(\tau\sim 1\) by the spatial homogeneity of \(p_{\zeta}^{(\alpha)}\). This is in contrast to the analysis of the heat kernel \(\mathrm{e}^{-\tau(-\Delta)^{\alpha/2}}(z,y)\) in \(\mathbb{R}^{d}\), where \(z=0\) is a natural reference point when \(\tau\sim 1\) and \(|y|\gg|z|\) because of the additional translation invariance in this setting. We give some interpretations of the bounds in Theorem 4.1 after the proof.
Proof.:
1. The estimates follow from (2.8) for \(\alpha=2\) and from (2.9) if \(\alpha<2\).
2. We start with \(\alpha=2\). We first consider (4.3a). By (2.8a), \[p_{\zeta}^{(2)}(\tau,z,s)\asymp\tau^{-\frac{1}{2}}\frac{\exp(-c(s-z)^{2}/\tau )}{(\tau+sz)^{\zeta}}\asymp\tau^{-\frac{1}{2}}\frac{\exp(-cs^{2}/\tau)}{(\tau+ sz)^{\zeta}}.\] (4.8) If \(\tau>C>0\), then, by \(2s/\tau\leq(1+s^{2})/\tau\lesssim_{C}1+s^{2}/\tau\), \[\frac{(\tau+s)^{\zeta}}{(\tau+sz)^{\zeta}}\mathrm{e}^{-cs^{2}/\tau} \leq\left(1+\frac{s}{\tau}\right)^{\zeta}\mathrm{e}^{-cs^{2}/\tau }\lesssim_{C}\left(1+\frac{s^{2}}{\tau}\right)^{\zeta}\mathrm{e}^{-cs^{2}/ \tau}\lesssim_{\zeta}\mathrm{e}^{-cs^{2}/\tau}\] (4.9) \[\sim_{C}\mathrm{e}^{-c(s-1)^{2}/\tau},\] which proves (4.3a) for \(\tau>C>0\). Similarly, we have for all \(\tau>0\), \[\frac{(\tau+s^{2})^{\zeta}}{(\tau+sz)^{\zeta}}\mathrm{e}^{-cs^{2}/\tau}\leq(1 +s^{2}/\tau)^{\zeta}\mathrm{e}^{-cs^{2}/\tau}\lesssim\mathrm{e}^{-cs^{2}/\tau},\] (4.10) which yields (4.3b) and the second part of (4.3a), where \(\tau<C\). Estimate (4.3c) follows from \[p_{\zeta}^{(2)}(\tau,z,s)\lesssim\left(\frac{s^{2}}{\tau}\right)^{\zeta+\frac {1}{2}}\exp(-cs^{2}/\tau)\cdot s^{-(2\zeta+1)}\lesssim s^{-(2\zeta+1)}.\] (4.11) This concludes the proof of (4.3a)-(4.3c) for \(\alpha=2\). Now we verify (4.3) for \(\alpha<2\). We start with (4.3a). By (2.9), \[p_{\zeta}^{(\alpha)}(\tau,z,s)\sim\frac{\tau}{s^{2\zeta+1+\alpha}+\tau^{1+1/ \alpha}(\tau^{1/\alpha}+s)^{2\zeta}}.\] (4.12) Thus, (4.3a) for \(\tau>C>0\) follows from \[\frac{|s-1|^{1+\alpha}(s+1)^{2\zeta}}{s^{2\zeta+1+\alpha}+\tau^{1+ 1/\alpha}(\tau^{1/\alpha}+s)^{2\zeta}}+\frac{\tau^{1+1/\alpha}(\tau^{1/\alpha} +s+1)^{2\zeta}}{s^{2\zeta+1+\alpha}+\tau^{1+1/\alpha}(\tau^{1/\alpha}+s)^{2 \zeta}}\] (4.13) \[\lesssim\frac{s^{1+\alpha}(1+s)^{2\zeta}+(s+1)^{2\zeta}}{s^{2 \zeta+1+\alpha}+1}+\frac{1}{(\tau^{1/\alpha}+s)^{2\zeta}}\] \[\lesssim_{C,\zeta}1.\]
Similarly, for all \(\tau>0\), \[p_{\zeta}^{(\alpha)}(\tau,z,s)\lesssim\frac{\tau}{s^{2\zeta+1+\alpha}+\tau^{\frac {2\zeta+1+\alpha}{\alpha}}},\] (4.14) which yields (4.3b) and the second part of (4.3a). Estimate (4.3c) follows from \[\begin{split} p_{\zeta}^{(\alpha)}(\tau,z,s)& \lesssim\frac{\tau}{s^{2\zeta+1+\alpha}+\tau^{(1+\alpha)/\alpha} \cdot s^{2\zeta}}\\ &\lesssim\frac{\tau}{s^{2\zeta+1+\alpha}}\mathbf{1}_{\{\tau<s^{ \alpha}\}}+\frac{\tau}{\tau^{(1+\alpha)/\alpha}\cdot s^{2\zeta}}\mathbf{1}_{\{ \tau>s^{\alpha}\}}\lesssim s^{-(2\zeta+1)}.\end{split}\] (4.15) This concludes the proof of (4.3a)-(4.3c) for \(\alpha<2\).
3. To prove (4.4), we begin with \(\alpha=2\). By (2.8a), we have, for \(\tau\in(0,1]\), \(0<z\leq s/2\), and \(s\geq C>0\), \[\begin{split} p_{\zeta}^{(2)}(\tau,z,s)& \asymp\tau^{-\frac{1}{2}}\frac{\exp\left(-\frac{c(z-s)^{2}}{\tau} \right)}{(zs+\tau)^{\zeta}}\lesssim\frac{\exp(-cs^{2}/\tau)}{\tau^{\zeta+1/2} }\cdot\frac{s^{2\zeta+1}}{s^{2\zeta+1}}\\ &\lesssim_{C}\frac{(s^{2}/\tau)^{\zeta+1/2}\exp(-cs^{2}/\tau)}{ s^{\zeta}}\lesssim_{C}\frac{\mathrm{e}^{-cs^{2}/\tau}}{(s+1)^{\zeta}}\lesssim \frac{\mathrm{e}^{-c(s-1)^{2}}}{(s+1)^{\zeta}}.\end{split}\] (4.16) This concludes the proof of (4.4) for \(\alpha=2\). If \(\alpha<2\), we have, for \(\tau\in(0,1]\), \(0<z\leq s/2\), and \(s\geq C>0\), \[p_{\zeta}^{(\alpha)}(\tau,z,s)\sim_{\zeta}\frac{\tau}{s^{2\zeta+1+\alpha}+ \tau^{1+1/\alpha}(\tau^{1/\alpha}+s)^{2\zeta}}.\] (4.17) Thus, (4.4) follows from \[\frac{\tau\left(|s-1|^{1+\alpha}(s+1)^{2\zeta}+(1+s)^{2\zeta}\right)}{s^{2 \zeta+1+\alpha}+\tau^{1+1/\alpha}(\tau^{1/\alpha}+s)^{2\zeta}}\lesssim\frac{( 1+s)^{2\zeta}(1+s^{1+\alpha})}{s^{2\zeta+1+\alpha}}\lesssim_{C}1.\] (4.18) This concludes the proof of (4.4) for \(\alpha<2\).
4. We come to (4.5), with \(0<r\leq s\). We first treat \(\alpha=2\). By (2.8a), \[\begin{split} p_{\zeta}^{(2)}(1,1,s)&\asymp_{ \zeta}\frac{\mathrm{e}^{-c(s-1)^{2}}}{(s+1)^{\zeta}}\lesssim\frac{(1+rs)^{ \zeta}\mathrm{e}^{-cs^{2}}}{(s+1)^{\zeta}(1+rs)^{\zeta}}\leq\frac{(1+s^{2})^{ \zeta}\mathrm{e}^{-cs^{2}}}{(1+rs)^{\zeta}}\lesssim\frac{\mathrm{e}^{-cs^{2}}} {(1+rs)^{\zeta}}\\ &\lesssim\frac{\mathrm{e}^{-c(r-s)^{2}}}{(rs+1)^{\zeta}}\asymp_{ \zeta}p_{\zeta}^{(2)}(1,cr,cs),\end{split}\] (4.19) where we used \(2s^{2}\geq(r-s)^{2}\). This concludes the proof of (4.5) for \(\alpha=2\). If \(\alpha<2\), then, by (2.9), \[p_{\zeta}^{(\alpha)}(1,1,s)\sim_{\zeta}\frac{1}{|s-1|^{1+\alpha}(1+s)^{2\zeta} +(1+s)^{2\zeta}}.\] (4.20) Since \(2s^{2}\geq(r-s)^{2}\), we have \[\begin{split}&\frac{|r-s|^{1+\alpha}(r+s)^{2\zeta}+(1+r+s)^{2 \zeta}}{|s-1|^{1+\alpha}(1+s)^{2\zeta}+(1+s)^{2\zeta}}\\ &\lesssim\frac{s^{2\zeta+1+\alpha}+(1+s)^{2\zeta}}{|s-1|^{1+ \alpha}(1+s)^{2\zeta}+(1+s)^{2\zeta}}\lesssim 1,\end{split}\] (4.21)
which concludes the proof of (4.5) for \(\alpha<2\). To prove (4.6), it suffices, by symmetry, to assume \(s\geq r>0\). Then, the claim follows from (4.5) since \(\min\{p_{\zeta}^{(\alpha)}(1,1,r),p_{\zeta}^{(\alpha)}(1,1,s)\}\leq p_{\zeta}^{( \alpha)}(1,1,s)\).
5. We come to (4.7). If \(z>r\), the estimate follows immediately from (2.8a) if \(\alpha=2\) and from (2.9) if \(\alpha<2\). So suppose \(z<r\). We distinguish now between \(z>s\) and \(z<s\) and start with the former case. Then the assumption \(|z-s|>|r-s|/2\) implies \[z=z-s+s\geq s+\frac{|r-s|}{2}=\frac{r+s}{2}\gtrsim r.\] (4.22) Thus, \(z\gtrsim r\) and we can again use (2.8a) for \(\alpha=2\) and (2.9) for \(\alpha<2\) to conclude the estimate in this case. Thus, we are left to treat \(z<s\). We first let \(\alpha=2\) and distinguish between the three subcases \(s>2r\), \(s<r/2\), and \(s\in(r/2,2r)\). To treat \(s>2r\) and \(s<r/2\), we estimate \[\begin{split}\frac{\exp\left(-\frac{c(z-s)^{2}}{t}\right)}{(zs+t )^{\zeta}}&\leq\frac{(rs/t+1)^{\zeta}}{(zs/t+1)^{\zeta}}\cdot\frac{ \exp\left(-\frac{c(r-s)^{2}}{4t}\right)}{(rs+t)^{\zeta}}\\ &\leq\left(1+\frac{rs}{t}\right)^{\zeta}\cdot\exp\left(-\frac{c(r -s)^{2}}{8t}\right)\cdot\frac{\exp\left(-\frac{c(r-s)^{2}}{8t}\right)}{(rs+t) ^{\zeta}}\\ &\lesssim_{\zeta}\frac{\exp\left(-\frac{c(r-s)^{2}}{8t}\right)}{( rs+t)^{\zeta}},\end{split}\] (4.23) as desired. It remains to treat the case \(s\in(r/2,2r)\) and \(z<r\lor s\). We distinguish between \(s>2z\) and \(s\in(z,2z)\). If \(s>2z\), then we can argue similarly as before and obtain \[\begin{split}\frac{\exp\left(-\frac{c(z-s)^{2}}{t}\right)}{(zs+t )^{\zeta}}&\leq\left(1+\frac{rs}{t}\right)^{\zeta}\cdot\exp\left( -\frac{c(z-s)^{2}}{8t}\right)\cdot\frac{\exp\left(-\frac{c(r-s)^{2}}{8t}\right) }{(rs+t)^{\zeta}}\\ &\leq\left(1+\frac{2s^{2}}{t}\right)^{\zeta}\cdot\exp\left(-\frac {cs^{2}}{8t}\right)\cdot\frac{\exp\left(-\frac{c(r-s)^{2}}{8t}\right)}{(rs+t) ^{\zeta}}\\ &\lesssim_{\zeta}\frac{\exp\left(-\frac{c(r-s)^{2}}{8t}\right)}{( rs+t)^{\zeta}}.\end{split}\] (4.24) Finally, for \(s\in(r/2,2r)\), \(z<r\), and \(s\in(z,2z)\), we use \(zs>s^{2}/2>rs/4\) to get the desired estimate in the denominator in (2.8a). This completes the proof of (4.7) for \(\alpha=2\). Now we deal with \(\alpha<2\) and \(z<r\wedge s\). Observe that in (2.9) it suffices to estimate \(z+s\gtrsim s+r\) and \(z^{2}+s^{2}\gtrsim r^{2}+s^{2}\). These estimates follow immediately if \(s>r\). So suppose \(z<s<r\). Then, by the assumption \(|z-s|=s-z>|r-s|/2=(r-s)/2\), we get \(3s/2>z+r/2>r/2\). This completes the proof of (4.7).
The proof of Theorem 4.1 is concluded.
We close with some interpretations of the bounds in Theorem 4.1. The numbering of the following remarks refers to the numbering of the bounds in Theorem 4.1.
_Remarks_ 4.2.: (1) For fixed time scales \(\tau\sim 1\), the heat kernels \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) and \(p_{\zeta}^{(\alpha)}(1,r,s)\) are comparable for all \(z,s>0\).
(2) Spatial or temporal dilations by factors of unit order are negligible when \(\alpha<2\).
(3) Suppose \(s\geq 2z\). For large times \(\tau>C\), the heat kernel \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) can be replaced with \(p_{\zeta}^{(\alpha)}(\tau,1,s)\), even if \(z\) is small, thanks to the lower-boundedness of \(\tau\) and the large distance between \(z\) and \(s\). For small times \(\tau<C\), the distance \(|s-z|\) can still be replaced with \(s\). However, for small \(z\) and \(\tau\), we cannot compare \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) with \(p_{\zeta}^{(\alpha)}(\tau,1,s)\). At least when \(\alpha<2\), we can compare \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) with \(p_{\zeta}^{(\alpha)}(\tau,0,s)\).
(4) For small times \(\tau\in(0,1]\) and locations \(s\geq 2z\), which are strictly away from zero, the heat kernel \(p_{\zeta}^{(\alpha)}(\tau,z,s)\) can be estimated from above by \(p_{\zeta}^{(\alpha)}(1,1,s)\). This is because the lower-boundedness of \(s\) prevents the heat kernel from blowing up for small times and small \(z\), and the large separation between \(s\) and \(z\) allows one to replace \(|z-s|\) and \(z+s\) by \(s\).
(5) For fixed time \(\tau=1\) and location \(s>0\), it is more likely to go from \(s\) to \(r\leq s\) than from \(s\) to \(1\). This is plausible for \(r>1\), while, if \(r\leq 1\), it does not really matter if we go to \(1\) or to \(r\) within a unit time step.
(6) The probability for going from \(s\) to \(z\) is bounded by that for going from \(s\) to \(r\) when \(|z-s|\) is greater than \(|r-s|\). Moreover, the (killing or mass creating) effect from the boundary, i.e., the origin, is negligible.
## Appendix A Proof of Theorem 2.1 for \(\alpha\in(0,2)\)
Recall our claim (2.9). Namely, if \(\alpha\in(0,2)\) and \(\zeta>-1/2\), then
\[p_{\zeta}^{(\alpha)}(t,r,s)\sim_{\zeta,\alpha}\frac{t}{|r-s|^{1+\alpha}(r+s) ^{2\zeta}+t^{\frac{1+\alpha}{\alpha}}(t^{\frac{1}{\alpha}}+r+s)^{2\zeta}} \quad\text{ for }r,s,t>0.\]
By the scaling (2.3), \(t=1\) suffices. We split the proof into several steps.
### Auxiliary bounds
We first prove the following, rather bulky, bounds, namely
\[\begin{split} p_{\zeta}^{(\alpha)}(1,r,s)\sim_{\zeta,\alpha}& \ \mathbf{1}_{rs<1}\mathbf{1}_{(r-s)^{2}<1}\\ &+(rs)^{-\zeta}\mathbf{1}_{(r-s)^{2}<1<rs}\\ &+|r-s|^{-(2\zeta+1+\alpha)}\mathbf{1}_{rs<1<(r-s)^{2}}\\ &+\mathbf{1}_{rs>1}\mathbf{1}_{(r-s)^{2}>1}\left(\frac{\mathbf{ 1}_{rs<(r-s)^{2}}}{|r-s|^{2\zeta+1+\alpha}}+\frac{\mathbf{1}_{rs>(r-s)^{2}}}{ (rs)^{\zeta}|r-s|^{1+\alpha}}\right).\end{split}\] (A.1)
To prove (A.1), we use the subordinator bounds
\[\sigma_{1}^{(\alpha/2)}(\tau)\sim_{\alpha}\frac{\exp\left(-C(\alpha)\tau^{-c_ {1}}\right)}{\tau^{c_{2}}}\mathbf{1}_{\tau<1}+\tau^{-1-\alpha/2}\mathbf{1}_{ \tau>1},\] (A.2)
with \(C(\alpha)>0\), \(c_{1}=c_{1}(\alpha)=\alpha/(2-\alpha)\in(0,\infty)\), and \(c_{2}=c_{2}(\alpha)=(2-\alpha/2)/(2-\alpha)\in(1,\infty)\); see, e.g., [1, Proposition B.1]. Since \(\mathbb{R}_{+}\ni x\mapsto\mathrm{e}^{-1/x^{s}}\) vanishes at zero faster than any polynomial whenever \(s>0\), there are \(C_{1}(\alpha),C_{2}(\alpha)>0\) with \(C_{1}(\alpha)>C_{2}(\alpha)\) such that
\[\frac{\exp\left(-C_{1}(\alpha)\tau^{-c_{1}}\right)}{\tau^{1+\alpha/2}}\lesssim _{\alpha}\sigma_{1}^{(\alpha/2)}(\tau)\lesssim_{\alpha}\frac{\exp\left(-C_{2 }(\alpha)\tau^{-c_{1}}\right)}{\tau^{1+\alpha/2}}.\] (A.3)
From the following computations, it will transpire that the precise rate of the exponential decay of \(\sigma_{1}^{(\alpha/2)}(\tau)\) at \(\tau=0\) is irrelevant. This motivates us to not distinguish between \(C_{1}(\alpha)\) and \(C_{2}(\alpha)\) in (A.3) in the following and simply write
\[\sigma_{1}^{(\alpha/2)}(\tau)\asymp_{\alpha}\frac{\exp\left(-C(\alpha)\tau^{- c_{1}}\right)}{\tau^{1+\alpha/2}}.\] (A.4)
We recall that the notation \(\asymp\) means the same as \(\sim\), but constants in the argument of the exponential function (like \(C(\alpha)\) in (A.4)) may be different in the upper and lower bounds. For simplicity, in the following, we will not explicitly indicate any parameter dependence of constants appearing as prefactors or in exponentials anymore, so we will write, e.g., \(c\) for \(c_{\zeta}\). Then, Formulae (1.2), (2.8), and (A.4) yield, for some \(c,c_{1}>0\),
\[p_{\zeta}^{(\alpha)}(1,r,s)\asymp\int_{0}^{\infty}\frac{d\tau}{\tau}\tau^{- \frac{1+\alpha}{2}}\left[\frac{1}{\tau^{\zeta}}\mathbf{1}_{rs\leq\tau}+\frac{ \mathbf{1}_{rs\geq\tau}}{(rs)^{\zeta}}\right]\exp\left(-c\left(\frac{(r-s)^{2} }{\tau}+\frac{1}{\tau^{c_{1}}}\right)\right).\] (A.5)
We now estimate the integral on the right of (A.5). To this end, we distinguish between four cases based on \(rs\lessgtr 1\) and \(|r-s|\lessgtr 1\). Although the notation will not emphasize it, we repeat once more that the following estimates are not uniform in \(\zeta\) or \(\alpha\).
#### a.1.1. Case \(rs\vee(r-s)^{2}<1\)
We show that \(p_{\zeta}^{(\alpha)}(1,r,s)\sim 1\). We consider the first summand in (A.5) and estimate
\[\begin{split}&\int_{rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\zeta- \frac{1+\alpha}{2}}\mathrm{e}^{-c((r-s)^{2}/\tau+1/\tau^{c_{1}})}\\ &\asymp\left[\int_{rs}^{1}\frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1 +\alpha}{2}}\mathrm{e}^{-c((r-s)^{2}/\tau+\tau^{-c_{1}})}+\int_{1}^{\infty} \frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\right].\end{split}\] (A.6)
The second summand in the last line of (A.6) is \(\sim 1\) as desired. Since the first summand on the right-hand side of (A.6) is non-negative, we get that the left-hand side of (A.6) is bounded from below by the second summand on the right-hand side of (A.6). On the other hand, by reflecting \(\tau\mapsto\tau^{-1}\), the first summand on the right-hand side of (A.6) can be estimated by
\[\int_{1}^{\infty}d\tau\,\tau^{\frac{2\zeta-1+\alpha}{2}}\mathrm{e}^{-\tau^{c_ {1}}}\lesssim 1\] (A.7)
from above. This gives the desired estimate. Now, we consider the second summand in (A.5). For a lower bound, we drop it (like in the discussion of the lower bound for
(A.6)). For an upper bound, we estimate
\[\begin{split} 0&\leq\int_{0}^{rs}\frac{d\tau}{\tau}\tau^{- \frac{1+\alpha}{2}}(rs)^{-\zeta}\,\mathrm{e}^{-c((r-s)^{2}/\tau+1/\tau^{c_{1}})} \lesssim(rs)^{-\zeta}\int_{0}^{rs}\frac{d\tau}{\tau}\tau^{-\frac{1+\alpha}{2}} \mathrm{e}^{-c\tau^{-c_{1}}}\\ &=(rs)^{-\zeta}\int_{(rs)^{-1}}^{\infty}\frac{d\tau}{\tau}\,\tau^ {\frac{1+\alpha}{2}}\mathrm{e}^{-c\tau^{c_{1}}}\asymp(rs)^{-\zeta}\cdot(rs)^{-( \frac{\alpha+1}{2}-c_{1})}\mathrm{e}^{-c/(rs)^{c_{1}}}\lesssim 1,\end{split}\] (A.8)
as desired. This concludes the case of \(rs\vee(r-s)^{2}<1\).
#### a.1.2. Case \((r-s)^{2}<1<rs\)
We show that \(p_{\zeta}^{(\alpha)}(1,r,s)\sim(rs)^{-\zeta}\). We consider the first summand in (A.5). For a lower bound we drop it, while for an upper bound we estimate
\[\begin{split} 0&\leq\int_{rs}^{\infty}\frac{d\tau}{ \tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\mathrm{e}^{-c(\frac{(rs)^{2}}{\tau}+ \tau^{-c_{1}})}\lesssim\int_{rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\frac{2 \zeta+1+\alpha}{2}}\\ &\sim(rs)^{-\frac{2\zeta+1+\alpha}{2}}\lesssim(rs)^{-\zeta}. \end{split}\] (A.9)
We now consider the second summand in (A.5) and obtain
\[\begin{split}&\int_{0}^{rs}\frac{d\tau}{\tau}\tau^{-\frac{1+ \alpha}{2}}(rs)^{-\zeta}\,\mathrm{e}^{-c((r-s)^{2}/\tau+1/\tau^{c_{1}})}\\ &\asymp(rs)^{-\zeta}\left[\int_{0}^{1}\frac{d\tau}{\tau}\tau^{- \frac{1+\alpha}{2}}\mathrm{e}^{-c((r-s)^{2}/\tau+\tau^{-c_{1}})}+\int_{1}^{rs} \frac{d\tau}{\tau^{(3+\alpha)/2}}\right]\sim(rs)^{-\zeta}.\end{split}\] (A.10)
This concludes the case of \((r-s)^{2}<1<rs\).
#### a.1.3. Case \(rs<1<(r-s)^{2}\)
We show that \(p_{\zeta}^{(\alpha)}(1,r,s)\sim|r-s|^{-(2\zeta+1+\alpha)}\). We first consider the second summand in (A.5). For a lower bound we drop it, while for an upper bound we estimate
\[\begin{split} 0&\leq\int_{0}^{rs}\frac{d\tau}{\tau} \tau^{-\frac{1+\alpha}{2}}(rs)^{-\zeta}\,\mathrm{e}^{-c(\frac{(r-s)^{2}}{\tau} +\frac{1}{\tau^{c_{1}}})}\lesssim(rs)^{-\zeta}\int_{0}^{rs}\frac{d\tau}{\tau} \tau^{-\frac{1+\alpha}{2}}\mathrm{e}^{-c\frac{(r-s)^{2}}{\tau}}\\ &=(rs)^{-\zeta}|r-s|^{-(1+\alpha)}\int_{(r-s)^{2}/(rs)}^{\infty}d \tau\,\tau^{\frac{\alpha-1}{2}}\mathrm{e}^{-c\tau}\\ &\asymp(rs)^{-\zeta}|r-s|^{-(1+\alpha)}\cdot\frac{|r-s|^{\alpha -1}}{(rs)^{(\alpha-1)/2}}\mathrm{e}^{-c(r-s)^{2}/(rs)}\\ &=|r-s|^{-(2\zeta+1+\alpha)}\cdot\frac{|r-s|^{2\zeta-1+\alpha}}{( rs)^{\frac{2\zeta-1+\alpha}{2}}}\mathrm{e}^{-c(\frac{r-s)^{2}}{rs}}\lesssim|r-s|^{-(2 \zeta+1+\alpha)}.\end{split}\] (A.11)
To bound the first summand in (A.5), we distinguish between \(c_{1}\leq 1\) and \(c_{1}>1\). Consider first \(c_{1}\leq 1\). Then we have \(\exp\left(-c((r-s)^{2}/\tau+1/\tau^{c_{1}})\right)\asymp\exp(-c(r-s)^{2}/\tau)\)
In this case, we have the two-sided estimate
\[\begin{split}&\int_{rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\zeta- \frac{1+\alpha}{2}}\mathrm{e}^{-c(\frac{(r-s)^{2}}{\tau}+\frac{1}{r^{\zeta 1}})}\asymp\int_{rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}} \mathrm{e}^{-c\frac{(r-s)^{2}}{\tau}}\\ &\asymp\int_{rs}^{(r-s)^{2}}\frac{d\tau}{\tau}\tau^{-\frac{2 \zeta+1+\alpha}{2}}\mathrm{e}^{-c(r-s)^{2}/\tau}+\int_{(r-s)^{2}}^{\infty} \frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\\ &\asymp\frac{1}{|r-s|^{2\zeta+1+\alpha}}\left[\int_{rs/(r-s)^{2} }^{1}\frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\mathrm{e}^{-c/\tau}+ 1\right]\sim\frac{1}{|r-s|^{2\zeta+1+\alpha}}.\end{split}\] (A.12)
In particular, this estimate suffices for an upper bound for all \(c_{1}>0\) since \(\mathrm{e}^{-c/\tau^{c_{1}}}\leq 1\). Thus, it remains to prove the lower bound when \(c_{1}>1\), i.e., to find lower bounds for
\[\int_{0}^{\infty}\frac{d\tau}{\tau}\,\tau^{-\frac{2\zeta+1+\alpha}{2}}\left[ \exp\left(-c\frac{(r-s)^{2}}{\tau}\right)\mathbf{1}_{\tau>|r-s|^{2/(c_{1}-1)} }+\mathrm{e}^{-c/\tau^{c_{1}}}\mathbf{1}_{\tau\in[rs,|r-s|^{2/(c_{1}-1)}]} \right]\!.\] (A.13)
To this end, we distinguish \(2/(c_{1}-1)\lessgtr 2\). When \(2/(c_{1}-1)\leq 2\), we drop the second summand in (A.13) for a lower bound, while the first summand we treat as in the case \(c_{1}\leq 1\) in (A.12), by splitting the integral at \(\tau=(r-s)^{2}\). The integral for \(\tau<(r-s)^{2}\) can be dropped for a lower bound, while the integral for \(\tau>(r-s)^{2}\) gives the desired contribution. Now consider \(2/(c_{1}-1)>2\). In this case, we drop the first summand in (A.13) for a lower bound. On the other hand, we split the \(\tau\)-integration in the second summand at \(\tau=(r-s)^{2}\), drop the contribution for \(\tau<(r-s)^{2}\) for a lower bound, and estimate
\[\left[\int_{rs}^{(r-s)^{2}}\frac{d\tau}{\tau}\,\tau^{-\frac{2\zeta+1+\alpha}{ 2}}\mathrm{e}^{-c/\tau^{c_{1}}}+\int_{(r-s)^{2}}^{(r-s)^{2/(c_{1}-1)}}\frac{d \tau}{\tau}\,\tau^{-\frac{2\zeta+1+\alpha}{2}}\right]\gtrsim\frac{1}{|r-s|^{2 \zeta+1+\alpha}}.\] (A.14)
This concludes the analysis of the case \(rs<1<(r-s)^{2}\).
#### a.1.4. Case \(rs\wedge(r-s)^{2}>1\)
We show that
\[p_{\zeta}^{(\alpha)}(1,r,s)\sim|r-s|^{-(2\zeta+1+\alpha)}\mathbf{1}_{rs<(r-s) ^{2}}+(rs)^{-\zeta}|r-s|^{-(1+\alpha)}\mathbf{1}_{rs>(r-s)^{2}}.\]
Consider first \(c_{1}\leq 1\). Then, as before, we have \(\exp\left(-c((r-s)^{2}/\tau+1/\tau^{c_{1}})\right)\asymp\exp(-c(r-s)^{2}/\tau)\). We consider the first summand in (A.5) and estimate
\[\begin{split}&\int_{rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\frac{2 \zeta+1+\alpha}{2}}\mathrm{e}^{-c((r-s)^{2}/\tau+1/\tau^{c_{1}})}\asymp\int_{ rs}^{\infty}\frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\mathrm{e}^{-c(r-s)^{2}/\tau} \\ &\asymp\mathbf{1}_{rs>(r-s)^{2}}\int_{rs}^{\infty}\frac{d\tau}{ \tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\\ &\quad+\mathbf{1}_{rs<(r-s)^{2}}\left[\int_{rs}^{(r-s)^{2}} \frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+\alpha}{2}}\mathrm{e}^{-c(r-s)^{2}/ \tau}+\int_{(r-s)^{2}}^{\infty}\frac{d\tau}{\tau}\tau^{-\frac{2\zeta+1+ \alpha}{2}}\right]\\ &\sim(rs)^{-\frac{2\zeta+1+\alpha}{2}}\mathbf{1}_{rs>(r-s)^{2}}+ \frac{1}{|r-s|^{2\zeta+1+\alpha}}\mathbf{1}_{rs<(r-s)^{2}}.\end{split}\] (A.15)
The second summand in this estimate is already in the desired form, while the first summand can be dropped for a lower bound and estimated from above by \((rs)^{-\zeta}|r-s|^{-1-\alpha}\) for the desired upper bound. We now consider the second summand in (A.5) and obtain
\[(rs)^{-\zeta}\int_{0}^{rs}\frac{d\tau}{\tau}\tau^{-\frac{1+\alpha}{2 }}\mathrm{e}^{-c((r-s)^{2}/\tau+1/\tau^{c_{1}})}\sim(rs)^{-\zeta}\int_{0}^{rs} \frac{d\tau}{\tau}\tau^{-\frac{1+\alpha}{2}}\mathrm{e}^{-(r-s)^{2}/\tau}\] \[\quad\asymp\mathbf{1}_{rs>(r-s)^{2}}(rs)^{-\zeta}\left[\int_{0}^{ (r-s)^{2}}\frac{d\tau}{\tau}\tau^{-\frac{1+\alpha}{2}}\mathrm{e}^{-c(r-s)^{2}/ \tau}+\int_{(r-s)^{2}}^{rs}\frac{d\tau}{\tau^{(3+\alpha)/2}}\right]\] \[\quad+\mathbf{1}_{rs<(r-s)^{2}}(rs)^{-\zeta}\int_{0}^{rs}\frac{d \tau}{\tau}\tau^{-\frac{1+\alpha}{2}}\mathrm{e}^{-c(r-s)^{2}/\tau}\] \[\quad\asymp\mathbf{1}_{rs>(r-s)^{2}}(rs)^{-\zeta}\left[\int_{(r-s )^{-2}}^{\infty}d\tau\,\tau^{\frac{\alpha-1}{2}}\mathrm{e}^{-c\tau(r-s)^{2}}+ |r-s|^{-(1+\alpha)}\right]\] \[\quad+\mathbf{1}_{rs<(r-s)^{2}}(rs)^{-\zeta}|r-s|^{-(1+\alpha)} \int_{0}^{rs/(r-s)^{2}}\frac{d\tau}{\tau}\tau^{-\frac{1+\alpha}{2}}\mathrm{e}^ {-c/\tau}\] \[\quad\asymp(rs)^{-\zeta}\cdot|r-s|^{-(1+\alpha)}\left[\mathbf{1}_ {rs>(r-s)^{2}}+\frac{|r-s|^{\alpha-1}}{(rs)^{(\alpha-1)/2}}\exp\left(-c\frac{( r-s)^{2}}{rs}\right)\mathbf{1}_{rs<(r-s)^{2}}\right].\] (A.16)
Comparing (A.15) and (A.16) for \(rs>(r-s)^{2}\) shows that the term \((rs)^{-\zeta}\cdot|r-s|^{-(1+\alpha)}\) on the right-hand side of (A.16) dominates the term \((rs)^{-(2\zeta+1+\alpha)/2}\) on the right-hand side of (A.15) since \((rs)^{-\zeta}\cdot|r-s|^{-(1+\alpha)}\gtrsim(rs)^{-\frac{2\zeta+1+\alpha}{2}}\). On the other hand, if \(rs<(r-s)^{2}\), then the \(|r-s|^{-(2\zeta+1+\alpha)}\) term on the right-hand side of (A.15) dominates the prefactor on the right-hand side of (A.16) since
\[\begin{split}&\frac{|r-s|^{-2}}{(rs)^{\frac{2\zeta-1+\alpha}{2}}} \cdot\exp\left(-c\frac{(r-s)^{2}}{rs}\right)\mathbf{1}_{rs<(r-s)^{2}}\\ &\quad=|r-s|^{-(2\zeta+1+\alpha)}\cdot\frac{|r-s|^{2\zeta-1+ \alpha}}{(rs)^{\frac{2\zeta-1+\alpha}{2}}}\exp\left(-c\frac{(r-s)^{2}}{rs} \right)\mathbf{1}_{rs<(r-s)^{2}}\\ &\quad\lesssim|r-s|^{-(2\zeta+1+\alpha)}.\end{split}\] (A.17)
Thus, for \(c_{1}\leq 1\),
\[\text{(A.5)}\sim(rs)^{-\zeta}|r-s|^{-1-\alpha}\mathbf{1}_{rs>(r-s)^{2}}+|r-s|^{-(2\zeta+1+\alpha)}\mathbf{1}_{rs<(r-s)^{2}},\] (A.18)
as needed. Now suppose \(c_{1}>1\). Then the previous analysis suffices for the upper bound since \(\mathrm{e}^{-c/\tau^{c_{1}}}\leq 1\). Thus, it suffices to prove the lower bound when \(c_{1}>1\), i.e., it remains to find lower bounds for
\[\begin{split}&\int_{0}^{\infty}\frac{d\tau}{\tau}\,\left[\tau^{- \frac{2\zeta+1+\alpha}{2}}\left[\mathrm{e}^{-c\frac{(r-s)^{2}}{\tau}}\mathbf{1} _{\tau>|r-s|^{2/(c_{1}-1)}\lor rs}+\mathrm{e}^{-c/\tau^{c_{1}}}\mathbf{1}_{\tau \in[rs,|r-s|^{2/(c_{1}-1)}]}\right]\right.\\ &\quad\left.+(rs)^{-\zeta}\tau^{-\frac{1+\alpha}{2}}\left[\mathrm{ e}^{-c\frac{(r-s)^{2}}{\tau}}\mathbf{1}_{|r-s|^{2/(c_{1}-1)}<\tau<rs}+\mathrm{e}^{-c/ \tau^{c_{1}}}\mathbf{1}_{\tau<rs\wedge|r-s|^{2/(c_{1}-1)}}\right]\right].\end{split}\] (A.19)
When \(rs<(r-s)^{2}\), we can argue exactly as in the previously analyzed case \(rs<1<(r-s)^{2}\). Thus, we assume \((r-s)^{2}<rs\). In this case, we drop the first line in (A.19) for a lower bound, i.e., it remains to control the second line. If \(2/(c_{1}-1)\leq 2\)
then we drop the second summand in the second line of (A.19) and estimate the first summand there precisely as in (A.16) by splitting the \(\tau\)-integral at \(\tau=(r-s)^{2}\) and dropping the integral over \(\{\tau<(r-s)^{2}\}\). On the other hand, if \(2/(c_{1}-1)>2\), then we drop the first summand in the second line of (A.19). The second summand is bounded from below by
\[(rs)^{-\zeta}\int_{0}^{\infty}\frac{d\tau}{\tau}\,\tau^{-\frac{1+\alpha}{2}} \mathrm{e}^{-c/\tau^{c_{1}}}\mathbf{1}_{(r-s)^{2}<\tau<rs\wedge|r-s|^{2/(c_{1} -1)}}\gtrsim(rs)^{-\zeta}|r-s|^{-(1+\alpha)}.\] (A.20)
This concludes the analysis of the case \(rs\wedge(r-s)^{2}>1\) and thereby the proof of the bounds in (A.1).
### Proof of (2.9)
The bounds (A.1) imply
\[\begin{split} p_{\zeta}^{(\alpha)}(1,r,s)\sim_{\zeta,\alpha}(1+rs)^{-\zeta}\,\mathbf{1}_{(r-s)^{2}<1}+|r-s|^{-(2\zeta+1+\alpha)}\mathbf{1}_{rs<1<(r-s)^{2}}\\ +\left(|r-s|^{-2\zeta}\mathbf{1}_{1<rs<(r-s)^{2}}+(rs)^{-\zeta}\mathbf{1}_{rs>(r-s)^{2}>1}\right)|r-s|^{-(1+\alpha)}.\end{split}\] (A.21)
Using (A.21), we now show the desired bound (2.9), i.e.,
\[p_{\zeta}^{(\alpha)}(1,r,s)\sim_{\zeta,\alpha}\frac{1}{|r-s|^{1+\alpha}\cdot( r+s)^{2\zeta}+(1+r+s)^{2\zeta}}\]
for all \(r,s>0\). To that end, we distinguish the cases \((r-s)^{2}\lessgtr 1\).
(1) If \((r-s)^{2}<1\), then, by \(2rs=r^{2}+s^{2}-(r-s)^{2}\), we have \(p_{\zeta}^{(\alpha)}(1,r,s)\sim(1+rs)^{-\zeta}\sim(1+r+s)^{-2\zeta}\). This gives the claimed estimate for \(p_{\zeta}^{(\alpha)}(1,r,s)\). Note that we use \(|r-s|^{1+\alpha}(r+s)^{2\zeta}\leq(r+s)^{2\zeta+1+\alpha}\lesssim 1\) for \(r+s\lesssim 1\) and \(\zeta>-1/2\).
(2) If \((r-s)^{2}>1\), then \(|r-s|^{1+\alpha}(r+s)^{2\zeta}\gtrsim(1+r+s)^{2\zeta}\). We now distinguish the cases \(rs\lessgtr(r-s)^{2}\) and use \((r+s)^{2}=(r-s)^{2}+4rs\). If \((r-s)^{2}>rs\), then the claimed estimate for \(p_{\zeta}^{(\alpha)}(1,r,s)\) follows immediately. If \((r-s)^{2}<rs\), then \(|r-s|^{1+\alpha}(rs)^{\zeta}\sim|r-s|^{1+\alpha}(r+s)^{2\zeta}\) as desired.
|
2304.02857 | A MUSE view of the multiple interacting system HCG 31 | We present, for the first time, spatially resolved spectroscopy for the
entire Hickson Compact Group 31 obtained with the MUSE instrument at the
VLT, and an in-depth analysis of this compact group. To obtain a complete
understanding of the system, we derived radial velocity and dispersion velocity
maps, maps of the ionization mechanism of the system, chemical abundances and
their distribution over the whole system, star formation rates and ages of the
different star-forming regions, and the spatial distribution of the Wolf-Rayet
stellar population. We also reconstructed the star formation history of the
galaxies HCG 31 A, C, B and F, measured the emission-line fluxes, and performed
a stellar population synthesis. Our main findings are: (i) that there is
clearly disturbed kinematics due to the merger event that the system is
experiencing; (ii) that the ionization is produced exclusively via star
formation except for the nucleus of the galaxy HCG 31 A, where there is a small
contribution of shocks; (iii) that there is low oxygen abundance distributed
homogeneously through the system; (iv) that there is a prominent population of
carbon Wolf-Rayet stars in the central zone of the group; and (v) that there
is clear evidence of the tidal origin of the galaxies HCG 31 E, HCG 31 H, and
HCG 31 F because they show quite high oxygen abundances for their stellar mass.
All these findings are clear evidence that HCG 31 is currently in an early
merging phase and manifesting a starburst in its central region. | Diego A. GΓ³mez-Espinoza, Sergio Torres-Flores, VerΓ³nica Firpo, Philippe Amram, Benoit Epinat, Thierry Contini, Claudia Mendes de Oliveira | 2023-04-06T04:34:39Z | http://arxiv.org/abs/2304.02857v1 | # A MUSE view of the multiple interacting system HCG 31
###### Abstract
We present, for the first time, spatially resolved spectroscopy for the entire Hickson Compact Group 31 obtained with the MUSE instrument at the VLT, and an in-depth analysis of this compact group. To obtain a complete understanding of the system, we derived radial velocity and dispersion velocity maps, maps of the ionization mechanism of the system, chemical abundances and their distribution over the whole system, star formation rates and ages of the different star-forming regions, and the spatial distribution of the Wolf-Rayet stellar population. We also reconstructed the star formation history of the galaxies HCG 31 A, C, B and F, measured the emission-line fluxes, and performed a stellar population synthesis. Our main findings are: (i) that there is clearly disturbed kinematics due to the merger event that the system is experiencing; (ii) that the ionization is produced exclusively via star formation except for the nucleus of the galaxy HCG 31 A, where there is a small contribution of shocks; (iii) that there is low oxygen abundance distributed homogeneously through the system; (iv) that there is a prominent population of carbon Wolf-Rayet stars in the central zone of the group; and (v) that there is clear evidence of the tidal origin of the galaxies HCG 31 E, HCG 31 H, and HCG 31 F because they show quite high oxygen abundances for their stellar mass. All these findings are clear evidence that HCG 31 is currently in an early merging phase and manifesting a starburst in its central region.
keywords: galaxies: interactions - galaxies: kinematics and dynamics - galaxies: star formation - galaxies: abundances - stars: Wolf-Rayet
## 1 Introduction
The processes of formation and transformation of galaxies are crucial for understanding their evolution. According to the hierarchical model of galaxy formation, the interactions between small galaxies at high redshift are the foundations for the formation of current galaxies (Toomre & Toomre, 1972); therefore, a detailed study of local interacting/merging galaxies can provide important information regarding different phenomena that were common in the distant universe. A great advantage of studying galaxy mergers and interactions in the local universe is the high spatial resolution that we have due to the proximity of these objects. Although various studies have been done comparing the properties of galaxies in the local and distant universe (Epinat et al., 2010; Perez-Montero et al., 2021; Izotov et al., 2021), it should be noted that there are clear differences between studying galaxy mergers in the local universe and studying them at high redshift. For example, the mass of the gas has evolved with redshift and also the star formation rates (SFR) (Mannucci et al., 2010; Behroozi et al., 2013; Madau & Dickinson, 2014; Amorin et al., 2015; Boyett et al., 2022). Thus, the comparison of physical processes and phenomena is more relevant than the comparison of the absolute values. Galaxy mergers are unique laboratories for understanding the transformation of galaxies due to gravitational effects. However, to fully understand these phenomena, it is imperative to combine observational results with simulations and models.
There are many types of mergers between galaxies. Depending on the type of collision, the galaxies will be affected in different ways. If one galaxy is much less massive than the other (\(<1:4\) of the mass according to Kaviraj, 2014), the merger is called a _minor merger_. This type of merger does not observationally affect the most massive galaxy. If the two galaxies have similar mass (or at least one has \(>1:3\) of the mass of the other according to Kaviraj et al., 2013) a _major merger_ is said to have occurred in which both galaxies are affected in their morphology. There are some features that indicate that a major merger is occurring, such as tidal tails, destruction of the disks, formation of tidal dwarf galaxies (TDGs) and flattening of the metallicity gradient, among others (Toomre & Toomre, 1972; Duc & Mirabel, 1998; Kewley et al., 2010; Rich et al., 2012; de Mello et al., 2012; Torres-Flores et al., 2014; Mora et al., 2019; Torres-Flores et al., 2020).
During the past few years, several authors have studied interacting galaxies by analyzing their physical properties (Zaragoza-Cardiel
et al., 2018), as well as from the point of view of their kinematic properties (Plana et al., 2003, Amram et al., 2007, Torres-Flores et al., 2010). These efforts have allowed us to understand the role of galaxy interactions in their evolution; however, several questions remain open, with no clear answers. For example, because of the interaction between galaxies, an increase in their SFR was expected. Various authors have proposed different scenarios to explain this increase. Xu et al. (2010) studied a sample of pairs of spiral-elliptical galaxies and found that most spiral galaxies did not show an increase in their SFRs. On the other hand, Patton et al. (2011) studied a sample of galaxy pairs and found that they did increase their SFRs, which was expected due to the interaction processes between galaxies. Additionally, Ellison et al. (2011) found that galaxies associated in pairs had a higher fraction of active galactic nuclei (AGN) compared to isolated galaxies. Rupke et al. (2010a) found that interacting galaxies exhibited more flattened metallicity gradients than those observed in isolated galaxies, a phenomenon that could be produced by the interaction between galaxies.
There are other processes in galaxy interactions that result in the formation of new galaxies. Duc & Mirabel (1998) studied the interacting system NGC 5291, where they found star-forming objects in the intergalactic medium. The authors claimed that this suggested the existence of newly formed tidal dwarf galaxies in NGC 5291. These objects should be free of dark matter and should have high metallicities (considering their masses). In addition to the TDGs detected in NGC 5291, other authors have detected and studied TDG candidates in different interacting systems (Weilbacher et al., 2003, de Mello et al., 2012).
As shown before, interacting and merging systems are ideal laboratories for understanding the different physical and kinematical processes that take place in our universe. Various observational techniques have been used over the years to investigate these systems, such as optical imaging, longslit spectroscopy, Fabry-Perot data, HI data cubes, and recently, 3D spectroscopy. The integral field spectroscopy technique (IFS) is one of the most powerful approaches for studying an interacting/merging system in detail thanks to its usually wide spectral range, especially if the field of view is large enough to cover all members, as with a compact group of galaxies, where gravitational encounters are quite common.
In this work we developed a deep spectroscopic study of the Hickson Compact Group 31, which is a complex interacting system of dwarf galaxies located in the nearby universe (Rubin et al., 1990). Several authors (Iglesias-Paramo & Vilchez, 1997, Lopez-Sanchez et al., 2004, Mendes de Oliveira et al., 2006, Amram et al., 2007, Gallagher et al., 2010, Alfaro-Cuello et al., 2015, Torres-Flores et al., 2015) have studied this object but none of them have combined a well-adapted spatial coverage with a broad spectral coverage, which is the case for the MUSE data presented in this work. Thus, this paper provides, for the first time, a complete view of this group, based on MUSE/VLT data of the main galaxies of the system and of the southern tidal tail. Therefore, the analyses presented in this work can be very valuable in the study of galaxy transformation and evolution in dense environments. The paper is organized as follows: in section 2 we present the system; in section 3 we present the data; in section 4 we present the analysis used to derive the emission-line intensities, kinematics, extinction, star formation rates (SFR), oxygen abundance, equivalent width (EW), ages, and ionization mechanism; and, in section 5 we present our results. The discussion and summary are presented in sections 6 and 7, respectively.
## 2 The Hickson Compact Group 31
Our object of study consists of a specific group of galaxies, the Hickson Compact Group 31 (RA [Deg] = 75.409572, DEC [Deg] = -4.257011). This group lies at a distance of \(59.38\pm 4.16\) Mpc. This distance was derived assuming a redshift of \(z=0.01347\pm 0.00002\) (Wong et al., 2006) and a Hubble constant of \(H_{0}=67.8\ \mathrm{km\ s^{-1}\ Mpc^{-1}}\) (Riess et al., 2016).
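As an illustration, the adopted distance can be checked with a simple Hubble-law estimate. The sketch below is only indicative: the published value may also include corrections (e.g. for peculiar velocities) that are not reproduced here.

```python
# Back-of-the-envelope Hubble-law distance, v = cz = H0 * d.
C_KMS = 299792.458          # speed of light [km/s]
H0 = 67.8                   # Hubble constant [km/s/Mpc] (Riess et al. 2016)
z, dz = 0.01347, 0.00002    # redshift of HCG 31 (Wong et al. 2006)

d = C_KMS * z / H0          # distance [Mpc], ~59.6 Mpc with these numbers
dd = d * (dz / z)           # redshift contribution to the uncertainty
print(f"d = {d:.2f} Mpc (+/- {dd:.2f} Mpc from the redshift alone)")
```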
This group consists of several low-mass, low-metallicity galaxies (Mendes de Oliveira et al., 2006), all of them involved in an interaction event (Rubin et al., 1990). This particular configuration makes HCG 31 an ideal laboratory to study galaxy interactions and evolution.
Due to its configuration, HCG 31 is a very well-studied object and many authors have contributed information about this system (e.g. Rubin et al., 1990, Iglesias-Paramo & Vilchez, 1997, Lopez-Sanchez et al., 2004, Amram et al., 2007, Alfaro-Cuello et al., 2015, Torres-Flores et al., 2015, among others). Hickson (1982) classified it as a compact group of galaxies and identified four members, HCG 31 A, B, C and D. Rubin et al. (1990) detected four new members, HCG 31 E, F, G and Q, and concluded that member D was not part of the group due to its higher redshift. More recently, Mendes de Oliveira et al. (2006) detected another member of the system, HCG 31 R. It is currently accepted that HCG 31 consists of nine galaxies: A, B, C, E, F, G, H, Q and R. Figure 1 presents an optical image of the group with eight of the members labeled (member R is not seen due to its low brightness).
The entire group is embedded in a common envelope of neutral hydrogen, which has a total HI mass of \(2.1\times 10^{10}\ M_{\sun}\)(Williams et al., 1991). These authors also found that the HI distribution peaked at the overlap region between HCG 31 A and HCG 31 C.
Lopez-Sanchez et al. (2004) performed a complete study of this system. They developed a deep analysis of the physical properties by using optical imaging, near-infrared (NIR) imaging, and optical medium-resolution long-slit spectroscopy. An interesting result obtained by these authors was the detection of a Wolf-Rayet bump in the spectrum of the galaxy HCG 31 C, which indicates the presence of Wolf-Rayet stars with very young ages (\(<4\) Myr). This bump can be identified with the blend of the spectral lines He ii \(\lambda\)4686 Å, C iii/C iv \(\lambda\)4650 Å and N iii \(\lambda\lambda\)4634, 4640 Å. Moreover, the most important emission in the bump arises from the helium lines. The presence of such stars is strong evidence of a young starburst in this system. In addition, these authors found that all the members of the group displayed low oxygen abundances, spanning a range of \(12+log(O/H)\sim 8.03\)-\(8.37\).
Mendes de Oliveira et al. (2006) derived the luminosity-metallicity relation (LZR) for HCG 31 using K\({}_{s}\)-band magnitudes, which mainly traced stellar emission (at this redshift). The LZR obtained by the authors suggested that galaxies C, G and B followed that relationship. Members H, R, E and F showed higher metallicities for their luminosities, which led the authors to suggest that these members could be tidal debris or TDG candidates, because this type of object retains the metallicity of its parent galaxy (Weilbacher et al., 2003).
A deep study of the central region of HCG 31 was done by Alfaro-Cuello et al. (2015). The authors used IFS observations taken with GMOS/Gemini, centered in the overlap region between galaxies A and C, and found a high SFR density in the central region. Using the same data Torres-Flores et al. (2015) detected a flat gradient in the oxygen abundance map linking galaxies A and C. This gradient suggested gas mixing in the interface between galaxies A and C (Torres-Flores et al., 2015). Alfaro-Cuello et al. (2015) also detected a super star cluster (SSC) near the nucleus of galaxy HCG 31 C.
These authors speculated that this SSC is currently triggering star formation in its surroundings.
As shown above, HCG 31 offers an ideal place to seek answers to different open questions in the field of interacting/merging galaxies.
## 3 Observation and Data Reduction
The Hickson Compact Group 31 was observed during the nights of February 18-20, 2014, as part of the science verification strategy. To obtain a larger spatial coverage, three different FoVs were observed. In Figure 1 we show all the FoVs observed over an optical image of HCG 31. The data were acquired in the Wide Field Mode (WFM), in which each pointing covers a field of view of 1x1 arcmin\({}^{2}\), with a spatial sampling of 0.2 arcsec and a spectral range of 4750 - 9350 Å with a 1.25 Å step. The mean seeing during the observations was \(\sim\) 0.9 arcsec, with a mean airmass of 1.15. Considering the distance to the system (59.38 Mpc), our mean seeing implies a spatial resolution of \(\sim\) 252 pc, where each spaxel covers \(\sim\) 56 pc and each FoV covers an area of \(\sim\) 16.8 x 16.8 kpc.
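The conversion from angular to physical scales is a simple small-angle calculation; the sketch below is only illustrative and the slightly smaller values quoted above presumably follow from the exact distance and rounding adopted there.

```python
import numpy as np

D_MPC = 59.38                              # adopted distance [Mpc]
ARCSEC_TO_RAD = np.pi / (180.0 * 3600.0)

def physical_scale_pc(theta_arcsec, d_mpc=D_MPC):
    """Linear size [pc] subtended by an angle theta_arcsec at distance d_mpc."""
    return theta_arcsec * ARCSEC_TO_RAD * d_mpc * 1.0e6

print(physical_scale_pc(0.9))            # seeing disc: ~259 pc with these inputs
print(physical_scale_pc(0.2))            # one spaxel: ~58 pc
print(physical_scale_pc(60.0) / 1.0e3)   # 1 arcmin field: ~17 kpc
```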
The data was reduced using the standard ESO pipelines for MUSE data reduction.
## 4 Analysis
In this section we present the analyses carried out to determine the different physical parameters and properties of the members of HCG 31. We focus on the measurement of the emission-line fluxes and on the methods used to derive the extinction, SFR, ionization mechanism, stellar population synthesis (SPS) and kinematics.
### Emission-line measurement
The emission lines have been measured using two codes: FADO3 (Fitting Analysis using Differential evolution Optimization, Gomes & Papaderos, 2017) and ifscube4 (Ruschel-Dutra & Dall'Agnol De Oliveira, 2020). The first is designed to perform spectral population synthesis analysis to derive different physical parameters from a galaxy spectrum, including emission-line fluxes. The second code is a Python5 package of spectral analysis routines which fit Gaussian profiles to emission lines. We used ifscube to obtain the radial velocity and velocity dispersion maps because FADO is not optimized for such calculations. Due to the spectral resolution of the MUSE data (average R \(\sim\)3000), we fitted a single Gaussian profile to each observed line instead of performing a multi-component analysis.
Footnote 3: [http://spectralsynthesis.org/index.html](http://spectralsynthesis.org/index.html)
Footnote 4: [https://ifscube.readthedocs.io/en/latest/](https://ifscube.readthedocs.io/en/latest/)
Footnote 5: Python Software Foundation. Python Language Reference, version 2.7. Available at [http://www.python.org](http://www.python.org)
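To illustrate the single-Gaussian fitting described above, the following sketch fits one emission line in one spaxel with scipy; it is not the actual FADO or ifscube code, and the function names and fitting window are hypothetical choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, flux0, mu, sigma, cont):
    """Gaussian line of integrated flux `flux0` on a constant continuum `cont`."""
    return cont + flux0 / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((wave - mu) / sigma) ** 2)

def fit_line(wave, spec, line_rest, z_guess=0.01347):
    """Fit a Gaussian plus continuum around `line_rest`; returns (flux, centre, sigma)."""
    mu0 = line_rest * (1.0 + z_guess)
    win = np.abs(wave - mu0) < 15.0                 # +/- 15 A fitting window (illustrative)
    cont0 = np.median(spec[win])
    amp0 = spec[win].max() - cont0
    p0 = [amp0 * np.sqrt(2 * np.pi) * 2.0, mu0, 2.0, cont0]
    popt, _ = curve_fit(gaussian, wave[win], spec[win], p0=p0)
    return popt[0], popt[1], abs(popt[2])
```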
In Figure 2 we show the emission-line maps obtained for H\(\beta\), [O iii]\(\lambda\) 5007 Å, [O i]\(\lambda\) 6300 Å, H\(\alpha\), [N ii]\(\lambda\) 6584 Å and [S ii]\(\lambda\) 6717 Å, which were derived from the FADO analysis of the entire data cube. We used the FADO output to reconstruct the maps in the (x,y) plane. These maps correspond to the flux obtained from the Gaussian fit performed by FADO.
Maps were cleaned of noise by keeping only spaxels where the signal-to-noise ratio (SNR) was \(>\) 3. To measure the SNR we divided the emission-line map by the map of the same line computed from the variance layer of the datacube. Emission-line fluxes were corrected for Galactic and internal extinction by using the extinction laws proposed by Fitzpatrick (1999) and Calzetti et al. (2000), respectively.
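A minimal sketch of the SNR cut is given below, assuming `flux` is an emission-line map and `flux_err` the corresponding error map derived from the variance extension; the array names are placeholders.

```python
import numpy as np

def snr_mask(flux, flux_err, snr_min=3.0):
    """Return the flux map with spaxels below the SNR threshold set to NaN."""
    flux = np.asarray(flux, dtype=float)
    snr = np.divide(flux, flux_err, out=np.zeros_like(flux), where=np.asarray(flux_err) > 0)
    return np.where(snr > snr_min, flux, np.nan)
```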
### Kinematics: Radial velocity and velocity dispersion
The kinematic analysis was done by using ifscube on the H\(\alpha\) emission line. The uncertainties of the radial velocities were calculated using the formulas of Lenz & Ayres (1992) together with the fitting parameters derived by ifscube.
The width of a Gaussian profile is usually determined by the full width at half maximum (FWHM) which, in this case, is measured in angstroms. The standard deviation is related to the FWHM through FWHM \(=\) 2\(\sqrt{2ln2}\times\sigma\)\(\approx\) 2.35 \(\times\)\(\sigma\).
Under the assumption that the H\(\alpha\) emission line can be fitted by a Gaussian profile, its width (\(\sigma\)) represents the velocity dispersion of the atoms, providing information about its broadening mechanism. The intrinsic velocity dispersion of the ionized gas (\(\sigma_{int}\)) is defined by equation 1:
\[\sigma_{int}^{2}=\sigma_{obs}^{2}-\sigma_{inst}^{2}-\sigma_{th}^{2} \tag{1}\]
where \(\sigma_{obs}\) is the observed width, \(\sigma_{inst}\) is the instrumental width and \(\sigma_{th}\) is the thermal broadening which can be estimated by using \(\sigma_{th}=\sqrt{k_{B}T/m_{a}}\), where \(k_{B}\) is the Boltzmann constant, \(T\) is the electronic temperature and \(m_{a}\) corresponds to the mean atomic mass of the gas. In our case, we assumed a standard \(T=10^{4}K\), which is a good approximation for the temperature calculated by Lopez-Sanchez et al. (2004) for HCG 31 (\(T\sim 9400\pm 600\)\(K\) for the central
Figure 1: Optical image of HCG 31. The yellow squares show the location of the IFU pointings taken with MUSE. Image taken from the DECaLS legacy survey.\({}^{2}\)
zone); therefore we used \(\sigma_{th}=9.4\,kms^{-1}\). For the instrumental width we used the standard \(\sigma_{inst}\) for MUSE taken from the literature (Bellocchi et al. 2019, \(\sigma_{inst}=50\,kms^{-1}\) at the H\(\alpha\) wavelength). The uncertainties of \(\sigma_{obs}\) were calculated according to the formula given by Lenz & Ayres (1992) with the fitting parameters of ifscube.
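The quadrature correction of Eq. (1) can be written compactly as below; this is only a sketch using the \(\sigma_{inst}\) and \(\sigma_{th}\) values adopted in the text, with a small helper for the FWHM-to-\(\sigma\) conversion mentioned above.

```python
import numpy as np

C_KMS = 299792.458
SIGMA_INST = 50.0   # MUSE instrumental broadening at Halpha [km/s] (Bellocchi et al. 2019)
SIGMA_TH = 9.4      # thermal broadening adopted for T ~ 1e4 K [km/s]

def sigma_obs_kms(fwhm_angstrom, lam_obs_angstrom):
    """Observed dispersion in km/s from a fitted FWHM in Angstrom (FWHM = 2.355 sigma)."""
    return (fwhm_angstrom / 2.3548) / lam_obs_angstrom * C_KMS

def sigma_intrinsic(sigma_obs):
    """Intrinsic velocity dispersion [km/s]; NaN where the observed width is unresolved."""
    s2 = np.asarray(sigma_obs, dtype=float) ** 2 - SIGMA_INST ** 2 - SIGMA_TH ** 2
    return np.where(s2 > 0, np.sqrt(np.clip(s2, 0, None)), np.nan)
```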
The stellar kinematic analysis was not performed due to the intrinsically weak continuum of the spaxels, which makes the fitting of the stellar absorption lines difficult.
### Oxygen abundances
The metallicity distribution in spiral galaxies has been extensively studied during the last years (van Zee et al. 1998, Bresolin et al. 2012, Sanchez et al. 2014). Most giant galaxies show a clear abundance gradient, with the center of the galaxy being more metal-rich than the outskirts. On the other hand, several observational studies have shown that interacting galaxies display flatter abundance gradients. Owing to the spectral range of our data it was not possible to derive the oxygen abundance using the direct method, because we could not measure the auroral line [O iii] \(\lambda\) 4363 Å. In this case, we decided to use strong-line methods as oxygen abundance indicators, given that these methods rely on strong emission lines which typically have quite high SNR. Specifically, we used the N2 and O3N2 empirical calibrators, which require only four intense emission lines: H\(\alpha\), H\(\beta\), [O iii] \(\lambda\) 5007 Å and [N ii] \(\lambda\) 6584 Å.
We use the calibrations obtained by Marino et al. (2013) for the N2 and O3N2 calibrators.
Figure 2: Emission-line flux maps of HCG 31 obtained with FADO. The lines are H\(\beta\), [O iii] \(\lambda\) 5007 Å, [O i] \(\lambda\) 6300 Å, H\(\alpha\), [N ii] \(\lambda\) 6584 Å, and [S ii] \(\lambda\) 6717 Å. The maps are in pixel coordinates and the color scale represents the flux. All maps are normalized to the maximum H\(\alpha\) flux.
We also used the code HII-CHI-mistry (Perez-Montero, 2014), which is a collection of Python scripts that analyze the intensities of several bright emission lines observed in the optical (in our case H\(\alpha\), H\(\beta\), [N ii] \(\lambda\)6584 Å and [O iii] \(\lambda\)5007 Å). This code estimates the oxygen abundance by using grids of photoionization models.
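For reference, a minimal sketch of the two empirical strong-line calibrations is given below, with the Marino et al. (2013) coefficients as commonly quoted; the input arrays are assumed to be extinction-corrected flux maps, and this does not reproduce the HII-CHI-mistry photoionization grids.

```python
import numpy as np

def oh_n2(f_nii, f_ha):
    """12+log(O/H) from the N2 index (Marino et al. 2013)."""
    n2 = np.log10(np.asarray(f_nii, float) / np.asarray(f_ha, float))
    return 8.743 + 0.462 * n2

def oh_o3n2(f_oiii, f_hb, f_nii, f_ha):
    """12+log(O/H) from the O3N2 index (Marino et al. 2013)."""
    o3n2 = np.log10((np.asarray(f_oiii, float) / np.asarray(f_hb, float)) /
                    (np.asarray(f_nii, float) / np.asarray(f_ha, float)))
    return 8.533 - 0.214 * o3n2
```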
### Star formation rate determination
The SFR is a key parameter in galaxy evolution and represents the amount of gas that is converted into stars per unit of time. Several authors have studied how the environment plays a role in the star formation rate of interacting/merging systems (Mihos & Hernquist, 1996, Boselli & Gavazzi, 2006, Xu et al., 2010, Teyssier et al., 2010, Patton et al., 2011, Pontzen et al., 2017).
Kennicutt & Evans (2012) reviewed the most used SFR calibrators to date and refined the corresponding equations, including the one for H\(\alpha\). For \(L(H\alpha)\) they propose the following expression, assuming a Chabrier IMF and a continuous SF process:
\[logSFR(M_{\odot}yr^{-1})=logL(H\alpha)(ergs^{-1})-41.27 \tag{2}\]
In this paper we use H\(\alpha\) as the tracer of the SFR. The uncertainties are calculated by propagating the flux errors given by FADO and using an uncertainty on the distance of 4.16 Mpc (explained in section 2).
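The SFR estimate of Eq. (2) reduces to a flux-to-luminosity conversion at the adopted distance; the sketch below is illustrative only, and the error-propagation helper assumes the flux and distance uncertainties are independent.

```python
import numpy as np

MPC_TO_CM = 3.0857e24

def sfr_halpha(flux_ha, d_mpc=59.38):
    """SFR [Msun/yr] from an extinction-corrected Halpha flux [erg/s/cm^2] (Eq. 2)."""
    lum = 4.0 * np.pi * (d_mpc * MPC_TO_CM) ** 2 * np.asarray(flux_ha, float)  # [erg/s]
    return 10.0 ** (np.log10(lum) - 41.27)

def sfr_relative_error(rel_flux_err, d_mpc=59.38, d_err_mpc=4.16):
    """Fractional SFR uncertainty from the flux and distance errors (independent terms)."""
    return np.sqrt(rel_flux_err ** 2 + (2.0 * d_err_mpc / d_mpc) ** 2)
```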
### H alpha equivalent width and age determination
The H\(\alpha\) equivalent width, EW(\(H\alpha\)), gives us a good estimate of the ratio between the ionizing photons of massive stars and the continuum photons of the underlying stellar population, and it is commonly used to date star formation events. One of the models used to estimate ages is STARBURST99 (Leitherer et al., 1999), which is adopted in this work given that it was previously used in the analysis of HCG 31 (Lopez-Sanchez et al., 2004, Alfaro-Cuello et al., 2015). Therefore, we will be able to compare our findings with these previous studies.
STARBURST99 generates models for the evolution of different properties of a single stellar population, considering an instantaneous burst or a continuous star formation process. The predictions are available for five different stellar metallicities (2\(Z_{\odot}\), Z\({}_{\odot}\), 0.4\(Z_{\odot}\), 0.2\(Z_{\odot}\), and 0.005\(Z_{\odot}\)) and for three different initial mass functions, covering ages from 10\({}^{6}\) to 10\({}^{9}\) years. In this work we used models with Z = 0.004 and Z = 0.008, based on the observed oxygen abundance of the system.
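The age estimate itself is a one-dimensional interpolation of the observed EW(\(H\alpha\)) onto the model predictions. The sketch below assumes that a Starburst99 output table has already been read into the placeholder arrays `model_age_myr` and `model_ew`; no model values are reproduced here.

```python
import numpy as np

def age_from_ew(ew_obs, model_age_myr, model_ew):
    """Interpolate an observed EW(Halpha) onto a Starburst99 EW-age track (instantaneous burst)."""
    model_ew = np.asarray(model_ew, float)
    model_age_myr = np.asarray(model_age_myr, float)
    order = np.argsort(model_ew)            # np.interp requires increasing x values
    return np.interp(ew_obs, model_ew[order], model_age_myr[order])
```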
### Ionization mechanism: BPT diagnostic diagrams
Distinguishing the different ionization mechanisms in a galaxy is a very important and challenging topic in extragalactic astronomy. Baldwin et al. (1981) used a database of extragalactic objects to classify them according to their ionization mechanism. The predominant ionization mechanism in an extragalactic object could be i) photo-ionization by OB stars, ii) a power-law continuum source, and iii) shock wave heating (Baldwin et al., 1981). In this context, diagnostic diagrams based on emission-line ratios, for instance \(I([O_{III}]\,5007\,\mathrm{\AA})/I(H\beta)\) vs \(I([N_{II}]\,6584\,\mathrm{\AA})/I(H\alpha)\), have been extensively used during the last decades to determine ionization mechanisms (hereafter, BPT diagrams).
The limits that divide the different ionization mechanisms are not fully clear. Kewley et al. (2001) studied the properties of starburst galaxies using the codes _PEGASE_ and _STARBURST99_, which allowed them to derive an upper limit for the starburst region in the BPT diagrams.
Kauffmann et al. (2003) studied a sample of 22623 AGNs at \(0.02<z<0.30\) taken from the _Sloan Digital Sky Survey_ (SDSS) with the objective of separating star-forming galaxies from AGNs. Kauffmann et al. (2003) used the BPT diagrams and derived an empirical limit for the star-forming sequence. This limit lies below the line suggested by Kewley et al. (2001), and the zone between both limits corresponds to the so-called composite region, which probably hosts a mixture of the different ionization mechanisms.
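In practice, the classification in the [N ii]/H\(\alpha\) diagram reduces to comparing each spaxel with the two demarcation curves. The sketch below uses the Kewley et al. (2001) and Kauffmann et al. (2003) curves in their commonly quoted form; it works on the logarithmic line ratios of a single spaxel.

```python
def kewley01(log_n2ha):
    """Kewley et al. (2001) maximum-starburst line in the [N ii]/Halpha BPT diagram."""
    return 0.61 / (log_n2ha - 0.47) + 1.19

def kauffmann03(log_n2ha):
    """Kauffmann et al. (2003) empirical star-forming limit."""
    return 0.61 / (log_n2ha - 0.05) + 1.30

def bpt_class(log_n2ha, log_o3hb):
    """Return 'SF', 'composite' or 'AGN/LINER' for one spaxel."""
    if log_n2ha < 0.05 and log_o3hb < kauffmann03(log_n2ha):
        return "SF"
    if log_n2ha < 0.47 and log_o3hb < kewley01(log_n2ha):
        return "composite"
    return "AGN/LINER"
```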
### Star formation History
FADO allows us to reconstruct the Star Formation History (SFH) of an object by fitting a combination of Simple Stellar Populations (SSPs) to the spectrum of the galaxy, obtaining a population vector. This population vector is based on the Bruzual & Charlot (2003) stellar libraries. In order to calculate the ages and metallicities of the best-fitting vector, FADO subtracts the nebular continuum from the observed spectra. The resulting stellar population is used to visualize the SFH of the galaxy through the luminosity of the different SSPs at the normalization wavelength (see Gomes & Papaderos, 2017 for details). For HCG 31, we use 5100 Å as the normalization wavelength.
## 5 Results
### Radial Velocity: The complex velocity field of HCG 31
In Figure 3 (left panel) we display the radial velocity field of HCG 31, derived from the H\(\alpha\) emission line. The contours represent H\(\alpha\) emission at flux levels of 6.3\(\times\)10\({}^{-14}\) erg s\({}^{-1}\) arcsec\({}^{-2}\), 4.0\(\times\)10\({}^{-13}\) erg s\({}^{-1}\) arcsec\({}^{-2}\) and 1.0\(\times\)10\({}^{-11}\) erg s\({}^{-1}\) arcsec\({}^{-2}\). The velocity scale spans a range from 3950 km s\({}^{-1}\) to 4200 km s\({}^{-1}\). In this figure, the main kinematic structures are labeled and the black lines represent the mock slits that we used to derive the position-velocity diagrams.
HCG 31 shows complex kinematics, and we could not assign a single rotating pattern to the whole system (Amram et al., 2007, hereafter A07). We identified three different kinematic entities: i) the central region (A+C), ii) a western member called galaxy B, and iii) the southern tidal tail composed of three sub-regions (E1, E2), (H1, H2, H3) and (F1, F2, F3). These three entities are labeled with gray ellipses in the left panel of Figure 3. The group covers a very narrow range in velocity space (\(\sim 200\) km s\({}^{-1}\)), which was also reported by other authors (Rubin et al., 1990, Richer et al., 2003, Lopez-Sanchez et al., 2004, Amram et al., 2007). This indicates that the system has a very low velocity dispersion, which is expected for compact groups composed of late-type (or gas-rich) galaxies (Hickson et al., 1988).
The central region shows a rotating pattern from East to West, with a position angle (PA) \(\sim 120^{\circ}\). A07 derived different kinematic parameters for this region. They estimated a mass of approximately \(4.5\times 10^{9}M_{\odot}\), PA = \(130\pm 3^{\circ}\), and an inclination of \(52\pm 5^{\circ}\). The amplitude in radial velocity for this central region is about \(\sim 150\) km s\({}^{-1}\). To analyze the velocity gradient of galaxy A, we simulated the PA used by Verdes-Montenegro et al. (2005) in the analysis of the HI map of HCG 31 (Figure 3, left panel, horizontal line in region A+C). The velocity gradient, shown in Figure 5, displays values similar to those presented by previous researchers.
It should be noted that the slit passes through the zone where the H\(\alpha\) double components are seen in the high-resolution Fabry-Perot data of this system (A07). Thus, the velocity gradient shown in Figure 5 corresponds to the average motion of the multiple emission-line components, which cannot be resolved by MUSE. Richer et al. (2003) also showed Fabry-Perot data for HCG 31, with a resolution of R \(\sim\)7500, concluding that A+C corresponded to a single kinematic entity.
To confirm the kinematic nature of the central merger, we employed the MocKinG code6 described in Mercier et al. (2022) which allows one to fit a rotating disk model to the velocity field with fixed projection parameters across the galaxy (position angle of major axis, center, inclination and systemic velocity). We used a Courteau rotation curve model (Courteau, 1997) described by the following equation:
Footnote 6: [https://gitlab.lam.fr/bepinat/MocKinG](https://gitlab.lam.fr/bepinat/MocKinG)
\[v(r)=v_{\rm c}\,\frac{1+(r_{t}/r)^{\beta}}{(1+(r_{t}/r)^{\gamma})^{(1/\gamma)}}\, \tag{3}\]
where \(r\) is the radius, \(r_{t}\) is the transition radius, \(v_{\rm c}\) is the asymptotic velocity, \(\beta\) controls the steady rise at large radii, and \(\gamma\) is related to the sharpness of the turnover after \(r_{t}\). We use \(\beta=0\) to reduce the number of free parameters, as done in Gomez-Lopez et al. (2019). We also fixed the center to a position inferred from the geometrical center of the continuum emission.
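A direct transcription of Eq. (3) is shown below as a sketch; the actual MocKinG implementation may differ in detail (for instance in the normalisation conventions adopted when \(\beta=0\) is imposed).

```python
import numpy as np

def courteau_vrot(r, v_c, r_t, gamma, beta=0.0):
    """Courteau (1997)-type rotation curve of Eq. (3), evaluated at radius r (same units as r_t)."""
    x = r_t / np.asarray(r, dtype=float)
    return v_c * (1.0 + x ** beta) / (1.0 + x ** gamma) ** (1.0 / gamma)
```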
MocKinG uses a forward modeling approach that takes into account the limited spatial resolution of the data by weighting the velocity by the flux distribution within the point spread function and across each pixel (see the Appendix of Epinat et al., 2010, for more details). It can use either a Levenberg-Marquardt algorithm (Markwardt, 2009) or the multimodal nested sampling algorithm Multinest (Feroz & Hobson, 2008; Buchner et al., 2014) to find the fit parameters. We used the latter solution, which is much more robust to local minima.
The resulting velocity field, model and residuals are shown in Fig. 4 (top panel), where a clear pattern of rotation is observed.
The kinematic parameters obtained from the fit were: PA = \(102^{\circ}\pm 1^{\circ}\), \(v_{\rm c}=357\pm 7\) km s\({}^{-1}\), \(r_{t}=14.4\pm 0.3^{\prime\prime}\), \(\gamma=0.69\pm 0.1\), with the inclination fixed to \(i=30^{\circ}\). Overall, the residuals are quite homogeneous; however, locally they can reach values larger than \(\pm 50\) km s\({}^{-1}\), especially in the very center. This demonstrates that the HCG 31 A + C system cannot be described as a single rotating disk, despite exhibiting a rotational pattern. This reinforces the hypothesis proposed by A07, who suggested that what we are observing in the central region is the merging of the HCG 31 A and HCG 31 C galaxies.
There are also two sub-structures, A1 and A2, in the central region (Fig. 3, left panel). These sub-structures are part of the northern tidal tail of the group (Verdes-Montenegro et al., 2005). They are most likely composed of material stripped from member A by the interaction, which is now falling back into the galaxy (Mendes de Oliveira et al., 2006, Amram et al., 2007). The pseudo-slit across these objects does not show a considerable velocity variation in A1, but there is a low-amplitude gradient in A2 (\(\sim\)40 km s\({}^{-1}\)). The latter is counter-rotating relative to A+C, which suggests that it is currently falling back (A07). In contrast, A1 does not show a considerable velocity gradient. It is probably rotating tangentially, and its rounded shape may be evidence of that. In any case, its projected distance to the central merger (\(\sim\)9 kpc) is quite small and it does not have enough mass to gravitationally separate from the central merger (\(log(M/M_{\odot})\approx 6.6\)).
For galaxy B, A07 derived the following kinematic parameters: PA = \(-135\pm 3^{\circ}\), \(i=60\pm 5^{\circ}\), and a rotational velocity of \(\sim 120\) km s\({}^{-1}\). Our velocity gradient (applying an inclination correction assuming \(i=60\pm 5^{\circ}\)) is consistent with the determinations of A07 and also with the HI map presented by Verdes-Montenegro et al. (2005).
We also ran MocKinG to obtain the kinematic parameters of this galaxy. The model results are shown in Fig. 4 (bottom panel), with the kinematic parameters obtained being \(PA=-134\pm 1^{\circ}\), and rotation curve parameters compatible with a solid body rotation curve with a slope of \(4.3\pm 0.2\) km s\({}^{-1}\) arcsec\({}^{-1}\) for an inclination of 58\({}^{\circ}\) fixed from morphology.7 Residuals are low, within \(\pm 30\) km s\({}^{-1}\). However,
Figure 3: Left panel: H\(\alpha\) radial velocity field derived with ifscube. The black contours represent H\(\alpha\) emission. The gray ellipses show the three main kinematic entities identified visually (see text). The black lines represent the different slits used to analyze the internal kinematics of each object. Right panel: H\(\alpha\) velocity dispersion map. The zones with the highest velocity dispersion are in the center of the merger between A and C. Members B, E, and F show low velocity dispersions (\(<30\) km s\({}^{-1}\)). The group spans a range of velocity dispersion of 10 km s\({}^{-1}<\sigma<95\) km s\({}^{-1}\). However, our calculations probably overestimate these values (see text). In both panels north is up and east is to the left.
it is noteworthy that an interesting pattern is observed with negative residuals in the central regions and positive residuals in the outer parts.
The southern tidal tail is the most extended structure in our field of view (\(\sim\)20 kpc from E1 to F3). It is composed of three structures, from north to south: E, H, and F. In the HI map of this system (Williams et al., 1991, Verdes-Montenegro et al., 2005), this tail extends up to member G (not seen in our FoV), but its optical counterpart is formed by the different small-scale structures of the three main objects mentioned above. The velocity gradient shows a somewhat peculiar kinematic behavior (Fig. 5).
Galaxy E presents a steep velocity gradient with an amplitude of \(\sim\)70 km s\({}^{-1}\). According to A07, this structure is falling back onto A+C because of its counter-rotation. Within this structure we identified two main knots of star formation that we denominate E1 and E2 (Figure 3, left panel). The difference in velocity between these two knots is about \(\sim\)40 km s\({}^{-1}\), but with a smooth velocity transition between them, suggesting that they are part of the same kinematic entity.
Object H consists of a chain of three star-forming knots aligned with the tail. This faint structure has the highest velocity in the tail, with a peak of 4080 km s\({}^{-1}\) in H3 and amplitude of \(\sim\) 40 km s\({}^{-1}\) from H1 to H3. The northern part of the gradient seems to be connected to the southern part of the member E gradient. However, our H\(\alpha\) map has a very low SNR in the gap between these objects and this connection cannot be strictly verified with our data. As for the southern part of the gradient, this object shows a rapid velocity decrement in a very small projected distance, immediately before the H3 center. The velocity is about 4080 km s\({}^{-1}\) at 40 arcsec (\(\sim\) 11.2 kpc) and 4040 km s\({}^{-1}\) at 37 arcsec (\(\sim\) 10.4 kpc), that is a 40 km s\({}^{-1}\) variation over about \(\sim\)850 pc. This pronounced variation could be explained with the HI velocity map of this tidal tail (Verdes-Montenegro et al., 2005, figure 16) where it is seen that the center of H3 (e5 on Verdes-Montenegro et al., 2005) lies at the edge of a region kinematically detached from the tail.
The object F is composed of three different knots: F1, F2, and F3. This object is the most plausible TDG candidate in HCG 31 (Iglesias-Paramo & Vilchez, 1997, Richer et al., 2003, Lopez-Sanchez et al.,
Figure 4: MocKinG results for the central merger (top panel) and for galaxy HCG 31 B (bottom panel). Black contours represent H\(\alpha\) in emission.
2004, Mendes de Oliveira et al. 2006, Amram et al. 2007, Verdes-Montenegro et al. 2005, Gallagher et al. 2010), along with member R (Mendes de Oliveira et al. 2006) (out of our FoV). Member F does not show a velocity gradient in our position-velocity diagram; its velocity profile appears flat, with no substantial variations. We speculate that this kinematic behavior is the result of rotation in the plane of the sky. However, this hypothesis was refuted by A07, who argued that such an explanation is very unlikely because of the extended morphology of object F. The velocity of F1 and F2 is \(\sim\)3980 km s\({}^{-1}\), which is consistent with the HI motions and closer to the velocity of member E. Considering the velocity of member H, the values obtained for member F could lead to misinterpretations about the nature of the tail, which shows the importance of studying 2D velocity fields. Using long-slit data, Lopez-Sanchez et al. (2004) obtained a velocity gradient for the tidal tail, but they interpreted it as two different kinematic structures contained in the same object: a short and warped optical tail that ends in H, and an HI tidal tail that connects A+C with G.
F3 has a velocity of \(\sim\)4030 km s\({}^{-1}\). A smooth gradient between the south of F2 and F3 is seen in our position-velocity diagram, suggesting that they are part of the same object. Comparing our velocity field with the HI velocity map of the tail, we see that F1, F2, and F3 are part of the region kinematically detached from the tail. This favors the scenario in which F is a TDG in formation.
### Velocity dispersion
The H\(\alpha\) velocity dispersion (\(\sigma_{int}\)) (Fig. 3, right panel) was obtained by using the parameters derived from the single-Gaussian fitting performed by ifscube and a deconvolution of the instrumental and thermal dispersions.
Objects A1, A2, B, E, H, and F show low values of velocity dispersion (5 km s\({}^{-1}<\sigma_{int}<\) 30 km s\({}^{-1}\)) (Fig. 3, right panel). However, high-resolution data is required to understand the internal kinematics of these sources in detail, given that the MUSE resolution (50 km s\({}^{-1}\)) is not high enough to search for expanding shells using diagnostic diagrams; therefore, our determinations on these values are upper limits for these sources. Nonetheless, according to Moiseev et al. (2015), these velocity dispersion values (10 km s\({}^{-1}\) \(<\sigma_{int}<\) 30 km s\({}^{-1}\) ) are quite typical for H ii galaxies and giant H ii regions (Firpo et al. 2011).
The A+C complex is the most interesting zone in the velocity dispersion map. It hosts the highest values of \(\sigma_{int}\) in the HCG 31 system, with a peak of \(\sim\)95 km s\({}^{-1}\), and its distribution is not spatially correlated with H\(\alpha\) contours. The two main H\(\alpha\) knots in the central region show similar velocity dispersion values of \(\sim\)50 km s\({}^{-1}\).
It should be noted that the complex A+C shows double-peaked H\(\alpha\) profiles (A07), which cannot be resolved with the spectral resolution of our data.
Figure 5: Velocity gradients derived for the different pseudo slits represented in Figure 3 (left panel). The gradients run from east to west.
### Extinction: A 2D view of the dust distribution in HCG 31
The values of E(B-V) range from 0 to \(\sim 0.4\). In some spaxels we detected negative values, which indicates that the H\(\alpha\)/H\(\beta\) ratio is below the theoretical value of 2.86. However, this value assumes specific conditions (\(n_{e}=100\,cm^{-3}\), \(T_{e}=10^{4}K\)), which is not exactly the case for every zone in HCG 31 (Lopez-Sanchez et al., 2004). Thus, under different conditions, different intrinsic values of the H\(\alpha\)/H\(\beta\) ratio are expected. In these cases, we assumed that the extinction was null. The highest E(B-V) values were found in an H\(\alpha\) knot that is part of member A. The mean values of E(B-V) for each member of HCG 31 are presented in Table 1, col. (7) for the map average and col. (8) for the integrated spectra. Our values are consistent, within the uncertainties, with previous determinations using spectroscopic data (Lopez-Sanchez et al., 2004) or from estimates performed with the HI column density (Williams et al., 1991). However, our current analysis significantly improves the spatial coverage of this system.
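A minimal sketch of the Balmer-decrement estimate of E(B-V) is given below, assuming an intrinsic H\(\alpha\)/H\(\beta\) = 2.86 and the attenuation coefficients commonly quoted for the Calzetti et al. (2000) law; negative values are set to zero, as described above.

```python
import numpy as np

K_HB, K_HA = 3.61, 2.53   # Calzetti et al. (2000) k(lambda) at Hbeta and Halpha (commonly quoted values)

def ebv_from_balmer(f_ha, f_hb):
    """E(B-V) from observed Halpha and Hbeta fluxes; zero where the ratio falls below 2.86."""
    f_ha = np.asarray(f_ha, dtype=float)
    ratio = np.divide(f_ha, f_hb, out=np.full_like(f_ha, np.nan), where=np.asarray(f_hb) > 0)
    ebv = 2.5 / (K_HB - K_HA) * np.log10(ratio / 2.86)
    return np.clip(ebv, 0.0, None)
```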
Objects A1, A2, and A3 show very similar values of E(B-V), considering their uncertainties. In those regions our values of the SNR in H\(\alpha\) and H\(\beta\) are relatively low (\(\sim 5\)-\(10\)), and many spaxels with negative extinction are found, limiting the number of spaxels with reliable determinations of E(B-V). Therefore, in these regions we measured the extinction near the most intense H\(\alpha\) knot, maximizing the SNR and minimizing the uncertainties.
Objects A and A+C lie in the region with the highest SNR on our FoV, because of the overlapping of two MUSE fields (fields 1 and 2 on Figure 1). Thus, these objects present the lowest uncertainties in our E(B-V) determinations.
Object A, in particular, hosts the knot with the highest extinction on HCG 31, with a value of E(B-V) \(\sim 0.40\pm 0.06\). For A+C the absorption is lower, and it is spatially correlated with the H\(\alpha\) emission. In the south of A+C, there are several knots with high extinction (E(B-V) \(\sim 0.4\)) that are clearly distinguished in Figure 6 (top left panel).
In object B, the extinctions of its three main knots seem similar, with B2 being the most extinguished. In this galaxy we used the same procedure as in A1, A2 and A3 to obtain the extinction: we averaged the E(B-V) values for the points belonging to the H\(\alpha\) peak. It should be noted that this method only gives us an approximation of the extinction. The bridge at the NE of this object shows a mean extinction value of E(B-V) \(\sim 0.12\pm 0.08\). The two main knots in object E show similar values of E(B-V), with E2 being the one with the highest extinction and where the H\(\alpha\) emission is strongest. For the main body of E, the extinction was E(B-V) \(\sim 0-0.1\). In member H the emission of H\(\alpha\) and H\(\beta\) was very weak and had a low SNR. Thus, the uncertainties in determining E(B-V) are very high in its three main knots. In object F, the extinction seems to be slightly lower than in the rest of the knots in HCG 31. In F1 and F2 there is a clear spatial correlation between the H\(\alpha\) emission and the extinction.
### Electron Density
In Figure 7 we show the electron density map, derived from the line ratio of the sulfur doublet [S ii]\(\lambda\lambda\) 6717, 6731 Å and the equations presented in Perez-Montero (2014). The black contours represent H\(\alpha\) emission.
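As an illustration only, the conversion of the [S ii] doublet ratio into \(n_{e}\) can also be done with the PyNeb package (assumed to be installed), rather than with the Perez-Montero (2014) relations actually used here; both approaches should give comparable values in the low-density regime.

```python
import numpy as np
import pyneb as pn

def ne_from_sii(f6717, f6731, tem=1.0e4):
    """Electron density [cm^-3] from the [S ii] 6717/6731 flux ratio, at an assumed temperature."""
    s2 = pn.Atom('S', 2)
    ratio = np.asarray(f6717, float) / np.asarray(f6731, float)
    return s2.getTemDen(ratio, tem=tem, wave1=6717, wave2=6731)
```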
In general, electron densities in HCG 31 are in the low-density region (n\({}_{e}\) <100 \(\,\)cm\({}^{-3}\)), with a peak in the knots located in members A and C. The knot in member A has an average electron density of n\({}_{e}\)\(\sim 170\pm 30\) cm\({}^{-3}\) with a peak of \(\sim 300\) cm\({}^{-3}\), and the knot in C has an average electron density of \(100\pm 70\) cm\({}^{-3}\) with a peak of \(\sim 140\) cm\({}^{-3}\). In all the other members, the electron density remained below 100 cm\({}^{-3}\). These measurements were consistent with the values known for extragalactic H ii regions (n\({}_{e}\) < 500 \(\,\)cm\({}^{-3}\) Bresolin et al., 2005).
Several authors have studied the effects of interaction on electron density. For example, Krabbe et al. (2014) studied seven pairs of interacting galaxies and determined that these systems showed higher electron densities (n\({}_{e}\) = 24-532 cm\({}^{-3}\)) than isolated galaxies (n\({}_{e}\) = 40 - 137 cm\({}^{-3}\)). Our values are consistent with those ranges.
### Oxygen abundance determinations
To have a complete view of the oxygen abundance in HCG 31, we obtained oxygen abundance maps using the strong-line empirical calibrators N2 and O3N2. The abundance maps obtained with these calibrators are presented in the top right and bottom left panels of Figure 6, respectively. In addition, in the bottom right panel of Figure 6 we show the corresponding oxygen abundance map derived with the code HII-CHI-mistry.
In Figure 6 (top right panel) the slits used to obtain the metallicity gradients of the system are shown (see section 5.6). In this figure we observed a slight transition from \(12+log(O/H)=8.23\pm 0.16\) (N2) in HCG 31 C to \(12+log(O/H)\)= \(8.40\pm 0.16\) (N2) in HCG 31 A. Inspecting Figure 6 (bottom left panel), especially in the central region of HCG 31, we observed a very similar trend, with a more metallic knot in the south (knot A).
Using GMOS-IFU data, Torres-Flores et al. (2015) studied the central zone of HCG 31, and found a metallicity gradient in the line that connects the main burst of star formation in HCG 31 A and C. Our maps suggest the same behavior for oxygen abundance, especially for N2 and O3N2.
Objects A1, A2, and A3 have very similar values of \(12+log(O/H)\) spanning a range of \(8.3<12+log(O/H)<8.4\). Taking into account the uncertainties, we consider that these three objects have the same metallicity, with values similar to HCG 31 A (we note that this trend was observed for all calibrators). This finding is consistent with the tidal origin of these sources, as proposed by several authors (Amram et al., 2007, Lopez-Sanchez et al., 2004)
Galaxy B showed a fairly homogeneous distribution of metallicity at the surroundings of the knot B2 with \(12+log(O/H)\)\(\sim 8.3\) and a drop to lower metallicities at B1 and B3 with \(12+log(O/H)\)\(\sim 8.1\). The bridge that connects B and A+C seemed to have the same metallicity as the main body of B.
The mean values of the oxygen abundance for sources E and H are \(12+log(O/H)=8.25\pm 0.16\) and \(12+log(O/H)=8.20\pm 0.18\), respectively. These values are similar to the abundances of the central complex A+C, suggesting a tidal origin for sources E and H, similar to objects A1, A2, and A3.
The member F is the most metal-poor object, while F1 and F2 presented the same metallicity for all calibrators. The mean value of metallicity was \(12+log(O/H)\)\(\sim 8.06\).
The highest metallicities were detected for the main body of galaxy A and for the region between the knots of galaxy B, with values of \(12+log(O/H)\)\(\sim 8.55\) (Fig. 6, bottom right panel). This value is considerably higher than the values of these regions obtained with the N2 and O3N2 calibrators, where the metallicities span a range of \(8.2<12+log(O/H)<8.4\). This difference could be due to the dependence with respect to the ionization parameter or some relative abundances, such as the nitrogen-to-oxygen ratio.
### Oxygen abundance gradients
We proposed to study the metal distribution of HCG 31 in two structures: galaxy B and the southern tidal tail. In order to carry out this
\begin{table}
\begin{tabular}{l c c c c c c c}
Object & \multicolumn{2}{c}{12 + log(O/H)} & \multicolumn{3}{c}{log Mass [\(M_{\odot}\)]} & \multicolumn{2}{c}{E(B-V)} \\ \hline
(1) & N2 + O3N2 (2) & HII-CHI-mistry (3) & FADO (4) & LS10 (5) & M06 (6) & Map average (7) & Integrated spectra (8) \\ \hline
A1 & 8.34 \(\pm\) 0.22 & 8.52 \(\pm\) 0.05 & 6.94 & -- & 8.63 & 0.19 \(\pm\) 0.12 & 0.17 \(\pm\) 0.09 \\
A2 & 8.39 \(\pm\) 0.21 & 8.55 \(\pm\) 0.04 & 8.05 & -- & -- & 0.20 \(\pm\) 0.16 & 0.16 \(\pm\) 0.10 \\
A3 & 8.30 \(\pm\) 0.21 & 8.52 \(\pm\) 0.04 & 7.18 & -- & -- & 0.18 \(\pm\) 0.17 & 0.18 \(\pm\) 0.13 \\
A & 8.35 \(\pm\) 0.21 & 8.55 \(\pm\) 0.02 & 8.73 & -- & -- & 0.16 \(\pm\) 0.08 & 0.11 \(\pm\) 0.05 \\
A+C & 8.25 \(\pm\) 0.21 & 8.31 \(\pm\) 0.02 & 8.61 & 9.05 & 9.49 & 0.12 \(\pm\) 0.02 & 0.13 \(\pm\) 0.01 \\
B & 8.31 \(\pm\) 0.21 & 8.45 \(\pm\) 0.02 & 8.65 & 8.90 & 9.28 & 0.15 \(\pm\) 0.08 & 0.10 \(\pm\) 0.06 \\
E & 8.23 \(\pm\) 0.21 & 8.42 \(\pm\) 0.04 & 7.37 & 7.67 & 8.03 & 0.19 \(\pm\) 0.10 & 0.20 \(\pm\) 0.10 \\
F1 & 8.06 \(\pm\) 0.21 & 8.17 \(\pm\) 0.04 & 6.85 & -- & 7.86 & 0.10 \(\pm\) 0.04 & 0.08 \(\pm\) 0.02 \\
F2 & 8.10 \(\pm\) 0.21 & 8.12 \(\pm\) 0.09 & 5.97 & -- & 7.35 & 0.07 \(\pm\) 0.06 & 0.06 \(\pm\) 0.03 \\
F(F1 + F2) & 8.08 \(\pm\) 0.21 & 8.15 \(\pm\) 0.07 & 6.90 & 7.59 & 7.98 & 0.09 \(\pm\) 0.05 & 0.07 \(\pm\) 0.03 \\
\end{tabular}
\end{table}
Table 1: Physical parameters for the members of HCG 31. (Col. 1) ID of the object. (Col. 2) Oxygen abundance obtained from the average of the N2 and O3N2 methods. (Col. 3) Oxygen abundance obtained with HII-CHI-mistry. (Col. 4) Mass determination performed by FADO. (Col. 5) Mass determined by López-Sánchez & Esteban (2010). (Col. 6) Adopted mass obtained from the K-band luminosities taken from Mendes de Oliveira et al. (2006). (Col. 7) E(B-V) obtained from an average over the map and (Col. 8) E(B-V) obtained from the integrated spectra of each galaxy.
Figure 6: Top left panel: E(B-V) map of HCG 31. White contours represent H\(\alpha\) in emission. The group shows low values of E(B-V) except in the central zone, where E(B-V) peaks in a knot of galaxy HCG 31 A. Top right panel: oxygen abundance map obtained with the strong-line empirical calibrator N2; the black lines represent the pseudo-slits used to derive the oxygen abundance gradients and the red circle represents the center used for the gradient of galaxy B. Bottom left panel: oxygen abundance map obtained with the strong-line empirical calibrator O3N2. Bottom right panel: oxygen abundance map obtained with the code HII-CHI-mistry.
analysis, we simulated two different slits passing through these structures (Fig. 6, top right panel). In the southern tidal tail, we used the same slit as defined in section 5.1. In galaxy B we split the slit into two parts to cover the connection between the knots B1, B2 and B3, including the bridge between B and A+C. The slits had a width of 1 arcsec and we binned the gradients in 1 arcsec bins to match the seeing (\(\sim\)1 arcsec).
The oxygen abundance gradients derived for the different slits are presented in Figure 8. The data were fitted using the curve_fit routine of scipy (Virtanen et al., 2020), which allowed us to take the uncertainties into account when performing the linear regression. The parameters obtained for the fits are listed in Table 2. Error bars in Figure 8 represent the propagated errors coming from the flux uncertainties. At the top right of Figure 8 we show the dispersion associated with each calibrator (0.16 dex for N2, red bar; 0.18 dex for O3N2, blue bar). The segmented lines represent the linear fits performed (yellow, blue and green represent the fits for the HII-CHI-mistry, O3N2 and N2 calibrators, respectively).
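A minimal sketch of this weighted linear fit is given below, assuming `r_kpc`, `oh` and `oh_err` are the binned galactocentric distances, oxygen abundances and their uncertainties extracted along a pseudo-slit; the array names are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(r, zero_point, slope):
    """Linear abundance gradient: 12+log(O/H) = zero_point + slope * r."""
    return zero_point + slope * r

def fit_gradient(r_kpc, oh, oh_err):
    """Weighted linear regression; returns best-fit parameters and their 1-sigma errors."""
    popt, pcov = curve_fit(linear, r_kpc, oh, sigma=oh_err, absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))
    return popt, perr   # (zero point [dex], slope [dex/kpc])
```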
Inspecting the gradient of galaxy B, we noted a somewhat steep gradient in the main body of the galaxy, with a peak that is not coincident with the center of the galaxy. This offset was about 5 arcsec (1.4 kpc), and it was also observed in the HII-CHI-mistry estimates.
For the bridge of galaxy B, which starts at around 17 arcsec in Figure 8 (top panel), a quite flat metallicity gradient was observed.
We performed a linear fit on the metallicity distribution of member B, using the measurements on its main body and bridge and the results are shown in Table 2. We found a gradient of \(\alpha=-0.012\pm 0.002[dex/kpc]\), with the N2 method, suggesting a flat oxygen distribution which is expected for a galaxy with interaction signatures (Kewley et al., 2010, Rich et al., 2012).
We compared this metal distribution with the results obtained for the galaxies NGC 4656 and NGC 55, which display similar morphological types (SB(s)m, taken from NED) and whose gradients were studied by Munoz-Elgueta et al. (2018) and Magrini et al. (2017), respectively (Fig. 8, top panel). We found slopes similar to those systems, suggesting flat metal distributions. However, the metallicities in NGC 4656 and NGC 55 are lower than in HCG 31 B by \(0.1-0.2\) dex.
The drop in the central metallicities observed in HCG 31 B is an interesting feature that could be related to gas inflows in the galaxy. Sanchez et al. (2014) found similar results for some galaxies of their sample. They proposed that in many cases this drop is produced by the presence of a star-forming ring in the nuclear region of the galaxy. This scenario is difficult to prove in HCG 31 B because the galaxy is seen nearly edge-on; therefore, we cannot discard it. However, we can speculate that a gas inflow induced by the interaction with the HCG 31 A+C complex is diluting the central metallicities and triggering star formation in the central zone of this galaxy, producing the central drop in the metallicity gradient.
In the bottom panel of Figure 8 we show the oxygen abundance gradient of the southern tidal tail, which includes members E, H and F; at the top right we represent the dispersion associated with the empirical calibrators (0.16 dex for N2 and 0.18 dex for O3N2). The gradient begins at the midpoint between knots A and C (HCG 31 A+C complex), because some authors have suggested that this tail was formed from material detached from galaxy C (Amram et al., 2007). In Figure 8 (bottom panel) we include the oxygen abundance distributions of other tidal tails located in compact groups of galaxies, namely NGC 92 (Torres-Flores et al., 2014) and NGC 6845 (Olave-Rojas et al., 2015). We note that NGC 92 and NGC 6845 are spiral galaxies that belong to groups at a less advanced interaction stage (no strong evidence of merging yet in these systems). The gradient reveals that the tidal tail of HCG 31 is less metallic than those of NGC 92 and NGC 6845 by \(\sim\)0.4 dex and \(\sim\)0.3 dex, respectively, and that the scale lengths are quite different: the tidal tail of NGC 92 starts at a radius of \(\sim\)10 kpc and extends for \(\sim\)25 kpc; for NGC 6845 the tail starts at 14 kpc and extends for \(\sim\)70 kpc; while the southern tail of HCG 31 starts at \(\sim\)3 kpc and extends for about \(\sim\)20 kpc.
Flat oxygen abundance gradients can be explained by gas flows induced during galaxy-galaxy interactions (e.g. Rupke et al., 2010a).
Assuming that gas flows were the cause, one would expect to find a velocity gradient through the tidal tail; but, as explained in Section 5.1, the position-velocity diagram through the southern tidal tail revealed a velocity gradient only through objects E and H, while F showed a flat velocity profile because it is kinematically detached from the tail. These findings suggest that gas flows could explain the flat gradient only for objects E and H. For member F, accretion of metal-poor gas that triggers star formation (Amram et al., 2007) could be the explanation for its slightly lower metallicity.
### H alpha luminosities and Star Formation Rates: Witnessing the stellar birth in a merging system
Using the H\(\alpha\) map, corrected for extinction and distance, we derived the SFR map of HCG 31, which is shown in the top left panel of
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Method & Zero point [dex] & Slope [dex/kpc] \\ \hline
\multicolumn{3}{c}{Galaxy B} \\ \hline
N2 & 8.26 \(\pm\) 0.02 & -0.012 \(\pm\) 0.002 \\
O3N2 & 8.27 \(\pm\) 0.01 & -0.004 \(\pm\) 0.001 \\
HII-CHI-mistry & 8.43 \(\pm\) 0.03 & 0.002 \(\pm\) 0.001 \\ \hline
\multicolumn{3}{c}{Southern Tail} \\ \hline
N2 & 8.35 \(\pm\) 0.02 & -0.012 \(\pm\) 0.001 \\
O3N2 & 8.29 \(\pm\) 0.01 & -0.007 \(\pm\) 0.001 \\
HII-CHI-mistry & 8.61 \(\pm\) 0.03 & -0.020 \(\pm\) 0.001 \\ \hline
\end{tabular}
\end{table}
Table 2: Results of the linear regression for the abundance gradients in galaxy B and the southern tidal tail with the different methods
Figure 7: Map of the electron density derived with the [S ii] \(\lambda\lambda\)6717, 6731 Å lines. The contours are H\(\alpha\) in emission. Almost the whole group shows low densities, with \(n_{e}<100\,cm^{-3}\). In the central knots of A+C a density peak is seen, with one knot showing \(n_{e}\sim 250\,cm^{-3}\).
Figure 9. This map was obtained by using the calibration proposed by Kennicutt & Evans (2012). In the central zone of the system, we obtained a total SFR of \(\sim 2.99\pm 0.49\,M_{\odot}\) yr\({}^{-1}\) (Fig. 9, black ellipses, top left panel). Assuming a stellar mass of \(logM_{stellar}\sim 8.5\) (Lopez-Sanchez & Esteban, 2010), we obtained a specific star formation rate (sSFR) of \(log(sSFR)\sim-8.22\) for the central zone of HCG 31. If we place HCG 31 in the sSFR vs stellar mass plane (Atek et al., 2014), we find that the system lies at the locus of galaxies at redshifts of \(0.7<z<1.0\). This is indicative of the strong starburst episode that the system is experiencing, comparable to the SFRs of galaxies located at higher redshifts. If we consider the \(H_{2}\) mass concentrated in the central zone of the system (\(2.9\times 10^{8}M_{\odot}\), Yun et al., 1997) together with the current SFR, 75% of the \(H_{2}\) mass would be depleted in \(\sim\)100 Myr.
### H alpha equivalent width and ages
In the top right panel of Figure 9 we show the EW(\(H\alpha\)) map of HCG 31, derived with FADO. Black contours represent the H\(\alpha\) emission. The whole group spans a range of \(10\) Å \(<EW(H\alpha)<2000\) Å, where the highest values are associated with galaxy F and with the central merger, while the lowest values are associated with member B. The EW(\(H\alpha\)) map is also used to estimate the ages of the star-forming knots, by interpolating it onto the models of STARBURST99. The results are shown in the bottom left and bottom right panels of Figure 9, respectively. The ages span a range from 3 Myr to 10 Myr. The youngest regions are found in galaxy F and in the central zone of the system, with ages of \(\sim 3\) Myr. The oldest regions are found at the center of galaxy B, with ages of \(\sim 10\) Myr in the maps for both metallicities.
Figure 8: Metallicity gradients and their fit parameters derived for HCG 31: (top) galaxy B and (bottom) the southern tidal tail. For comparison we include the gradients of other similar objects in both panels. For more details see section 5.5.
### The Wolf-Rayet bump
Several previous studies have detected WR features in the spectra of HCG 31 (Kunth & Schild 1986, Lopez-Sanchez et al. 2004, Lopez-Sanchez & Esteban 2010). Kunth & Schild (1986) were the first authors to report WR features in HCG 31. They detected the He ii \(\lambda\)4686 Å and N iii \(\lambda\)4640 Å features, which are related to the emission of nitrogen Wolf-Rayet stars (WN). Guseva et al. (2000) detected the same lines and added N iii \(\lambda\)4512 Å and Si iii \(\lambda\)4565 Å in the central zone of HCG 31. The blending of all these lines in the spectrum of a galaxy is the so-called blue bump and it is widely used in the literature to estimate the number of WN stars in a galaxy (Vacca & Conti 1992, Lopez-Sanchez & Esteban 2010). In the case of HCG 31, Lopez-Sanchez et al. (2004) also detected the blue bump in their long-slit spectra. In our case, the spectral coverage of our data did not allow us to completely detect the blue bump. However, we could detect another feature that is related to the emission of carbon Wolf-Rayet stars (WC), the so-called red bump. This red bump is produced by the C iv \(\lambda\)5808 Å line. In the case of HCG 31 this feature is very weak and difficult to detect. For example, Guseva et al. (2000) and Lopez-Sanchez et al. (2004) did not detect this line in their spectra, probably because the WR population of this system is dominated by WN stars (Guseva et al. 2000, Lopez-Sanchez & Esteban 2010). Nevertheless, in our data we can detect this weak feature in the central zone of this system.
To map the red bump, we performed a linear fit in the continuum region near to the spectral feature, and then removed it from the data. Using the continuum-subtracted spectra we collapsed the datacube
Figure 10: Intensity map of the red bump obtained for the central region of HCG 31; black contours represent H\(\alpha\) in emission.
Figure 9: Top left panel: Map of star formation rate for HCG 31 derived using the calibration of Kennicutt & Evans (2012), the ellipses represent the regions where the spectra were integrated (see text). Top right panel: Map of H\(\alpha\) equivalent width derived by FADO; black contours represent H\(\alpha\) in emission. Bottom left panel: Map of ages obtained by interpolating with the models of Starburst99 assuming an instantaneous star formation event. A Salpeter IMF was assumed with mass limits 1-100 \(M_{\odot}\) and Z = 0.004. Bottom right panel: Same as bottom left panel but with Z = 0.008.
in the range of 5838 Å to 5934 Å, producing a 2D image. This map is shown in Figure 10, where we detect the red bump only in the inner central zone of HCG 31 A+C. It should be noted that the spatial location of the WC stars is consistent with the ages estimated for this zone in the previous section (5-6 Myr).
We integrated all the spectra of the spaxels seen in Figure 10 and calculated the total luminosity of the red bump. The luminosity obtained is \(\mathrm{L}_{obs}(\mathrm{C}_{IV}\ 5808\ \mathrm{\AA})=(1.09\pm 0.05)\times 10^{39}\ [ \mathrm{erg\,s^{-1}}]\). In order to obtain the number of WC stars we used the equations presented by Lopez-Sanchez & Esteban (2010) and assumed an average metallicity of \(12+log(O/H)=8.40\pm 0.19\). We obtained a total number of carbon Wolf-Rayet stars (WCE using the notation of Lopez-Sanchez & Esteban, 2010) of \(N_{WCE}=492\pm 75\). This number is twice as high as in previous studies. Guseva et al. (2000) obtained \(N(WCE)=246\) and Lopez-Sanchez & Esteban (2010) obtained \(N(WCE)=206\). It is important to note that previous studies determined the number of WCE stars using long-slit spectroscopy instead of integral field spectroscopy; therefore, aperture effects cannot be neglected.
### Ionization mechanism
We used standard diagnostic diagrams to elucidate the ionization mechanisms in HCG 31. In Figure 11 we show the spatially resolved diagnostic diagrams, where the top, middle and bottom panels are \([O_{III}]/H\beta\) vs \([N_{II}]/H\alpha\), \([O_{III}]/H\beta\) vs \([S_{II}]/H\alpha\) and \([O_{III}]/H\beta\) vs \([O_{I}]/H\alpha\), respectively. We found that most of the system is primarily ionized by massive stars produced by recent star formation (pink points in Figure 11). Yellow points represent ionization by shocks/LINER, which is negligible, as can be seen in the top panel of Figure 11. Red points are associated with an AGN as the main ionization mechanism. This scenario is unlikely, because the spatial distribution of the red points is not consistent with the presence of an AGN: if an AGN were the main ionization mechanism, we should find a more central distribution, not a random distribution in the outskirts of the galaxies (Figure 11).
Inspecting the central zone, we found that knot A lies in the composite region of the diagnostic diagram that uses the \([N_{II}]/H\alpha\) ratio. This outcome could be associated with a violent star formation event that triggered massive stellar winds and shocks. If we inspect the velocity dispersion map (\(\sigma_{int}\)) of this zone we find \(\sigma_{int}\sim 60\,km\ s^{-1}\), which is lower than the velocity expected for shocks (\(\sigma_{int}>90\,km\ s^{-1}\), Rich et al., 2015), but higher than the velocity expected for a regular H ii region (\(\sigma_{int}\sim 30\ \mathrm{km\,s^{-1}}\), Moiseev et al., 2015). However, different regions can be observed along the line of sight, and therefore these values of sigma could still indicate shocks. Also, as mentioned in previous sections, the widths that we measured are likely to be overestimated due to multiple unresolved components, thus the velocity dispersion that we present is an upper limit for this region. This allows us to conclude that this knot is ionized by a combination of shocks and star formation. This scenario is supported by a high EW(H\(\alpha\)), a high H\(\alpha\) luminosity (SF evidence) and the line ratios obtained (shocks/composite ionization evidence). The main ionization mechanism for the remaining galaxies is star formation.
We also included a color-coded panel for sigma in the three different BPT diagrams. We did not observe a strong correlation between the increase in velocity dispersion and the contribution of shocks, at least in the sulfur and oxygen diagrams (middle and bottom panels, respectively). In the BPT diagram using nitrogen, we did observe a weak correlation between the increase in velocity dispersion and proximity to the composite zone. This may indicate stronger evidence that shocks are contributing significantly to the ionization of the central region of the system.
### The mass-metallicity relation of HCG 31
It is well established that there is a correlation between the stellar mass of galaxies and their gas-phase metallicities (Tremonti et al., 2004, Yates et al., 2020). This correlation, known as the mass-metallicity relation (MZR), has been widely used to study the formation and evolution of galaxies. Weilbacher et al. (2003) showed that TDGs do not follow the MZR; thus, plotting this relation for the members of HCG 31 and inspecting the positions of the different objects should give us clues about their formation. Several works have studied the luminosity-metallicity relation of HCG 31 with the purpose of discriminating between primordial and tidal dwarf galaxies in this group. Richer et al. (2003) and Lopez-Sanchez et al. (2004) studied the M\({}_{B}\) vs \(12+\log(\)O/H\()\) relation for this group and Mendes de Oliveira et al. (2006) studied the M\({}_{K}\) vs \(12+\log(\)O/H\()\) relation. In the case of the M\({}_{B}\) vs \(12+\log(\)O/H\()\) relation, the data suggest that the members A, B, C and G are very luminous for their respective metallicities. This could be explained by the fact that the magnitude in the B-band filter is probably contaminated by the luminosity from the star formation processes in the system (Lopez-Sanchez et al., 2004). In contrast, Mendes de Oliveira et al. (2006) posited that the M\({}_{K}\) vs \(12+\log(\)O/H\()\) relation suggested that members F1, F2, E2, A1, H and R are TDGs or tidal debris.
To plot the MZR for HCG 31, we integrated the spectra of nine different objects: A1, A2, A3, A, A+C, B, E, F1 and F2. We ran FADO on each spectrum, which allowed us to obtain the stellar mass of each object. The metallicities for each spectrum were based on the average of N2 and O3N2, and on HII-CHI-mistry. The results are listed in columns (2) and (3) of Table 1. The masses obtained are listed in column (4) of Table 1. For comparison, we included other mass estimations derived in previous studies. In column (5) we show the masses obtained by Lopez-Sanchez & Esteban (2010) for the different members of HCG 31. In column (6) we list stellar masses derived from the \(K-band\) magnitudes published by Mendes de Oliveira et al. (2006). In this case we assumed a mass-to-light ratio for each member, based on their \(g^{\prime}-r^{\prime}\) colors (see Bell et al., 2003), and assuming a solar \(M_{K}=5.08\ [mag]\) (Willmer, 2018).
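A minimal sketch of this K-band mass estimate is given below: the absolute magnitude is converted into a luminosity and multiplied by a colour-dependent mass-to-light ratio. The parameter `ml_k` stands for the \((g^{\prime}-r^{\prime})\)-based \(M/L_{K}\) from Bell et al. (2003), whose coefficients are not reproduced here.

```python
M_K_SUN = 5.08   # solar absolute K-band magnitude (Willmer 2018)

def stellar_mass_k(abs_mag_k, ml_k):
    """Stellar mass [Msun] from an absolute K magnitude and an assumed M/L_K [Msun/Lsun]."""
    l_k = 10.0 ** (-0.4 * (abs_mag_k - M_K_SUN))   # K-band luminosity in solar units
    return ml_k * l_k
```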
As a control sample we used the MZR derived by Lee et al. (2006), who estimated masses and metallicities for 27 nearby dwarf galaxies. This relation is shown by black points in Figure 12. It should be noted that the metallicities considered in this control sample were obtained through the direct method (Lee et al., 2003). In Figure 12, we present the MZR of HCG 31. We see that galaxies A, B and the A+C complex follow the main MZR, which is expected for these kinds of galaxies (Mendes de Oliveira et al., 2006) and also favours the interpretation that the LZR with the M\({}_{B}\) band is contaminated by the luminosity of newly formed stars. Other objects seem to be out of the main trend, which is not surprising considering the scenario in which all these objects were formed in the tidal tails, after the first gravitational encounter in HCG 31 (Amram et al., 2004, 2007).
## 6 Discussion and conclusions
In this paper we present an analysis of the physical and kinematic properties of the complex compact group of galaxies called HCG 31. Considering the large field of view covered in the current analysis, this study improves our understanding of this system, which has been previously studied by different authors using different observational
Figure 11: BPT diagrams for the whole system. The contours represent \(H\,\alpha\) in emission. Most of the points lie in the SF sequence, while nearly all of the points in the AGN/LINER zones lie in the outskirts of the galaxy. The reason for this may be the low S/N in these regions, or an eventual contribution from shocks. In the left panel we present the diagrams color-coded with the velocity dispersion. The middle panel shows the BPT diagrams and the right panel the 2D distribution of the points. The borderlines in the BPTs correspond to the delimitations obtained in Kewley et al. (2001) and Kauffmann et al. (2003) (separation between SF and AGN) and in Schawinski et al. (2007) (separation between AGN and LINER).
approaches (Iglesias-Paramo & Vilchez, 1997; Johnson et al., 1999, Gallagher et al., 2010, Rubin et al., 1990, Lopez-Sanchez et al., 2004, Mendes de Oliveira et al., 2006, Amram et al., 2007, Alfaro-Cuello et al., 2015). In this section, we discuss the main findings of this work.
### Abundance gradients and the influence of the environment
The metallicity distribution in spiral galaxies has been extensively studied in recent years (van Zee et al., 1998; Bresolin et al., 2012; Sanchez et al., 2014). Most giant galaxies show a clear abundance gradient, with the center of the galaxy being more metallic than the outskirts. However, several observational studies have proven that interacting galaxies show flatter abundance gradients than isolated galaxies (Kewley et al., 2010; Torres-Flores et al., 2014; Olave-Rojas et al., 2015). Numerical simulations showed that this flattening of the gradients is mainly produced by gas inflows towards the nuclear zones that are triggered by gravitational encounters and mergers (Torrey et al., 2012). In addition, observational studies have shown that dwarf galaxies have small abundance gradients or none at all (Roy et al., 1996; Hunter & Hoffman, 1999; Croxall et al., 2009; Izotov et al., 2006). Pilyugin et al. (2015) found that irregular dwarf galaxies could present an abundance gradient depending on their surface brightness profile, breaking the spiral vs dwarf dichotomy, which says that spiral galaxies display strong abundance gradients whereas irregular dwarf galaxies do not. Recently, James et al. (2020) used MUSE observations of the dwarf galaxy JKB 18 and discovered that it has an inhomogeneous chemical distribution, providing more evidence that not all irregular dwarf galaxies are chemically homogeneous.
In this paper, we present detailed metallicity maps of the HCG 31 system, based on several strong-line empirical calibrators. The excellent spatial coverage provided by MUSE allowed us to derive the metallicity gradient for galaxy B, the southern tidal tail and for the central zone of the system. In the case of galaxy B, we detected an almost flat metallicity distribution with a slope of \(\alpha\) = -0.012 \(\pm\) 0.002 dex \(\rm\ kpc^{-1}\) (N2 calibrator), which provided information about the chemical homogeneity of this system, with no significant variations in its metallicity. This finding is expected for a merging galaxy (Rupke et al., 2010).
An interesting feature in the gradient of this galaxy was the central drop in metallicity seen at \(\sim\)5 arcsec. This kind of decrease seems to be similar to those detected by Sanchez et al. (2014) in several galaxies in their sample. However, those authors found no correlation between these decreases and interaction features in the galaxies. Here we propose three different scenarios to explain this central drop in HCG 31 B.
Amram et al. (2007) found that the receding and approaching sides of the rotation curve of this galaxy did not match. The most prominent disagreement in the rotation curve was at the inner 5 arcsec. Amram et al. (2007) argued that this inner disagreement might be the sign of a bar, which could be inducing radial motions of gas that were flattening the central metallicities. However, it should be noted that there was no correlation between the presence of a bar and the central drop in metallicity (Sanchez et al., 2014). Another possibility is that the galaxy is currently accreting metal-poor gas, which is inducing the starburst and producing the central drop in metallicity. This scenario is unlikely because a galaxy that is accreting material shows a drop in metallicity of \(\sim\)0.5 dex (Sanchez-Almeida et al., 2014), and the central drop that we detected was \(<\)-0.2 dex regardless of the method used. Accretion of a partially enriched gas of the group could explain this drop instead of pristine gas.
Lastly, one other possibility is that the galaxy hosts a central star-forming ring which constitutes evidence of radial gas flows induced by resonance processes. This is the scenario that Sanchez et al. (2014) used to explain the central drop in the galaxies of their sample. However, they did not find evidence of signs of interaction or bars in these galaxies. In our case, galaxy B showed both signs, thus, we speculated that a combination of the presence of a bar and gas accretion could be responsible for the central metallicity drop in this galaxy.
A nonphysical explanation could be that the drop is artificial and
Figure 12: The MZR relation derived for HCG 31 (green and red points). For comparison, we included the MZR of nearby dwarf galaxies derived by Lee et al. (2006) as black points. The masses considered were calculated by FADO. The adopted metallicity was obtained from an average of N2 and O3N2, and the red points stand for the HII-CHI-mistry determinations.
just a random effect of the strong-line methods used. Indeed, if we consider the 0.16 dex uncertainties of the strong-line methods, the drop disappears. We used three different methods to plot the gradient (N2, O3N2 and HII-CHI-mistry) and with each method we observed the same effect, which supports our conclusion that the observed drop is real. A determination of the metallicity gradient of galaxy B with the direct method is required to verify the presence of this central drop.
For the southern tidal tail we also detected a flat metallicity distribution, with slopes of \(\beta=-0.012\pm 0.001\) dex kpc\({}^{-1}\) (N2 calibrator) and \(\beta=-0.007\pm 0.001\) dex kpc\({}^{-1}\) (O3N2 calibrator). Member F seems to be slightly less metallic than members E and H, which could be a consequence of the current metal-poor gas accretion. Several works have studied the metallicity gradients of tidal tails (Chien et al. 2007, Torres-Flores et al. 2014, Olave-Rojas et al. 2015) and their results suggested that tidal tails show flat metallicity gradients. A similar effect was found at larger radii (R \(>\) R\({}_{25}\)) in normal disk galaxies (Sanchez et al. 2014). However, in our literature search we did not find a metallicity gradient determination in a system similar to HCG 31; thus, we can only compare our gradient with gradients of tails in more massive systems. The origin of this flattening effect is generally attributed to streaming motions inside the tail that redistribute the gas. If this is the case, one would expect to find a velocity gradient along the tail, and indeed, this is exactly what we found for this tail. An increasing velocity is observed from object E to H, but in object F we did not detect the same gradient. We observed a flat velocity gradient for this galaxy, which indicates counter-rotation with respect to the tail (Verdes-Montenegro et al. 2005, Amram et al. 2007).
### Are we witnessing ionization induced by shocks in HCG 31?
An interesting region located in the central A+C complex is the brightest H\(\alpha\) knot. This knot is usually referred to as the nucleus of galaxy HCG 31 A (Torres-Flores et al. 2015) and it shows several interesting properties. It is the only region in the entire A+C complex that seems to be ionized not only by star formation; it also shows the highest electron density and H\(\alpha\) emission of the entire group (Alfaro-Cuello et al. 2015) and the highest metallicity of the system.
Normally, one expects to find low metallicities in HII regions with high H\(\alpha\) luminosities, because the starburst is likely fuelled by an influx of metal-poor gas (Sanchez-Almeida et al. 2014). However, we detected a high oxygen abundance of \(12+\log(\mathrm{O/H})=8.22\pm 0.19\), on average, for this region. According to Oey & Kennicutt (1993), systematic variations in nebular density can lead to differences in metallicities of up to 0.5 dex. This effect cannot be discarded for this knot, because it shows the highest electron density in the entire system, with \(n_{e}\sim 300\ \mathrm{cm^{-3}}\).
This knot is also the region with the highest oxygen abundance of the system, and it is well known that WR stars can contribute to enriching the interstellar medium (Perez-Montero et al. 2013, Kehrig et al. 2013). To confirm the existence of such stars we integrated the spectrum over a box of \(1^{\prime\prime}\times 1^{\prime\prime}\) centered at knot A. The resulting spectrum is presented in Figure 13. A characteristic red bump was observed. Krabbe et al. (2014) proposed that the stellar winds and mass loss of WR stars were the cause of one of the denser and metal-rich HII regions in their sample, a scenario very similar to the case of this knot.
The velocity dispersion of this knot is \(\sim\)55 km s\({}^{-1}\), which seems to be consistent with shock ionization (Rich et al. 2015). In this context, this velocity could be expected considering the location of this knot in the BPT diagram, which lies in the composite zone.
A good way to disentangle the ionization mechanism is by using 3D diagnostic diagrams (Kewley et al. 2019). Unfortunately, we could not apply this type of diagnostic to our data for two main reasons: one of the axes of the 3D plot requires the galactocentric radius, which is difficult to determine in a merging system such as HCG 31, and more reliable determinations of the velocity dispersion are needed to avoid overestimation due to the multiple components. High-resolution spectroscopic data are needed to rule out the presence of shocks.
### Unveiling the star formation history of a complex compact group.
Our new spectroscopic information about HCG 31 provides evidence that allows us to uncover the star formation history of this interacting system.
In order to determine the stellar populations in HCG 31 we ran FADO over the integrated spectra of each galaxy. Figure 14 shows the luminosity fraction at the normalization wavelength (5100 Å) for each of the SSPs fitted, which illustrates the star formation history of each galaxy. We show SFHs only for the galaxies with the best SNR in the continuum (A+C, A, and B). We also included the SFH of galaxy F1.
The SFH of the A+C complex revealed that the peak of star formation occurred from \(\sim\) 10\({}^{7}\) yrs to 10\({}^{8}\) yrs ago. We find that this age is consistent with the date of the first encounter, as reported by Johnson et al. (1999) (400 Myrs). In addition, the system experienced another starburst event between \(\sim\) 10\({}^{6}\) yrs and 10\({}^{7}\) yrs ago, which is completely consistent with the ages of the star forming bursts obtained in this work.
The SFH of galaxy A is very similar to that of the A+C complex, with an important underlying old stellar population at \(\sim\) 1 Gyr, another peak at 10\({}^{7}\) yrs to 10\({}^{8}\) yrs and the current star formation event at 10\({}^{6}\) yrs to 10\({}^{7}\) yrs ago. It should be noted that the SFH of galaxy A represents only the SFH of the extended body that is seen in the optical images. It is not a representation of the SFH of the galaxy HCG 31 A as a whole. It is not possible to completely separate galaxies A and C because they probably overlap each other (Amram et al. 2007). For galaxy F1, we found a very interesting aspect in its SFH, with only two peaks of star formation, one at \(\sim\) 10\({}^{8}\) yrs to 10\({}^{9}\) yrs and the other at about \(\sim\) 10\({}^{6}\) yrs. We did not find evidence of a very old stellar population with ages \(>\) 1 Gyr. Our results are in very good agreement with the photometric ages of the SSCs found by Gallagher et al. (2010), who reported no evidence of an old population in this object.
In summary, all the galaxies of the group display an underlying stellar population of \(\sim\)1 Gyr old and are currently forming stars. The first encounter between HCG 31 A and HCG 31 C occurred about \(\sim\) 400 Myrs ago. Galaxy F1 shows a bimodal age distribution, with intermediate and young stellar populations.
This type of analysis has also been done for other interacting systems. For example, Buzzo et al. (2021) studied the stellar populations of the merging system NGC 1487. They found an age distribution very similar to ours for HCG 31, with a peak in the younger population which correlates with the current SF episode of the system and an intermediate population with ages of \(\sim\) 1-5 Gyrs.
### Is HCG 31F really a TDG candidate?
One of the most frequent questions in the literature about this system concerns the nature of object F. In optical images, it appears as a
Figure 14: Luminosity fraction at the normalization wavelength (5100 Å) as a function of age for galaxies HCG 31 A+C, HCG 31 A, HCG 31 B and HCG 31 F1. The colors light-blue, blue, light-green and orange represent metallicities of 0.02 Z\({}_{\odot}\), 0.2 Z\({}_{\odot}\), 0.4 Z\({}_{\odot}\) and 1 Z\({}_{\odot}\), respectively. The shaded area represents the Akima-smoothed (Akima, 1970) version of the populations. This representation provides an illustration of the star formation history of the different galaxies.
Figure 13: Fine structure of the integrated spectrum in a \(1^{\prime\prime}\times 1^{\prime\prime}\) box centered at knot A. The spectrum shows WR features, a clear red bump and the beginnings of a blue bump. The most intense emission lines are labeled.
bright tidal object located in the southern tail of the system. The photometry of this object was quite uncertain because of contamination from a nearby projected star. In addition, Hunsberger et al. (1996) did not consider the F galaxy as a TDG candidate, although they did consider the other five objects (E1, E2, A1, A2 and A3) as TDG candidates. It is probable that they did not obtain good photometry for F and discarded it from the analysis. Using H\(\alpha\) imaging, Iglesias-Paramo & Vilchez (2001) proposed that objects F1, F2 and F3 were the most likely TDG candidates in HCG 31, based on their H\(\alpha\) luminosities (\(>10^{39}\) ergs s\({}^{-1}\)) and their large projected distances from the parent galaxy. A TDG candidate needs to fulfill two main conditions (based on Weilbacher et al. 2003): (i) it is quite metallic for its mass (out of the MZR), and (ii) it shows independent kinematics decoupled from the tail. Regarding the first condition, many works studied the luminosity-metallicity relation for HCG 31 using the \(B\) and \(K\) filters (Richer et al. 2003, Lopez-Sanchez et al. 2004, Mendes de Oliveira et al. 2006). In all these works, objects F1 and F2 showed high metallicities for their luminosities. In Section 5.11 we noted that F1 and F2 seemed to be out of the main MZR of dwarf galaxies, and it is well established that object F has a tidal rather than a primordial origin. The H\(\alpha\) kinematics of member F is quite peculiar, because it shows no rotation along any axis (Amram et al. 2007), and our radial velocity map (Section 5.1) confirms that result. The HI kinematics of F (Verdes-Montenegro et al. 2005) shows that it is kinematically decoupled from the tail, with an axis of rotation that is perpendicular to the axis of the tail. Amram et al. (2007) proposed that this object is accreting material from the tail, but that scenario is difficult to prove, and simulations are needed to confirm it. In this work we detect high SFRs and high EW(H\(\alpha\)) (young ages) in member F, which is fully consistent with this scenario. In conclusion, the true nature of object F is not yet fully understood. We confirmed its tidal origin using the MZR relation, and it is very likely that the object is already decoupled from the tail based on its HI kinematics.
## 7 Summary
In this paper we performed a deep analysis of the Hickson Compact Group 31 using IFS data observed with MUSE. We used different maps to analyze the kinematics, ionization mechanisms, physical properties, chemical abundances, SFRs and ages of the system, among other quantities. Our most remarkable results are:
* The group shows a complex velocity field, with clear evidence of an ongoing merger process in the central region between the HCG 31 A and HCG 31 C galaxies.
* The central zone shows the highest velocity dispersions, with velocities up to \(\sim~{}90~{}km~{}s^{-1}\). These high velocities are spatially correlated with the merging zone of the system. A more detailed kinematic analysis, one that considers the resolved physical properties of the system, is needed to understand the origin of these high velocities.
* The electron density shows a peak in the central zone of the system, with \(n_{e}\sim 300\ \mathrm{cm^{-3}}\). This peak is associated with a very peculiar knot located in galaxy HCG 31 A, which probably hosts ionization produced by shocks.
* The main ionization mechanism throughout the whole group is star formation, with a small contribution from shocks only at the nucleus of galaxy A.
* The oxygen abundance is high for the mass of the galaxies, and it shows a flat distribution across the different members. This suggests gas mixing throughout the whole group, probably triggered by the merger.
* The star formation rate is high compared to the mass of the system. There are two simultaneous bursts of star formation: one in the central zone and one in galaxy F.
* There is a prominent population of carbon Wolf-Rayet stars in the central zone of the group. The presence of these stars is in perfect agreement with the ages obtained from the H\(\alpha\) equivalent width. We cannot estimate the population of WN stars.
* The Mass-Metallicity relation confirms the tidal origin of objects E, H and F.
* We reconstruct the star formation history of the youngest population in HCG 31. The ages obtained are in perfect agreement with the scenario of a first encounter \(\sim 400\) Myr ago and a current strong episode of star formation.
## 8 Acknowledgements
DGE and STF acknowledge the financial support of the Direccion de Investigacion of the Universidad de La Serena, through a 'Courso de Apoyo a Tesis 2019'. We warmly thank Mariane Girard for her preliminary analysis of the HCG 31 data in the frame of her master internship.
## 9 Data Availability
The data used in this work are available via the ESO science archive facility [http://archive.eso.org/scienceportal/home/](http://archive.eso.org/scienceportal/home/).
|
2310.07668 | GRaMuFeN: Graph-based Multi-modal Fake News Detection in Social Media | The proliferation of social media platforms such as Twitter, Instagram, and
Weibo has significantly enhanced the dissemination of false information. This
phenomenon grants both individuals and governmental entities the ability to
shape public opinions, highlighting the need for deploying effective detection
methods. In this paper, we propose GraMuFeN, a model designed to detect fake
content by analyzing both the textual and image content of news. GraMuFeN
comprises two primary components: a text encoder and an image encoder. For
textual analysis, GraMuFeN treats each text as a graph and employs a Graph
Convolutional Neural Network (GCN) as the text encoder. Additionally, the
pre-trained ResNet-152, as a Convolutional Neural Network (CNN), has been
utilized as the image encoder. By integrating the outputs from these two
encoders and implementing a contrastive similarity loss function, GraMuFeN
achieves remarkable results. Extensive evaluations conducted on two publicly
available benchmark datasets for social media news indicate a 10 % increase in
micro F1-Score, signifying improvement over existing state-of-the-art models.
These findings underscore the effectiveness of combining GCN and CNN models for
detecting fake news in multi-modal data, all while minimizing the additional
computational burden imposed by model parameters. | Makan Kananian, Fatima Badiei, S. AmirAli Gh. Ghahramani | 2023-10-11T17:17:40Z | http://arxiv.org/abs/2310.07668v1 | # GraMuFeN: Graph-based Multi-modal Fake News Detection in Social Media
###### Abstract
The proliferation of social media platforms such as Twitter, Instagram, and Weibo has significantly enhanced the dissemination of false information. This phenomenon grants both individuals and governmental entities the ability to shape public opinions, highlighting the need for deploying effective detection methods. In this paper, we propose GraMuFeN, a model designed to detect fake content by analyzing both the textual and image content of news. GraMuFeN comprises two primary components: a text encoder and an image encoder. For textual analysis, GraMuFeN treats each text as a graph and employs a Graph Convolutional Neural Network (GCN) as the text encoder. Additionally, the pre-trained ResNet-152, as a Convolutional Neural Network (CNN), has been utilized as the image encoder. By integrating the outputs from these two encoders and implementing a contrastive similarity loss function, GraMuFeN achieves remarkable results. Extensive evaluations conducted on two publicly available benchmark datasets for social media news indicate a 10 % increase in micro F1-Score, signifying improvement over existing state-of-the-art models. These findings underscore the effectiveness of combining GCN and CNN models for detecting fake news in multi-modal data, all while minimizing the additional computational burden imposed by model parameters.
Fake news detection · Social media · Graph Convolutional Networks · CNN · Graph Neural Networks · GCN · ResNet-152 · Multi-modal · Text classification · Image analysis · F1-score.
## 1 Introduction
In the current landscape of information dissemination, the proliferation of social media platforms, including Twitter, Instagram, Weibo, and others, has resulted in an era of interconnectedness and instantaneous communication. This digital revolution has not only facilitated global connectivity but has also inadvertently given rise to a formidable challenge - the rapid propagation of fake news Wessel et al. (2016).
In sharp contrast to the controlled environments of traditional media, social media's dynamic nature accelerates the spread of fake information, rendering the dissemination of fake news more pervasive than ever before. This concerning
phenomenon has led to the potential manipulation of public sentiments by both malicious entities and governmental actors Woolley and Howard (2018)Kalsnes (2018), thereby compromising the authenticity of the information.
Given this urgent situation, identifying and addressing fake news has become an important concern for researchers, experts, and decision-makers. While traditional methods for spotting fake news mostly focused on analyzing text, the diverse aspects of social media now require new approaches that can comprehend the complex interplay between text, images, and various forms of media.
In our research, we tackle the challenge of detecting fake news in a multi-modal context, where news articles comprise both textual and image-based content. To address this complex problem, we propose the Graph-based Multi-modal Fake News Detection model (GraMuFeN). Our approach involves the integration of text and image encoders, utilizing the power of three distinct neural network architectures: Long Short Term Memory (LSTM), Graph Convolutional Neural Network (GCN), and Convolutional Neural Network (CNN).
Within the Text Encoder component, each textual input is treated as both sequential data and a graph-like structure, enabling the combination of LSTM and GCN networks. This enables us to capture complex relationships within the textual content effectively. Simultaneously, the Image Encoder utilizes the pre-trained ResNet-152 He et al. (2016) as the CNN model, enabling the extraction of features from the visual elements contained within each news.
Subsequently, the feature vectors extracted by the Text and Image Encoders were combined to predict the label associated with each news by minimizing the cross entropy loss between the predictions and the ground-truth labels. At the same time and by employing a contrastive similarity loss, the GraMuFeN is trained to produce similar feature vectors for each text and its corresponding image. This dual training process enhances the model's ability to align text and image representations effectively while simultaneously improving its capacity for accurate label prediction. As a result, we achieved a 10 percent enhancement in the F1-score micro for multi-modal fake news detection on well-established benchmark datasets like Twitter Boididou et al. (2015).
The rest of this paper is organized as follows. In Section 2, we present a brief review of the methods for spotting fake news in social media. We formally define the problem that is going to be solved in this research in Section 3. Section 4 provides the details related to the Text Encoder, the Image Encoder, the contrastive similarity loss, and the classifier used in our method. We describe the characteristics of the datasets used in this paper in Section 5. Our extensive evaluations and their outcomes are summarized in Section 6. Finally, we conclude the paper in Section 7.
## 2 Literature Review
Following the guidance of key studies Ruchansky et al. (2017)Shu et al. (2017), we describe 'fake news' as stories that are intentionally made up and can be proven wrong with real facts. Identifying fake news is not easy because it involves analyzing different aspects of the story including the message it is trying to convey, its place in society, and the images used. Research into detecting fake news is a field that is constantly growing, with many different methods being developed to tackle it.
In the early stages of fake news detection, most research focused on using single-mode data, mainly text. _Wu et al._, used a special kind of system called a graph kernel-based hybrid support vector machine to understand how news spreads and to spot fake news Wu et al. (2015). Some researchers looked at the way language was used in tweets to figure out if the news was fake Shu et al. (2017), while others used structural and cognitive features to detect fake news on social networks Kwon et al. (2013). But these early methods had problems. They were only good for specific topics and relied too much on manually picking out features, which could lead to biased results Shu et al. (2017); Kwon et al. (2013).
Later on, researchers started using other kinds of data like images from social media to find fake news Ping Tian et al. (2013); Gupta et al. (2012). The use of deep learning models also started, improving the accuracy of fake news detection Jin et al. (2017). _Steinebach et al._, delve into the problem of fake images accompanying fake news Steinebach et al. (2019). They highlight that these manipulated images, often photo montages spliced from several images, are designed to make the fake news appear more authentic. To combat this, they developed a concept based on feature detection, indexing these features using a nearest-neighbor algorithm. This allows for the rapid comparison of a large number of images, identifying montages even if they have been manipulated in various ways.
In the paper written by _Meng et al._Meng et al. (2022), the authors propose a proactive strategy to counteract fake news using traceable and authenticable image tagging. This strategy leverages a Decoupled Invertible Neural Network (DINN) to embed dual tags into news images before publication. However, the approach is not without its limitations. One significant concern is its generalization to unseen manipulations; the current setup might struggle to handle new types of manipulations that were not included in the training dataset. This indicates a potential area for further development to enhance the robustness of the system against a broader array of manipulations. Even though now we
are moving towards using multiple types of data, these early single-type data methods helped build the base for today's more advanced fake news detection techniques.
As research progressed, it became clear that using different types of data together could make fake news detection more accurate. For example, some systems were designed to answer questions using deep learning networks Xi et al. (2020). Others combined text and images to better detect fake news Farajtabar et al. (2017).
A big step forward was the creation of EANN Wang et al. (2018). Unlike traditional methods that often rely on event-specific features, EANN is designed to identify fake news across various events, making it adaptable to new and unexpected news topics. It achieves this through a combination of multi-modal feature extraction and an event discriminator, ensuring both textual and visual content are analyzed while filtering out event-specific biases. This approach enhances its versatility in the ever-changing landscape of social media news. Following this, other powerful models like MVAE Khattar et al. (2019) emerged. MVAE is a model designed to understand and analyze both these types of content together. It works by creating a combined representation of the text and images in news articles. This combined understanding helps MVAE decide if an article is fake or real. We should also mention SpotFake Singhal et al. (2019), which is designed to detect fake news by looking at both the words and pictures in an article. It uses transformers like BERT for understanding the text and convolutional neural networks like VGG-19 for analyzing images.
New methods kept coming, such as CARMN Song et al. (2021). The technique that the CARMN model uses to look at both words and pictures in news stories is called "Crossmodal Attention Residual." This technique allows the model to focus on how words and pictures relate to each other. Additionally, this model uses "Multichannel Convolutional Neural Networks" to process and understand the information deeply. In simpler terms, "Crossmodal Attention Residual" helps CARMN pay special attention to the connections between text and images, while "Multichannel Convolutional Neural Networks" allow it to analyze the content from different perspectives or channels. This combination makes CARMN effective at detecting fake news.
A significant advancement was AMFB Kumari and Ekbal (2021) which stands out because it understands how image and text intertwine. The heart of AMFB lies in its "Attention-based multimodal Factorized Bilinear Pooling" technique. This means that AMFB meticulously focuses on the most crucial parts of the words and pictures, discerning which elements are vital for determining the authenticity of the news. It adeptly processes multiple types of information simultaneously, such as text and visuals. Furthermore, instead of merely merging the information from words and pictures, AMFB employs a sophisticated method of blending them, capturing their complex relationship. This approach empowers AMFB to discriminate between real and fake news stories with high accuracy.
Another model called FNR Ghorbanpour et al. (2023) was developed, which evaluates how closely the content of a news story (both text and images) matches known genuine news. By gauging this similarity, it gets an initial sense of the story's authenticity. FNR uses transformers (BERT and ViT, a vision transformer) to deeply analyze the content, capturing intricate patterns.
In the paper written by _Zhou et al._ (2020), the authors introduce the Similarity-Aware Fake news detection (SAFE) method. The technique focuses on the identification of mismatches between text and images in news content, utilizing a multi-modal neural network to analyze the relationship between textual and visual elements and identify inconsistencies that are indicative of fake news.
### Comparison with Existing Methods
In this study, we present GraMuFeN, a model that leverages the power of Graph Convolutional Networks for text and ResNet-152 for images.
Table 1 compares our proposed model with existing state-of-the-art models, including EANN, MVAE, SpotFake, CARMN, AMFB, and FNR, in terms of the type of text encoder, image encoder and the fusion used to combine textual and image features.
## 3 Problem statement
Both of our datasets (Twitter and Weibo) contain texts posted by users, each accompanied by one or several corresponding images. Each text \(T\) has an image \(T_{I}\) and a label \(T_{L}\). For the same event with different texts, the image can be the same, especially in the Twitter dataset. Our model aims to use \(T\) and \(T_{I}\) to predict \(T_{L}\), i.e., whether the news is Fake or Real. In the next section, we delve deeper into our model.
## 4 Proposed Method
The architecture of GraMuFeN consists of three main components. The first component, the Text Encoder, employs a sequence of multiple SageConv layers to generate embeddings for the provided textual content. The second component, the Image Encoder, employs the pre-trained ResNet-152 model to create embeddings corresponding to the images accompanying each textual instance. Ultimately, the text and image embeddings are combined and given to the third component, the classifier, which is responsible for predicting the label of the news item. At the same time, the text and image embeddings are used to compute a supervised contrastive similarity loss Khosla et al. (2020). In Figure 1, the architecture of our model is presented. In the subsequent sections, we explain each of the mentioned components in detail.
### The Text Encoder
The initial phase entails the transformation of each text (or sentence) into a graph. For this purpose, each word within a sentence constitutes an individual node, including duplicate words. The establishment of connections between nodes is achieved through a fixed-size context window. An example is provided in Figure 2, where the context window size is 2. This context window slides through the sentence, forging links between nodes corresponding to words co-occurring within the window. Thus, the underlying graph structure for each sentence is derived. Moreover, to account for the requirements of the GCN layers that will be applied later, we add a self-loop for each node of the resulting graph. This procedure is carried out by the _SentenceToGraph_ module, as shown in Figure 1. It is noteworthy, however, that at this point the nodes' features within the graph remain unspecified.
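A minimal sketch of this _SentenceToGraph_ step is given below, assuming a `torch_geometric` `Data` object as the output container; the function name and the exact window handling are illustrative, not taken from the released implementation.

```python
import torch
from torch_geometric.data import Data

def sentence_to_graph(token_ids, window_size=2):
    """One node per word (duplicates kept); edges link words that co-occur
    inside the sliding context window; self-loops are added for the GCN layers."""
    num_nodes = len(token_ids)
    src, dst = [], []
    for i in range(num_nodes):
        src.append(i)
        dst.append(i)                                      # self-loop
        for j in range(i + 1, min(i + window_size, num_nodes)):
            src.extend([i, j])                             # undirected edge inside the window
            dst.extend([j, i])
    edge_index = torch.tensor([src, dst], dtype=torch.long)
    x = torch.tensor(token_ids, dtype=torch.long)          # word indices; real node features come from the LSTM later
    return Data(x=x, edge_index=edge_index)

# e.g. a four-word sentence with a context window of two:
# sentence_to_graph([12, 7, 7, 55]).edge_index
```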
Following the creation of a distinct graph corresponding to each text (or sentence), the text is fed to an embedding layer. The embedding layer's output is then fed to an LSTM network creating hidden states for each word in the sentence.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Method** & **Text Encoder** & **Image Encoder** & **Fusion Type** \\ \hline EANN & Text-CNN & VGG19 & Concat \\ Wang et al. (2018) & & & \\ \hline MVAE & \multirow{2}{*}{BiLSTM} & \multirow{2}{*}{VGG19} & \multirow{2}{*}{auto-encoder} \\ Khattar et al.(2019) & & & \\ \hline SpotFake & \multirow{2}{*}{BERT} & \multirow{2}{*}{VGG19} & \multirow{2}{*}{Concat} \\ Singhal et al.(2019) & & & \\ \hline CARMN & Word level & \multirow{2}{*}{VGG19} & \multirow{2}{*}{Concat +} \\ Song et al.(2021) & sentence embeddings & & \\ \hline AMFB & Attention-based & Attention-based & Element wise \\ Kumari and Ekbal (2021) & BiLSTM & CNN-RNN & multiplication \\ \hline FNR & BERT & Visual & Concat + \\ Ghorbanpour et al.(2023) & & transformer (ViT) & Similarity \\ \hline GraMuFeN (Our Model) & GCN-LSTM & CNN & Concat + \\ & & (resnet-152) & Similarity \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of GraMuFeN with state-of-the-art models
Figure 1: Architecture of GraMuFeN
At this time, these hidden states serve as the feature vectors for the respective nodes in the corresponding graph. So far, we've successfully transformed each sentence into a graph, where the words in the sentence become nodes in the graph, and the LSTM-generated hidden states become the nodes' features. Next, we input this graph into our GCN (Graph Convolutional Neural Network) layers. Specifically, we utilize SageConv Hamilton et al. (2017) as GCN layers in GraMuFeN. Our model consists of three GCN layers, each incorporating a ReLU activation function to introduce non-linearity, enabling us to capture greater complexity.
To aggregate information from the GCN layers, we employ Global Mean Pooling, an aggregation technique that calculates the mean (average) of nodes' feature vectors for each graph, as follows:
\[\text{Global Mean Pooling}(X)=\frac{1}{N}\sum_{i=1}^{N}X_{i} \tag{1}\]
Here, \(X\) represents the feature vectors of all nodes in a graph, \(N\) is the number of nodes in the graph, and \(X_{i}\) represents the feature vector of the \(i\)-th node. The result is a single feature vector representing the aggregated information for each graph.
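Putting these steps together, a minimal sketch of the Text Encoder forward pass (embedding, LSTM, three SAGEConv layers, global mean pooling) could look as follows; the layer sizes follow Table 5 for the Twitter configuration, and the module processes one sentence graph at a time for clarity rather than mirroring the released batching code.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import SAGEConv, global_mean_pool

class TextEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, lstm_hidden=256, gcn_hidden=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, lstm_hidden, num_layers=3,
                            dropout=0.3, batch_first=True)
        self.convs = nn.ModuleList(
            [SAGEConv(lstm_hidden, gcn_hidden, normalize=True)] +
            [SAGEConv(gcn_hidden, gcn_hidden, normalize=True) for _ in range(2)])

    def forward(self, data):                      # data: one sentence graph
        h = self.embedding(data.x).unsqueeze(0)   # (1, num_words, emb_dim)
        h, _ = self.lstm(h)                       # LSTM hidden state per word
        h = h.squeeze(0)                          # node features of the word graph
        for conv in self.convs:
            h = torch.relu(conv(h, data.edge_index))
        batch = torch.zeros(h.size(0), dtype=torch.long, device=h.device)
        return global_mean_pool(h, batch)         # (1, gcn_hidden) sentence embedding
```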
After obtaining the embeddings from the GCN, we employ a projection head Ghorbanpour et al. (2023) to map these embeddings to a new representational space:
\[z_{text}=\omega_{2}\times(\text{\emph{gelu}}(\omega_{1}\times X+b_{1}))+( \omega_{1}\times X+b_{1})+b_{2} \tag{2}\]
Where \(w_{1}\), \(w_{2}\), \(b_{1}\) and \(b_{2}\) are weights and biases of linear layers inside the text projector and \(X\) is the output of the Global Mean pool. The structure of this projection head is illustrated in Figure 3. \(z_{text}\) is the final embedding for texts with the shape of \((B,e_{text})\) where \(B\) is the batch size and \(e_{text}\) is the size of the final embedding in the text projection head.
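A minimal sketch of the projection head of Eq. (2) is shown below; it is a direct transcription of the equation (a linear projection, a GELU-activated second linear layer, and a residual connection back to the projection), with the layer width left as an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    def __init__(self, in_dim, proj_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, proj_dim)   # w1, b1
        self.fc = nn.Linear(proj_dim, proj_dim)   # w2, b2

    def forward(self, x):
        p = self.proj(x)                          # w1 x + b1
        return self.fc(F.gelu(p)) + p             # w2 gelu(w1 x + b1) + b2, plus residual
```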
### Image Encoder
As the Image Encoder, we employ the pre-trained ResNet-152 network. This network takes batches of images in the shape of (B, C, W, H) representing the batch size, number of image channels, width, and height of input images. ResNet-152 network projects the inputs into vectors of shape (B, 2048), where B is the batch size and 2048 is the size of the last linear layer in ResNet-152. Similar to the Text Encoder, we employ a projection head to map the outputs of
Figure 3: Projection Head
Figure 2: Converting Text to Graph
the Image Encoder into a new space:
\[z_{img}=\omega_{4}\times(\textit{gelu}(\omega_{3}\times V+b_{3}))+(\omega_{3} \times V+b_{3})+b_{4} \tag{3}\]
Where \(w_{3}\), \(w_{4}\), \(b_{3}\) and \(b_{4}\) are weights and biases of linear layers inside the image projector. The structure of the image projection head is identical to that of the Text Encoder as illustrated in Figure 3. After applying the image projector, we obtain \(z_{img}\), which is the final embedding for images with the shape of \((B,e_{img})\) where \(B\) is the batch size and \(e_{img}\) is the size of the final embedding in the image projection head.
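A minimal sketch of the Image Encoder is given below, reusing the `ProjectionHead` module sketched for the Text Encoder; the pre-trained ResNet-152 backbone comes from torchvision, and replacing its final classification layer with an identity (an assumption about the implementation) exposes the 2048-dimensional pooled features described above.

```python
import torch.nn as nn
from torchvision import models

class ImageEncoder(nn.Module):
    def __init__(self, proj_dim=128):
        super().__init__()
        self.backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
        self.backbone.fc = nn.Identity()                 # expose the 2048-d pooled features
        self.projector = ProjectionHead(2048, proj_dim)  # as sketched in Section 4.1

    def forward(self, images):                           # images: (B, C, H, W)
        return self.projector(self.backbone(images))     # (B, proj_dim)
```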
### Similarity Loss
To train the proposed model, we use a supervised contrastive similarity loss Khosla et al. (2020). As explained previously, \(z_{img}\) and \(z_{text}\) are the final embeddings for images and texts, with shapes \((B,e_{img})\) and \((B,e_{text})\) respectively, where \(e_{img}\) and \(e_{text}\) are the embedding sizes for images and texts. To obtain the level of similarity between these embeddings, their inner product is calculated:
\[P=z_{text}z_{img}^{T} \tag{4}\]
Here, we consider \(P\) as the prediction matrix with the shape of \((B,B)\).
The contrastive loss function tries to maximize the similarity of each text and image to itself. To this aim and by means of inner product, we define the expected matrix as the average similarity of text-to-text and image-to-image based on the following formula Radford et al. (2021); Salama (2021):
\[E=\textit{softmax}(\frac{z_{img}z_{img}^{T}+z_{text}z_{text}^{T}}{2}) \tag{5}\]
After calculating the \(E\) matrix, we use cross-entropy to find the similarity loss. The contrastive similarity loss is the average of the text similarity loss \(l_{text}\), and the image similarity loss \(l_{img}\)Radford et al. (2021):
\[l_{text}=-(E*\textit{log}(P)+(1-E)*\textit{log}(1-P)) \tag{6}\] \[l_{img}=-(E^{T}*\textit{log}(P^{T})+(1-E^{T})*\textit{log}(1-P^{T})) \tag{7}\] \[l_{s}=\frac{l_{text}+l_{img}}{2} \tag{8}\]
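A minimal sketch of this similarity loss is given below, following the CLIP-style formulation it builds on (Radford et al. 2021; Salama 2021): the text-image inner products of Eq. (4) are scored against the soft targets of Eq. (5) with a soft-target cross-entropy, once per modality, and the two directions are averaged as in Eq. (8).

```python
import torch.nn.functional as F

def similarity_loss(z_text, z_img):
    # P: cross-modal prediction matrix of shape (B, B), Eq. (4)
    logits = z_text @ z_img.T
    # E: expected similarity built from the intra-modal products, Eq. (5)
    targets = F.softmax((z_img @ z_img.T + z_text @ z_text.T) / 2, dim=-1)
    # soft-target cross-entropy, computed once per modality, Eqs. (6)-(7)
    loss_text = (-targets * F.log_softmax(logits, dim=-1)).sum(dim=-1)
    loss_img = (-targets.T * F.log_softmax(logits.T, dim=-1)).sum(dim=-1)
    # average of the two directions, Eq. (8)
    return ((loss_text + loss_img) / 2).mean()
```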
### The Classifier
We concatenate the final text (\(z_{text}\)) and image (\(z_{img}\)) embeddings to create the combined embedding vector as follows Ghorbanpour et al. (2023):
\[z_{\text{combined}}=\text{Concat}(z_{\text{text}},z_{\text{img}}) \tag{9}\]
This embedding vector is then passed through two linear layers for fake news classification. After this linear mapping, we obtain a vector of size (B, 2) corresponding to the two classes. According to Figure 4, and assuming \(w_{5}\), \(w_{6}\), \(b_{5}\) and \(b_{6}\) are the weights and biases of the linear layers inside the classifier, the classifier output (\(Z\)) can be defined as follows:
\[Z=\text{softmax}\left(\omega_{6}\times(\text{gelu}\left(\omega_{5}\times z_{ \text{combined}}+b_{5}\right)+b_{6})\right) \tag{10}\]
After passing the vector through the softmax function, the classification loss can be stated as follows:
Figure 4: Classifier
\[l_{c}=-(L\times\textit{log}(Z)+(1-L)\times\textit{log}(1-Z)) \tag{11}\]
The final total loss is obtained by combining the classification loss and the similarity loss as follows:
\[Loss=l_{c}+l_{s} \tag{12}\]
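A minimal sketch of the classifier and the total training objective of Eqs. (9)-(12) follows; the hidden width is an assumption, the softmax of Eq. (10) is folded into the cross-entropy call, and `similarity_loss` refers to the sketch in the previous subsection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, text_dim, img_dim, hidden_dim=128, num_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(text_dim + img_dim, hidden_dim)   # w5, b5
        self.fc2 = nn.Linear(hidden_dim, num_classes)          # w6, b6

    def forward(self, z_text, z_img):
        z_combined = torch.cat([z_text, z_img], dim=-1)        # Eq. (9)
        return self.fc2(F.gelu(self.fc1(z_combined)))          # logits of Eq. (10)

def total_loss(logits, labels, z_text, z_img):
    l_c = F.cross_entropy(logits, labels)                      # Eq. (11), softmax folded in
    l_s = similarity_loss(z_text, z_img)                       # Eq. (8), sketched above
    return l_c + l_s                                           # Eq. (12)
```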
## 5 Datasets
In this section, we elaborate on datasets that we used to benchmark our model, GraMuFeN. To this aim, we first begin with a description of the datasets used in this study. Then we explain the pre-processing employed for handling text and image data.
For the sake of evaluation, we choose two major datasets in this study:
* Twitter dataset
* Weibo dataset
### Twitter Dataset
The Twitter dataset is from the MediaEval Verifying Multimedia Use benchmark Boididou et al. (2015), which is used for detecting fake content on Twitter. This dataset has two parts: the train set and the test set. The tweets in this dataset contain text content, attached images/videos, and additional social context information. In this work, we focus on detecting fake news by incorporating both text and image information. Thus, we remove the tweets without any text or image. We translated the whole dataset into the English language using the Google Translate package. The two sets have no overlapping events between them. We further split the training dataset into training and validation sets, with the validation set comprising _20 percent_ of our training dataset. The important characteristics of this dataset are reported in Table 2.
In previous studies, authors who worked with this Twitter dataset, like us, performed several pre-processing steps, including the removal of links, duplicates, and more. Upon examining the dataset, we discovered that there are even more duplicates when tweets containing retweets (indicated by "RT" or "rt" in their text) and mentions of other users (e.g., "@sample-user") are removed. Previous works reported that the Twitter dataset comprises 12,237 tweets. However, after thorough pre-processing and the removal of all duplicates, this number was reduced to 11,817 tweets.
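A minimal sketch of this cleaning step is shown below; the column names and the regular expressions are illustrative assumptions rather than the released pipeline.

```python
import re
import pandas as pd

def clean_tweet(text: str) -> str:
    text = re.sub(r"http\S+|www\.\S+", " ", text)              # strip links
    text = re.sub(r"\brt\b", " ", text, flags=re.IGNORECASE)   # strip retweet markers
    text = re.sub(r"@\w+", " ", text)                          # strip user mentions
    return re.sub(r"\s+", " ", text).strip()

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["text", "image"])                   # keep tweets with both text and image
    df = df.assign(text=df["text"].map(clean_tweet))
    return df.drop_duplicates(subset=["text", "image"])        # remove the remaining duplicates
```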
We also analyzed the length of tweets in the Twitter dataset. The results are illustrated in Figure 5. Notably, the length of tweets in the Twitter dataset is shorter compared to the Weibo dataset with an average length of 10 words.
### Weibo Dataset
This dataset Jin et al. (2017) was collected from the Weibo social media platform from 2012 to 2016 and is written in the Chinese language. Each entry in this dataset also contains text, user information, and an image. The texts have been labeled by the platform's authentication system. This database is divided by Wang et al. (2018)1 into three sets of train, validation, and test data, such that the news events of each set are different. We utilized the same data splits. We translated the whole dataset into the English language using the Google Translate package. Only the data with both text and images are used. Moreover, in the Weibo dataset, each text (tweet) was linked with a different number of images. In this study, we considered only the first image linked to each text. Table 3 lists the number of train/test data and fake/real data of the Weibo dataset.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Dataset** & **Label** & **Count** \\ \hline \multirow{2}{*}{Train} & Fake & 6649 \\ \cline{2-3} & Real & 4599 \\ \hline \multirow{2}{*}{Test} & Fake & 545 \\ \cline{2-3} & Real & 444 \\ \hline \multicolumn{2}{|c|}{All Data} & 12237 \\ \hline \end{tabular}
\end{table}
Table 2: Twitter Dataset Information
We also analyzed the length of tweets in the Weibo dataset. The results are presented in Figure 6. As evident, the sequence length in the Weibo dataset is significantly greater in comparison to the Twitter dataset.
## 6 Results
We utilized PyTorch version 2.0.1 with CUDA 11.8 (cu118) support and Torch Geometrics for the development and training of our proposed model. The experiments were conducted on the Google Colab Pro platform, leveraging a T4 GPU with approximately 15GB of VRAM, 16GB of RAM, and 30GB of storage.
Figure 5: Tweet Length Distribution in Twitter Dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Dataset** & **Label** & **Count** \\ \hline \multirow{2}{*}{Train} & Fake & 3748 \\ \cline{2-3} & Real & 3758 \\ \hline \multirow{2}{*}{Test} & Fake & 999 \\ \cline{2-3} & Real & 995 \\ \hline \multicolumn{2}{|c|}{All Data} & 9500 \\ \hline \end{tabular}
\end{table}
Table 3: Weibo Dataset Information
Figure 6: Tweet length Distribution in Weibo Dataset
### Results in Twitter Dataset
For the Twitter dataset, we first pre-train the text encoder and image encoder separately on texts and images, respectively. Then we combine the pre-trained encoders resulting in the combined model and we start to finetune it. In the following sections, we elaborate on these training procedures.
#### 6.1.1 Pre-training Text Encoder
We trained the Text Encoder separately on the text data in the Twitter dataset. We employed a pre-trained embedding layer, specifically the Google News Word2Vec embedding Mikolov et al. (2013). This embedding layer has been trained on extensive text corpora and provides embeddings for 3,000,000 words, each consisting of a 300-dimensional feature vector. It's worth noting that we have frozen all the pre-trained embeddings. Consequently, this layer generates a 300-dimensional feature vector for each word. The necessary information for the pretraining procedure is reported in Table 4.
Furthermore, we incorporated a learning rate scheduler, _ReduceLROnPlateau_, with a factor of 0.8 and a patience parameter of 3. Additionally, to enhance training stability, we implemented gradient clipping, capping the gradients at a maximum norm of 1. We utilized Optuna for hyperparameter tuning.
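A minimal sketch of this pre-training loop (AdamW, _ReduceLROnPlateau_, gradient clipping) with the hyper-parameters of Table 4 is given below; the model, data loader and the validation routine passed as `validate` are assumed to exist and are not part of the released code.

```python
import torch

def pretrain_text_encoder(model, train_loader, validate, epochs=30, lr=3e-3):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.8, patience=3)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for graphs, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(graphs), labels)
            loss.backward()
            # cap gradients at a maximum norm of 1 for stability
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
        scheduler.step(validate(model))   # plateau detection on the validation loss
```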
The configuration of the Text Encoder is presented in Table 5.
The performance of the pre-trained Text Encoder over the Twitter dataset is presented in Table 6. In this table, the proposed Text Encoder is named _GCNLSTM_, and the performance of other models is presented based on what is reported in Ghorbanpour et al. (2023).
#### 6.1.2 Pre-training Image Encoder
We also pre-trained our Image Encoder, Resnet-152, on Twitter images. The Resnet-152 has been trained on the Imagnet Dataset and has a powerful understanding of image embeddings.
Like our Text Encoder, we used a learning rate scheduler _ReduceOnPlateau_ with a factor of 0.7 and a patience parameter of 5. The necessary information for the pretraining procedure of the image encoder is reported in Table 7.
\begin{table}
\begin{tabular}{l|l} \hline
**Initial learning rate** & 3e-3 \\ \hline
**batch size** & 256 \\ \hline
**Epoch** & 30 \\ \hline
**Optimizer** & AdamW \\ \hline \end{tabular}
\end{table}
Table 4: The Text Encoder Pretraining for Twitter Dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**Text Encoder for Twitter**} \\ \hline
**Layers** & **Configs** & **Activation Function** \\ \hline \multirow{2}{*}{Embedding} & n vocabulary: 3,000,000 & \multirow{2}{*}{} \\ & output dim: 300 & \\ \hline \multirow{4}{*}{LSTM} & input dim: 300 & \multirow{4}{*}{} \\ & n Layers: 3 & \\ & Dropout = 0.3 & \\ & hidden dim: 256 & \\ \hline \multirow{4}{*}{Sageconv} & input dim: 256 & \multirow{4}{*}{} \\ & n layers: 3 & \\ \cline{1-1} \cline{3-3} & L2-normalized & \\ \cline{1-1} \cline{3-3} & hidden dim: 512 & \\ \hline \multirow{2}{*}{Global Mean pool} & input dim:512 & \multirow{2}{*}{} \\ & output dim:512 & \\ \hline Dropout & Percentage: 50\% & \\ \hline \multirow{2}{*}{Linear (Classifier)} & input dim: 512 & \multirow{2}{*}{} \\ & n Layers: 1 & \\ \cline{1-1} \cline{3-3} & output dim: 2 & \\ \hline \end{tabular}
\end{table}
Table 5: Twitter Text Encoder Configuration
The performance of the pre-trained Image Encoder over the Twitter Image dataset is presented in Table 8. In this table, our Image Encoder is named _ResNet-152_.
#### 6.1.3 Multi-modal Results
In this section, we report the results related to our final model (GraMuFeN), which combines the pre-trained Text and Image encoders as illustrated previously in Figure 1. It is important to note that the embedding layer of the Text Encoder has been frozen during the training procedure. Also, we incorporated a learning rate scheduler, ReduceLROnPlateau, with a factor of 0.9 and a patience parameter of 2. Additionally, to enhance training stability, we implemented gradient clipping, capping the gradients at a maximum norm of 1, plus Torch AMP autocast for mixed-precision training. The necessary information related to the training procedure of this model is reported in Table 9.
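A minimal sketch of one mixed-precision fine-tuning step (autocast with a gradient scaler, gradient clipping, and a frozen embedding layer) is shown below; attribute names such as `model.text_encoder.embedding` are assumptions about the module layout, and `total_loss` refers to the sketch of Eq. (12) in Section 4.4.

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, graphs, images, labels):
    model.text_encoder.embedding.weight.requires_grad_(False)  # keep the Word2Vec embeddings frozen
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda"):
        logits, z_text, z_img = model(graphs, images)
        loss = total_loss(logits, labels, z_text, z_img)        # combined objective of Eq. (12)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)                                  # clip in un-scaled gradient space
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```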
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{3}{c|}{Fake} & \multicolumn{3}{c|}{Real} \\ \cline{3-10} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{Accuracy} & \multicolumn{1}{c|}{F1-micro} & \multicolumn{1}{c|}{Precision} & \multicolumn{1}{c|}{Recall} & \multicolumn{1}{c|}{F1-score} & \multicolumn{1}{c|}{Precision} & \multicolumn{1}{c|}{Recall} & \multicolumn{1}{c|}{F1-score} \\ \hline \multirow{4}{*}{Text} & Logistic & \multirow{2}{*}{0.62} & \multirow{2}{*}{0.62} & \multirow{2}{*}{0.69} & \multirow{2}{*}{0.55} & \multirow{2}{*}{0.61} & \multirow{2}{*}{0.56} & \multirow{2}{*}{0.70} & \multirow{2}{*}{0.62} \\ & Regression & & & & & & & \\ \cline{1-1} \cline{2-10} & SVM & 061 & 0.61 & 0.68 & 0.55 & 0.61 & 0.55 & 0.68 & 0.62 \\ \cline{1-1} \cline{2-10} & BiLSTM & 0.61 & 0.60 & 0.62 & 0.73 & 0.67 & 0.58 & 0.45 & 0.51 \\ \cline{1-1} \cline{2-10} & Bert & 0.69 & 0.64 & 0.67 & 0.68 & 0.68 & 0.60 & 0.59 & 0.59 \\ \cline{1-1} \cline{2-10} & GCN & \multirow{2}{*}{**0.70**} & \multirow{2}{*}{**0.69**} & \multirow{2}{*}{**0.70**} & \multirow{2}{*}{**0.69**} & \multirow{2}{*}{**0.69**} & \multirow{2}{*}{**0.69**} & \multirow{2}{*}{**0.70**} & \multirow{2}{*}{**0.70**} \\ \cline{1-1} \cline{2-10} & LSTM & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 6: Pre-trained Text Encoder Performance on Twitter Dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Initial learning rate** & 1e-4 \\ \hline
**batch size** & 128 \\ \hline
**Epoch** & 10 \\ \hline
**Optimizer** & AdamW \\ \hline
**Weight Decay** & 0.07 \\ \hline \end{tabular}
\end{table}
Table 7: The Image Encoder Pretraining for Twitter Dataset
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{3}{c|}{Fake} & \multicolumn{3}{c|}{Real} \\ \cline{3-10} \multicolumn{2}{|c|}{} & Accuracy & F1-micro & Precision & Recall & F1-score & Precision & Recall & F1-score \\ \hline \multirow{4}{*}{Image} & CNN & 0.62 & 0.62 & 0.69 & 0.55 & 0.61 & 0.56 & 0.55 & 0.61 \\ \cline{2-10} & VGG19 & 0.68 & 0.68 & 0.74 & 0.64 & 0.69 & 0.62 & **0.72** & 0.67 \\ \cline{1-1} \cline{2-10} & ViT & 0.72 & **0.76** & 0.74 & 0.86 & 0.80 & **0.79** & 0.63 & **0.7** \\ \cline{1-1} \cline{2-10} & Resnet & \multirow{2}{*}{**0.74**} & \multirow{2}{*}{0.74} & \multirow{2}{*}{**0.74**} & \multirow{2}{*}{**0.74**} & \multirow{2}{*}{**0.91**} & \multirow{2}{*}{**0.82**} & \multirow{2}{*}{0.73} & \multirow{2}{*}{0.44} & \multirow{2}{*}{0.55} \\ \cline{1-1} \cline{2-10} & 152 & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 8: Pre-trained Image Encoder Performance on Twitter Dataset
In Figure 7 the training and validation learning curves are reported. As can be seen, the training procedure is smooth and stable.
We report the performance of our model (GraMuFeN) in Table 10. Results related to other models are presented based on reports in Ghorbanpour et al. (2023).
Regarding the findings presented in this table, the combination of the Text and Image Encoders has led to a noteworthy enhancement in accuracy, precision, recall, and the F1 score. Concerning accuracy, there is an improvement of 19 percent and 15 percent when compared to the Text Encoder (see Table 6) and the Image Encoder (see Table 8), respectively. The joint utilization of Text and Image Encoders yielded an F1-micro score of 0.89, whereas the F1-micro scores for the Text Encoder and Image Encoder were 0.69 and 0.74, respectively. A similar pattern emerges for precision, recall, and the F1-Score pertaining to both fake and real samples within the dataset.
When compared with other models in Table 10, GraMuFeN outperforms them across various performance metrics. Specifically, in regard to accuracy, our proposed model achieves an accuracy of 0.89, which is notably 10 percent higher than the most advanced state-of-the-art model (referred to as FNR-S). This same trend is evident for the F1-micro score as well.
For instances of fake samples, GraMuFeN attains a precision of 0.92, surpassing the leading model by a substantial 14 percent. However, in terms of recall, the SpotFake model takes the lead. Conversely, when it comes to the F1-Score, GraMuFeN again emerges as the frontrunner, showcasing an improvement of over 7 percent in comparison to the best existing model.
For the category of real samples, a parallel pattern unfolds. Our model enhances performance metrics compared to the prevailing state-of-the-art models. Notably, GraMuFeN achieves increases of 7 percent, 17 percent, and 14 percent in terms of precision, recall, and F1-score, respectively, when matched against the existing state-of-the-art models.
| Model | Accuracy | F1-micro | Precision (Fake) | Recall (Fake) | F1-score (Fake) | Precision (Real) | Recall (Real) | F1-score (Real) |
|---|---|---|---|---|---|---|---|---|
| EANN | 0.69 | 0.69 | 0.75 | 0.58 | 0.65 | 0.62 | 0.76 | 0.69 |
| MVAE | 0.67 | 0.67 | 0.70 | 0.69 | 0.69 | 0.63 | 0.64 | 0.63 |
| SpotFake | 0.77 | 0.76 | 0.72 | **0.92** | 0.81 | 0.85 | 0.56 | 0.68 |
| CARMN | 0.73 | 0.73 | 0.70 | 0.88 | 0.78 | 0.78 | 0.54 | 0.64 |
| AMFD | 0.75 | 0.75 | 0.76 | 0.79 | 0.78 | 0.73 | 0.70 | 0.71 |
| FNR-S | 0.79 | 0.79 | 0.78 | 0.85 | 0.82 | 0.79 | 0.71 | 0.75 |
| GraMuFeN | **0.89** | **0.89** | **0.92** | 0.85 | **0.89** | **0.86** | **0.93** | **0.89** |

Table 10: Performance of the GraMuFeN on Twitter Dataset
Figure 7: Training and Validation loss in Twitter Dataset
### Results in Weibo Dataset
In the case of the Weibo dataset, we began by pre-training the image encoder. In contrast to the Twitter dataset, we did not employ pre-trained embeddings for our Text Encoder. Instead, we allowed the embedding layer in the Text Encoder to train concurrently with our projection heads and classifier during our multi-modal training phase. We will thoroughly discuss the reasons for choosing this approach for the Weibo dataset. In the following sections, we will provide detailed explanations of these training procedures.
#### 6.2.1 Text Encoder Results
In our approach for the Weibo dataset, we deliberately chose not to use pre-trained embeddings in the Text Encoder. The reason behind this decision is that, unlike the Twitter dataset, the Weibo dataset contains longer (as illustrated in Figure 6) and more meaningful sentences with a greater variety of words. As such, we decided to train our own embedding layer on the Weibo dataset from scratch.
To showcase the performance of the Text Encoder, we trained it on the Weibo text dataset. The configuration of the Text Encoder is presented in Table 12; we utilized Optuna for hyperparameter tuning. The necessary information for training our Text Encoder is provided in Table 11.
Furthermore, we incorporated a learning rate scheduler, _ReduceLROnPlateau_, with a factor of 0.8 and a patience parameter of 3. Additionally, to enhance training stability, we implemented gradient clipping, capping the gradients at a maximum norm of 1.
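For concreteness, a minimal PyTorch sketch of this training setup (plateau scheduler plus gradient clipping) is shown below; the model, data loader, validation-loss callback and learning rate are placeholders rather than the exact GraMuFeN code, whose actual hyperparameters are those listed in Table 11.

```python
import torch
from torch import nn

def train_with_plateau_scheduler(model: nn.Module, train_loader, val_loss_fn,
                                 epochs: int = 20, lr: float = 5e-3):
    """Training-loop sketch: AdamW, ReduceLROnPlateau(factor=0.8, patience=3)
    and gradient clipping at a maximum norm of 1, as described in the text."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.8, patience=3)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            # Cap gradients at a maximum norm of 1 for training stability.
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
        # Reduce the learning rate once the monitored validation loss plateaus.
        scheduler.step(val_loss_fn(model))
    return model
```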
The performance of our Text Encoder over the Weibo dataset is presented in Table 13. In this table, the proposed Text Encoder is named GCNLSTM.
#### 6.2.2 Pre-training Image Encoder
We pre-trained our Image Encoder, **ResNet-152**, on Weibo images. As with our Text Encoder, we used the _ReduceLROnPlateau_ learning rate scheduler with a factor of 0.8 and a patience parameter of 5. Additionally, to enhance training stability, we implemented gradient clipping, capping the gradients at a maximum norm of 1. The necessary information for the pretraining procedure of the image encoder is reported in Table 14.
| Layer | Configuration |
|---|---|
| Embedding | vocabulary size: 8,991; output dim: 16 |
| Dropout | 50% |
| LSTM | input dim: 16; layers: 2; hidden dim: 32 |
| SAGEConv | input dim: 32; layers: 3; hidden dim: 32 |
| Global mean pooling | input dim: 32; output dim: 32 |
| Dropout | 50% |
| Linear (classifier) | input dim: 32; layers: 1 |

Table 12: Weibo Text Encoder Configuration
The performance of the pre-trained Image Encoder over the Weibo Image dataset is presented in Table 15. In this table, our Image Encoder is named _ResNet-152_.
#### 6.2.3 Multi-modal Results
In this section, we report results for our final model (GraMuFeN), which combines the Text Encoder and the pre-trained Image Encoder as illustrated previously in Figure 1. It is important to note that the text encoder is not pre-trained; it is trained from the ground up during this procedure. We again incorporated a learning rate scheduler, ReduceLROnPlateau, with a factor of 0.9 and a patience parameter of 2, together with Torch AMP autocast for mixed-precision training. Additionally, to enhance training stability, we implemented gradient clipping, capping the gradients at a maximum norm of 1. The necessary information related to the training procedure of this model is reported in Table 16.
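As an illustration of this setup, the sketch below shows one mixed-precision training step with gradient clipping; the forward signature `model(texts, images)` and the surrounding objects are placeholders, not the exact GraMuFeN interface.

```python
import torch
from torch import nn

def multimodal_train_step(model: nn.Module, optimizer, scaler, texts, images, labels):
    """One training step with Torch AMP autocast + GradScaler and gradient
    clipping at a maximum norm of 1, mirroring the settings described above."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        logits = model(texts, images)      # placeholder multimodal forward pass
        loss = criterion(logits, labels)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)             # unscale gradients before clipping
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# scaler = torch.cuda.amp.GradScaler()
# scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.9, patience=2)
```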
Figure 8 reports the training and validation learning curves; the training procedure is smooth and stable. We report the performance of our proposed model in Table 17.
When compared to the other models presented in Table 17, our proposed approach outperforms all of them except FNR-S, for which it achieves the same results across various performance metrics.
| Setting | Value |
|---|---|
| Batch size | 30 |
| Epochs | 20 |
| Optimizer | AdamW |
| Weight decay (Image Encoder) | 0.07 |
| Weight decay (Image Projection Head) | 0.07 |
| Learning rate (Text Encoder) | 5e-3 |
| Learning rate (Text Projection Head) | 5e-3 |
| Learning rate (Image Encoder) | 1e-7 |
| Learning rate (Image Projection Head) | 5e-3 |
| Learning rate (Classifier) | 5e-3 |

Table 16: The Model Training Setting for Weibo Dataset
| Model | Accuracy | F1-micro | Precision (Fake) | Recall (Fake) | F1-score (Fake) | Precision (Real) | Recall (Real) | F1-score (Real) |
|---|---|---|---|---|---|---|---|---|
| Logistic | 0.71 | 0.71 | 0.71 | 0.80 | 0.75 | 0.71 | 0.59 | 0.65 |
| SVM | 0.70 | 0.70 | 0.72 | 0.74 | 0.73 | 0.67 | 0.65 | 0.66 |
| BiLSTM | 0.66 | 0.66 | 0.62 | 0.78 | 0.69 | 0.73 | 0.55 | 0.63 |
| Bert | **0.81** | **0.81** | **0.81** | **0.81** | **0.81** | 0.69 | **0.82** | **0.81** |
| GCNLSTM | 0.73 | 0.73 | 0.73 | 0.76 | 0.75 | **0.74** | 0.70 | 0.72 |

Table 13: Text Encoder Performance on Weibo Dataset
| Setting | Value |
|---|---|
| Initial learning rate | 1e-4 |
| Batch size | 64 |
| Epochs | 30 |
| Optimizer | AdamW |

Table 14: The Image Encoder Pretraining for Weibo Dataset
| Model | Accuracy | F1-micro | Precision (Fake) | Recall (Fake) | F1-score (Fake) | Precision (Real) | Recall (Real) | F1-score (Real) |
|---|---|---|---|---|---|---|---|---|
| CNN | 0.52 | 0.50 | **0.79** | 0.24 | 0.38 | 0.58 | 0.55 | 0.63 |
| VGG19 | 0.60 | 0.60 | 0.60 | 0.61 | 0.60 | 0.60 | **0.72** | 0.59 |
| ViT | 0.68 | 0.68 | 0.67 | 0.69 | 0.68 | **0.79** | 0.68 | 0.67 |
| ResNet-152 | **0.71** | **0.71** | 0.72 | **0.72** | **0.72** | 0.70 | 0.70 | **0.70** |

Table 15: Pre-trained Image Encoder Performance on Weibo Dataset
We attribute the primary challenge encountered during the training of our model on the Weibo dataset to the length of its texts. Our proposed model outperforms most state-of-the-art models except FNR-S, for which we achieved the same results. There are several contributing factors. Notably, we did not impose a specific word limit on the text, whereas in Ghorbanpour et al. (2023), texts were limited to 32 words on Twitter and 200 characters on Weibo. Overall, the results are on par. The slight score difference where we fell short could also be attributed to translation errors: the Weibo dataset is in Chinese, and the translation method, as well as the data pre-processing before and after translation, can impact results significantly.
## 7 Conclusion and Future Works
In this paper, we introduced GraMuFeN, a model that leverages the capabilities of Graph Convolutional Neural Networks (GCN) and Long Short-Term Memory (LSTM) networks as the core of its text encoder, and ResNet-152 as its image encoder. Graph neural networks have demonstrated exemplary performance across Natural Language Processing (NLP) tasks, and our aim was to illustrate their potential in the domain of fake news detection. As evidenced by our results, we achieved commendable outcomes with a model that is considerably smaller than BERT and ViT. Our model was benchmarked on two datasets, Twitter and Weibo, which have been used in previous analogous works. As delineated in the results (Section 6), there is a noticeable enhancement in the F1 score and the overall performance of the model. We believe there is further scope for improvement: for example, incorporating more contextual features into the model, such as the current event, the circle of followers and followings, the verification status of the channel or user, and clustering of events based on time and trends, may further refine and improve its performance.
| Model | Accuracy | F1-micro | Precision (Fake) | Recall (Fake) | F1-score (Fake) | Precision (Real) | Recall (Real) | F1-score (Real) |
|---|---|---|---|---|---|---|---|---|
| EANN | 0.81 | 0.81 | 0.89 | 0.66 | 0.76 | 0.77 | 0.93 | 0.85 |
| MVAE | 0.79 | 0.79 | **0.89** | 0.65 | 0.75 | 0.74 | **0.93** | 0.82 |
| SpotFake | 0.86 | 0.86 | 0.87 | 0.92 | **0.90** | 0.81 | 0.70 | 0.75 |
| CARMN | 0.84 | 0.85 | 0.86 | **0.93** | 0.89 | 0.81 | 0.66 | 0.73 |
| AMFD | 0.83 | 0.83 | 0.86 | 0.90 | 0.88 | 0.75 | 0.68 | 0.71 |
| FNR-S | 0.87 | 0.87 | 0.87 | 0.89 | 0.88 | 0.88 | 0.87 | **0.88** |
| GraMuFeN | **0.87** | **0.87** | 0.86 | 0.89 | 0.87 | **0.88** | 0.84 | 0.86 |

Table 17: Performance of the GraMuFeN on Weibo Dataset
Figure 8: Training and Validation loss in Weibo Dataset |
2307.04976 | Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone
Images: a Few-shot Transfer Learning Approach with GAN | Large-scale numerical simulations ($\gtrsim 500\rm{Mpc}$) of cosmic
reionization are required to match the large survey volume of the upcoming
Square Kilometre Array (SKA). We present a multi-fidelity emulation technique
for generating large-scale lightcone images of cosmic reionization. We first
train generative adversarial networks (GAN) on small-scale simulations and
transfer that knowledge to large-scale simulations with hundreds of training
images. Our method achieves high accuracy in generating lightcone images, as
measured by various statistics with mostly percentage errors. This approach
saves computational resources by 90% compared to conventional training methods.
Our technique enables efficient and accurate emulation of large-scale images of
the Universe. | Kangning Diao, Yi Mao | 2023-07-11T02:33:34Z | http://arxiv.org/abs/2307.04976v1 | Multi-fidelity Emulator for Cosmological Large Scale 21 cm Lightcone Images: a Few-shot Transfer Learning Approach with GAN
###### Abstract
Large-scale numerical simulations (\(\gtrsim 500\mathrm{Mpc}\)) of cosmic reionization are required to match the large survey volume of the upcoming Square Kilometre Array (SKA). We present a multi-fidelity emulation technique for generating large-scale lightcone images of cosmic reionization. We first train generative adversarial networks (GAN) on small-scale simulations and transfer that knowledge to large-scale simulations with hundreds of training images. Our method achieves high accuracy in generating lightcone images, as measured by various statistics with mostly percentage errors. This approach saves computational resources by 90% compared to conventional training methods. Our technique enables efficient and accurate emulation of large-scale images of the Universe.
## 1 Introduction
In preparation for the upcoming era of 21 cm cosmology, many models have been developed to extract information from observations. These models range from the semi-numerical simulation, e.g. 21cmFAST(Mesinger et al., 2011; Murray et al., 2020) to hydrodynamical radiation transfer simulation, e.g. THESAN(Kannan et al., 2021), with varying levels of accuracy and computational cost. In addition, different approaches have been applied to infer cosmological and astrophysical parameters, including the Markov Chain Monte Carlo (MCMC) code, e.g. 21CMMC (Greig and Mesinger, 2017) to the machine learning boosted simulation-based inference (e.g. Alsing et al., 2019; Zhao et al., 2022). However, parameter inference typically requires many forward simulations. Given the large field of view of the next-generation telescopes, large-scale simulations are required to fully exploit the information contained in the observations. However, these large-scale simulations are computationally expensive, which has inspired the development of emulators as an alternative.
Building emulators typically requires numerous training samples. For large-scale simulations, the cost of obtaining these training samples can be prohibitive in and of itself. To address this issue, the concept of multi-fidelity emulation (Kennedy and O'Hagan, 2000; Ho et al., 2021) has been proposed. This approach first uses low-cost (low-fidelity) simulations to create an emulator. The emulator is then calibrated with a small number of high-cost (high-fidelity) simulations, reducing the computational cost while still maintaining the output quality.
Here we choose GAN (Goodfellow et al., 2014; List and Lewis, 2020; Andrianomena et al., 2022) as our emulation model. GAN emulation has previously demonstrated the ability to produce high-quality samples. However, GAN training is known to suffer very often from mode collapse, especially with a dataset smaller than \(\sim 1000\) images. In the context of 21 cm lightcone emulation, this would typically require \(\gtrsim 1000\) expensive simulations which are sometimes impossibly costly. In this paper, we propose the few-shot transfer learning (e.g. Ojha et al., 2021) to train a faithful large-scale 21 cm lightcone image emulator with a limited number of simulations. Few-shot transfer learning allows us to learn a new task with a limited number of samples, which serves as the 'calibrating' procedure in multi-fidelity emulation. This multi-fidelity emulation allows us to significantly reduce the number of simulations required to train an accurate lightcone image emulator.
## 2 Methodology
Our approach involves a two-step process. First, we train our GAN with 120000 small-scale (size of \((2,64,512)\)) images. In the second step, we train our large-scale GAN on 320 large-scale (size of \((2,256,512)\)) images while preserving the diversity of GAN results. We will explain our approach in detail in the following.
**StyleGAN 2**: The GAN architecture used in this work is StyleGAN 2 (Karras et al., 2020). Our generator \(G\) consists of two parts: First, a mapping network \(f\) takes the astrophysical parameter \(\mathbf{c}\) and a random vector \(\mathbf{z}\) and returns a style vector \(\mathbf{w}\). Second, a synthesis network \(g\) uses the style vector \(\mathbf{w}\) to shift the weights in convolution kernels, and Gaussian random noise is injected into the feature map right after each convolution to provide variations in different scales of features. Our discriminator \(D\) has a ResNet (He et al., 2015)-like architecture.
**Cross-Domain Correspondence (CDC)**: Assuming we have a good small-scale StyleGAN emulator, we expand the size of the generator's first layer, resulting in a final output size of \((2,256,512)\).
Next, we retrain our GAN with large-scale images. We first employ the patchy-level discriminator and cross-domain correspondence as described in Ojha et al. (2021). We mark the small-scale GAN as our source model \(G_{s}\) and the large-scale GAN as the target model \(G_{t}\). First, we use the same batch of vectors \((\mathbf{z},\mathbf{c})\) feeding both \(G_{s}\) and \(G_{t}\), getting the corresponding small-scale images \(G_{s}(\mathbf{z},\mathbf{c})\) and large-scale \(G_{t}(\mathbf{z},\mathbf{c})\). Then we calculate the cosine similarity \(s_{(i,j)}\) between any pair of images in \(G_{s}(\mathbf{z},\mathbf{c})\) as
\[\mathbf{S}_{s}(\mathbf{z},\mathbf{c})=\{\cos(G_{s}(z_{i},c_{i}),G_{s}(z_{j},c _{j}))_{\forall i\neq j}\} \tag{1}\]
and similarly for \(G_{t}\) we have:
\[\mathbf{S}_{t}(\mathbf{z},\mathbf{c})=\{\cos(G_{t}(z_{i},c_{i}),G_{t}(z_{j},c_{j}))_{\forall i\neq j}\} \tag{2}\]
Here the \(\cos\) denotes the cosine similarity. Next, we normalize these two vectors using softmax and calculate the KL divergence between vectors:
\[\mathcal{L}_{\mathrm{CDC}}=D_{\mathrm{KL}}\left(\mathrm{Softmax}(\mathbf{S}_{ s}),\mathrm{Softmax}(\mathbf{S}_{t})\right) \tag{3}\]
In this way, one can encourage the \(G_{t}\) to generate samples with a diversity similar to \(G_{s}\), relieving the mode collapse problem.
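A minimal PyTorch sketch of this loss is given below, assuming it is evaluated directly on batches of generated images (the released implementation may instead operate on intermediate features); all names are illustrative.

```python
import torch
import torch.nn.functional as F

def cdc_loss(source_imgs: torch.Tensor, target_imgs: torch.Tensor) -> torch.Tensor:
    """Cross-domain correspondence loss (Eqs. 1-3): pairwise cosine similarities
    within each batch are softmax-normalised and compared with a KL divergence."""
    def pairwise_cos(x):
        x = F.normalize(x.flatten(start_dim=1), dim=1)        # (B, D), unit norm
        sim = x @ x.t()                                        # (B, B) cosine similarities
        mask = ~torch.eye(x.shape[0], dtype=torch.bool, device=x.device)
        return sim[mask].view(x.shape[0], -1)                  # drop the i == j terms
    p_s = F.softmax(pairwise_cos(source_imgs), dim=-1)         # Softmax(S_s), Eq. (1)
    log_p_t = F.log_softmax(pairwise_cos(target_imgs), dim=-1) # log Softmax(S_t), Eq. (2)
    # D_KL(Softmax(S_s) || Softmax(S_t)); kl_div expects log-probabilities first.
    return F.kl_div(log_p_t, p_s, reduction="batchmean")
```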
**Other Techniques**: A patchy-level discriminator is also adopted in this work. We divided the astrophysical parameter space into two parts: the anchor region and the rest. The anchor region is a spherical region around training set parameters with a small radius. In this region, the GAN image \(G_{t}(\mathbf{z},\mathbf{c}_{\mathrm{anch}})\) has a good training sample to compare with. Thus, we apply the full discriminator with these parameters. If \(\mathbf{c}\) is located outside the anchor region, we only apply a patch discriminator: in this case, the discriminator does not calculate the loss of the whole image but calculates the loss of different patches of the image.
Since the small-scale information in both training sets is identical, we freeze the first two layers of the discriminator (Mo et al., 2020). We add the small-scale discriminator \(D_{s}\) loss to ensure the correctness of small-scale information. Our code is publicly available in this GitHub repo1.
Footnote 1: [https://github.com/dkn16/multi-fidel-gan-21cm](https://github.com/dkn16/multi-fidel-gan-21cm)
## 3 Dataset
The training dataset for this project consists of two parts: a small-scale dataset and a large-scale dataset. All the data are generated with 21cmFAST(Mesinger et al., 2011; Murray et al., 2020), and each simulation has distinct reionization parameters. Our parameters are the ionizing efficiency \(\zeta\) and the minimum virial temperature \(T_{\mathrm{vir}}\). We explored a range of \(10<\zeta<250\) and \(4<\log T_{\mathrm{vir}}<6\), and the parameters are sampled with Latin-Hypercube Sampling(McKay et al., 2000).
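For illustration, such a parameter set can be drawn with SciPy's quasi-Monte Carlo module as sketched below; the sample size used here is arbitrary.

```python
from scipy.stats import qmc

# Latin-Hypercube sampling of (zeta, log10 T_vir) over the ranges quoted above.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=100)                    # points in the unit square
params = qmc.scale(unit_samples, l_bounds=[10.0, 4.0],  # lower bounds: zeta, log10 T_vir
                   u_bounds=[250.0, 6.0])               # upper bounds: zeta, log10 T_vir
zeta, log_tvir = params[:, 0], params[:, 1]
```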
The small-scale dataset has a resolution of \((64,64,512)\) and consists of 30,000 simulations with a comoving box length of \((128,128,1024)\rm{Mpc}\). The third axis (\(z\)-axis) is along the line of sight (LoS), spanning a redshift range of \(7.51<z<11.93\). For each redshift, we run a realization and select the corresponding slice for our final data. We include the matter overdensity field \(\delta_{m}\) and the 21 cm brightness temperature \(T_{b}\) field for training. Since the overdensity field is highly correlated with other intensity mappings (IM) like CO and C[II] lines, we expect our method can be transferred to other IM images smoothly. For each sample, we cut four image slices, resulting in 120000 lightcone images with a size of \((2,64,512)\) in our small-scale dataset, containing both the overdensity and brightness temperature field.
Figure 1: An illustration of the cross-domain correspondence (CDC). 1 We first generate a set of samples with both small-scale GAN and large-scale GAN. 2 We then calculate the similarity between each image pair generated by the same GAN. 3 Finally, we normalize the similarity vector of each GAN with softmax, then compute the KL-divergence as the CDC. The samples shown here are results from our large-scale GAN.
Figure 3: _The upper panel_: the 2D power spectrum of GAN results versus test set PS, each calculated with clips of size (2,256,128). The redshift denotes the redshift of the center slice. Legends are the same as Fig. 2. _The lower panel_: the relative error, same as Fig. 2 but for power spectrum here.
Figure 2: _The upper panel_: global signal reproduced with large-scale GAN. Different colors denote different parameters, the solid line is calculated with the test set, while the dashed line is the GAN result. The shallow shaded region is the \(2\sigma\) scatter of the GAN images, while the thick shaded region is the \(2\sigma\) scatter of the test set images. _The lower panel_: relative error between GAN global signal and test set global signal, the grey dot-dashed line represents the 10% error line, while the data points near 0 are neglected.
The large-scale dataset has a \((256,256,512)\) resolution and consists of 80 simulations with a comoving box length of \((512,512,1024)\rm{Mpc}\), covering the same redshift range. As before, for each sample, we cut four slices and obtained 320 lightcone images with a size of \((2,256,512)\) in our large-scale dataset.
## 4 Results
Here we present the evaluation of our model results. A visual inspection of generated samples is shown in Fig. 1. We tested our results on three combinations of parameters, each having a distinct evolution history. For each parameter combination, we ran four simulations with distinct initial conditions generated from different random seeds for testing.
**Global Signal**: We calculated the global 21 cm signal of the GAN results. Limited by the size of the test set, the mean value is calculated with 1024 image samples. Our result is shown in Fig. 2. We see that GAN works well, with an error of mostly less than 5% and a well-matched \(2\sigma\) region.
**Power spectrum (PS)**: Fig. 3 shows the \(T_{b}\) auto-PS, \(T_{b}-\delta_{m}\) cross-PS and \(\delta_{m}\) auto-PS. GAN results perform well on small scales, with an error of less than \(10\%\), except when the PS is close to 0. On extremely large scales, the error can exceed \(50\%\). This is unsurprising because we lack training samples. The GAN still captures the large-scale power when the \(T_{b}\) signal has a high amplitude. Moreover, the relative error is insignificant compared with the sampling variance.
From \(T_{b}\) auto-PS figures (Fig. 3, top row), the change of lines shows an evolution with the time that power is transferred from small scale to large scale. Again, the accuracy of the cross-PS (Fig. 3, middle row) guarantees the correlation between \(T_{b}\) and \(\delta_{m}\). At early stages, the HI traces the matter field well, and the GAN \(T_{b}\) and \(\delta_{m}\) fields have positive cross-correlation at all scales. Later, the cross-correlation becomes negative due to the fact that dense regions hosted ionizing sources earlier and ionized first. Our GAN performs well in reproducing these features. The GAN samples with different parameters have similar matter PS (Fig. 3, bottom row), which agrees with the truth.
**Non-Gaussianity**: Here we employ the scattering transform (ST, e.g. Mallat, 2012; Allys et al., 2019; Cheng et al., 2020; Greig et al., 2022) coefficients as a non-Gaussian statistic to evaluate our GAN. A detailed description can be found in e.g. Cheng and Menard (2021). We calculated the second-order ST coefficients \(S_{2}\) as measures for non-Gaussianity with Kymatio(Andreux et al., 2020). As the image sample size grows, we set the kernel size scale \(j=0,3,6\) to capture more large-scale information. Results are shown in Fig. 4. When \((j_{1},j_{2})=(0,3)\), the error is less significant as \(\lesssim 10\%\). When \(j_{2}=6\) the error exceeds \(20\%\).
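The coefficients can be obtained with Kymatio roughly as sketched below; the exact ordering of the returned scattering channels follows the Kymatio documentation and is not spelled out here.

```python
import numpy as np
from kymatio.numpy import Scattering2D

# Scattering transform of a (256, 128) lightcone clip; J = 7 covers the
# kernel-size scales up to j = 6 used in the text.  The input is a random stand-in.
scattering = Scattering2D(J=7, shape=(256, 128))
clip = np.random.rand(256, 128).astype(np.float32)
coeffs = scattering(clip)                 # stacked zeroth/first/second-order channels
summary = coeffs.mean(axis=(-2, -1))      # spatially averaged coefficients
```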
## 5 Summary
In this paper, we introduce the few-shot transfer learning technique to build an emulator for large-scale 21 cm simulations. The large-scale GAN is trained with 80 simulations, and the relative error of statistics is less than \(10\%\) on small scales. On large scales, a mild increase in error arises due to insufficient training samples.
Figure 4: The upper panel: the second order scattering coefficients of GAN results versus test set result, each calculated with clips of size (2,256,128). The redshift denotes the redshift of the center slice. Legends are the same as Fig. 2. The lower panel: relative error, same as Fig. 2 but for second order scattering coefficients here.
Generating our multi-fidelity dataset requires \(\sim 1.2\times 10^{5}\) CPU hours, while a purely large scale dataset requires \(\sim 1.5\times 10^{5}\) CPU hours, with 5000 simulations, an optimistic estimate of dataset size consistent with e.g. Hassan et al. (2022); Andrianomena et al. (2022). Our method reduces the computational cost by 90%, which will enable us to emulate more complex simulations in the future.
## Acknowledgements
This work is supported by the National SKA Program of China (grant No. 2020SKA0110401), NSFC (grant No. 11821303), and the National Key R&D Program of China (grant No. 2018YFA0404502). We thank Xiaosheng Zhao, Ce Sui, and especially Richard Grumitt for inspiring discussions. We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported within this paper.
|
2308.05235 | Spatial Gated Multi-Layer Perceptron for Land Use and Land Cover Mapping | Convolutional Neural Networks (CNNs) are models that are utilized extensively
for the hierarchical extraction of features. Vision transformers (ViTs),
through the use of a self-attention mechanism, have recently achieved superior
modeling of global contextual information compared to CNNs. However, to realize
their image classification strength, ViTs require substantial training
datasets. Where the available training data are limited, current advanced
multi-layer perceptrons (MLPs) can provide viable alternatives to both deep
CNNs and ViTs. In this paper, we developed the SGU-MLP, a learning algorithm
that effectively uses both MLPs and spatial gating units (SGUs) for precise
land use land cover (LULC) mapping. Results illustrated the superiority of the
developed SGU-MLP classification algorithm over several CNN and CNN-ViT-based
models, including HybridSN, ResNet, iFormer, EfficientFormer and CoAtNet. The
proposed SGU-MLP algorithm was tested through three experiments in Houston,
USA, Berlin, Germany and Augsburg, Germany. The SGU-MLP classification model
was found to consistently outperform the benchmark CNN and CNN-ViT-based
algorithms. For example, for the Houston experiment, SGU-MLP significantly
outperformed HybridSN, CoAtNet, Efficientformer, iFormer and ResNet by
approximately 15%, 19%, 20%, 21%, and 25%, respectively, in terms of average
accuracy. The code will be made publicly available at
https://github.com/aj1365/SGUMLP | Ali Jamali, Swalpa Kumar Roy, Danfeng Hong, Peter M Atkinson, Pedram Ghamisi | 2023-08-09T21:39:57Z | http://arxiv.org/abs/2308.05235v1 | # Spatial Gated Multi-Layer Perceptron for Land Use and Land Cover Mapping
###### Abstract
Convolutional Neural Networks (CNNs) are models that are utilized extensively for the hierarchical extraction of features. Vision transformers (ViTs), through the use of a self-attention mechanism, have recently achieved superior modeling of global contextual information compared to CNNs. However, to realize their image classification strength, ViTs require substantial training datasets. Where the available training data are limited, current advanced multi-layer perceptrons (MLPs) can provide viable alternatives to both deep CNNs and ViTs. In this paper, we developed the SGU-MLP, a learning algorithm that effectively uses both MLPs and spatial gating units (SGUs) for precise land use land cover (LULC) mapping. Results illustrated the superiority of the developed SGU-MLP classification algorithm over several CNN and CNN-ViT-based models, including HybridSN, ResNet, iFormer, Efficientformer and CoAtNet. The proposed SGU-MLP algorithm was tested through three experiments in Houston, USA, Berlin, Germany and Augsburg, Germany. The SGU-MLP classification model was found to consistently outperform the benchmark CNN and CNN-ViT-based algorithms. For example, for the Houston experiment, SGU-MLP significantly outperformed HybridSN, CoAtNet, Efficientformer, iFormer and ResNet by approximately 15%, 19%, 20%, 21%, and 25%, respectively, in terms of average accuracy. The code will be made publicly available at [https://github.com/aj1365/SGUMLP](https://github.com/aj1365/SGUMLP)
Attention mechanism, image classification, spatial gating unit (SGU), vision transformers.
## I Introduction
Land use and land cover (LULC) is one of the most significant indicators of anthropogenic interaction with the natural environment. Massive growth in LU because of forest destruction, urbanization and soil erosion has altered the global landscape and caused greater stress on natural ecosystems across the world [1]. Modelers' dominant perception of cities is of a sophisticated structure with characteristics like occurrence, self-organization and non-linear relationships. Urbanization and growing urban populations have led to significant scientific debate since they lead to substantial shifts in agricultural use and environmental degradation [2]. Analysis of urban growth, including intense growth in urban sprawl, is essential for understanding its environmental consequences, as well as promoting the adoption of more sustainable forms of urban expansion. As a result, it is essential to organize and structure the process of LULC change in natural ecosystems.
Precise LULC mapping serves as the foundation for non-monetary assessment and is generally obtained by integrating a machine learning or deep learning algorithm with imagery from remote sensing. Deep learning models are capable of extracting adaptively the most important features from data using a data-driven approach. During the training phase, these models can achieve an effective parametric configuration by simultaneously training the associated classification model. This greatly enhances their ability to accurately represent complex data and avoid ambiguity [3]. Deep learning models have been progressively used for LULC mapping in recent years [4, 5]. In particular, Convolutional Neural Networks (CNNs) are widely utilized models for hierarchical feature extraction. Due to their self-attention system, vision transformers (ViTs) can model global contextual information more effectively than CNNs [6], but they require larger training datasets to maximize image classification accuracy. On the other hand, where fewer training data are available, current advanced Multi-layer Perceptrons (MLPs) can be used as an alternative to both deep CNNs and ViTs [7].
In this paper, we develop and propose an SGU-MLP, a deep learning classifier that employs MLPs and a spatial gating unit (SGU) for accurate LULC modeling. The SGU concept enables the algorithm to efficiently characterize complex spatial interactions across input data tokens without the use of positional information embedding as utilized in popular ViTs. The SGU-MLP model's final layer employs a structure entirely composed of multi-layer perceptrons (MLPs), eliminating the requirement for CNNs or ViTs and, consequently, minimizing the necessity for extensive training data.
This Letter introduces the SGU-MLP in Section II, illustrates the experiments and analyses the results in Section III, and highlights the concluding remarks in Section IV.
## II Proposed Classification Framework
As illustrated in Fig. 1, the SGU-MLP is developed for image classification using a small number of training data. For efficient application of the multi-scale representation in
the classification task, we incorporated a computationally light and straightforward depth-wise CNN-based architecture. As presented in Fig. 2, the MLP-Mixer layer of the developed model includes two different types of layers: (i) MLPs utilized across image patches for extraction of spatial information and (ii) MLPs utilized individually to extract per-location features from image inputs. In addition, in each MLP block, the SGU is utilized to enable the developed algorithm to effectively learn intricate spatial relationships among the tokens of the input data.
### _Depth-wise Convolution Block (DWC):_
The DWC architecture is light and straightforward, and is based on CNNs. With so many variables and the limited available training data, a higher probability of overfitting exists during the training process. Hence, to address overfitting and capture multi-scale feature information, we incorporated three depth-wise convolutions in parallel. These convolutions consist of filters with a size of 20 and kernel (\(k\)) sizes of \(1\times 1\), \(3\times 3\), and \(5\times 5\), respectively. Feature maps \(X\) with a size of \(9\times 9\times d\) are the input for the DWC block that produces output \(D_{Z}\), where \(d\) is the number of bands.
\[D_{Z}=\sum_{j=1,3,5}\mathrm{DWConv2D_{(k\times k)}(\textit{X})} \tag{1}\]
The output maps of the three depth-wise CNNs are added and fed to the MLP-Mixer blocks.
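A compact PyTorch sketch of such a block is shown below; the channel handling and the 'same' padding are assumptions made to preserve the \(9\times 9\times B\) patch size, since the reference implementation is the one in the linked repository.

```python
import torch
from torch import nn

class DepthwiseConvBlock(nn.Module):
    """Sketch of the DWC block (Eq. 1): three parallel depth-wise convolutions
    with 1x1, 3x3 and 5x5 kernels whose outputs are summed."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2,
                      groups=channels)          # depth-wise: one filter per band
            for k in (1, 3, 5)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, channels, 9, 9)
        return sum(branch(x) for branch in self.branches)
```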
### _Spatial gating unit (SGU):_
The SGU is designed to extract complex spatial interactions across tokens. Unlike the current ViT models, the SGU does not necessitate the use of positional embedding. In other words, the positional information is obtained through the use of spatial depth-wise convolutions [8], similar to the inverted bottlenecks employed in MobileNetV2 [9]. Considering the dense layer \(D\) in the MLP block, as illustrated in Fig. 1, the SGU uses a linear projection layer that applies a contraction operation across the spatial dimension of the cross-token interactions, as defined by:
\[f_{W,b}(D)=WD+b \tag{2}\]
where \(W\in R^{n\times n}\) defines a matrix that has a size equal to the input sequence length, while \(n\) and \(b\) present the sequence length and biases of the tokens. It should be highlighted that the spatial projection matrix of \(W\) is not dependent on the input data, contradicting the self-attention models where \(W(D)\) is created dynamically from the \(D\). The SGU can be formulated as:
\[S(D)=D\cdot f_{W,b}(D) \tag{3}\]
where element-wise multiplication is represented by \((\cdot)\). The SGU equation can be improved by dividing \(D\) into \(D1\) and \(D2\) along the channel dimension. Thus, the SGU can be formulated as:
\[S(D)=D1\cdot f_{W,b}(D2) \tag{4}\]
The output map of the DWC block is flattened and fed to the MLP-Mixer layer. Considering a dense layer of size \(256\times 256\), \(D1\) and \(D2\) both have sizes of \(256\times 128\); \(f_{W,b}(D2)\) has a size of \(256\times 128\), and \(S(D)\) likewise has a size of \(256\times 128\).
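The gating operation of Eqs. (2)-(4) can be sketched in PyTorch as follows; tensor shapes follow the \(256\times 256\) example above, and the module is illustrative rather than the released implementation.

```python
import torch
from torch import nn

class SpatialGatingUnit(nn.Module):
    """Sketch of the SGU (Eqs. 2-4): the input of shape (batch, n_tokens, d_channel)
    is split along the channel axis into D1 and D2; D2 is projected across the token
    (spatial) dimension by an (n_tokens x n_tokens) matrix W plus bias b, and the
    result gates D1 element-wise."""
    def __init__(self, d_channel: int, n_tokens: int):
        super().__init__()
        self.spatial_proj = nn.Linear(n_tokens, n_tokens)   # realises f_{W,b}

    def forward(self, d: torch.Tensor) -> torch.Tensor:
        d1, d2 = d.chunk(2, dim=-1)                                   # D1, D2: (B, n, d/2)
        gate = self.spatial_proj(d2.transpose(1, 2)).transpose(1, 2)  # f_{W,b}(D2)
        return d1 * gate                                              # S(D) = D1 . f_{W,b}(D2)
```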
### _Multi-layer Perceptron Mixer Block (MLP-Mixer):_
In current advanced deep vision architectures, layers combine features in one or more of the following ways: (1) at a given spatial location, (2) among various spatial locations, or (3) both simultaneously; in CNNs, \(k\times k\) convolutions (for \(k>1\)) and pooling operations perform (2). Convolutions with kernel size \(1\times 1\) perform operation (1), whereas convolutions with larger kernels accomplish both operations (1) and (2). Self-attention layers in ViTs and other attention-based structures include operations (1) and (2), while models based on MLPs only perform (1). The objective of the MLP-Mixer architecture is to separate cross-location (height and width mixing) operations from per-location (channel-mixing) operations, as presented in Fig. 2 [7]. A series of \(E\) non-overlapping image patches from the output feature of the DWC block \(D_{Z}\) is the input to the MLP-Mixer and is projected to a given hidden dimension \(C\), resulting in two-dimensional data \(\textbf{M}\in\mathcal{R}^{E\times d}\), where \(d\) denotes the number of image input bands. Given the input
Fig. 1: Graphical representation of spatial gated multi-layer perceptron framework for land use and land cover classification. The MLP-Mixer layer includes two MLPs to extract spatial information. \(\odot\) represents channel-wise concatenation.
image of size \(H\times W\), and patches of \(F\times F\), the number of patches would be \(E=\frac{H\times W}{F^{2}}\), where all resulting patches of images are projected into the same projection matrix. The MLP-Mixer consists of several identical size layers, where each layer has two MLP blocks. The first token-mixing (i.e., height and width mixing) MLP block is applied on the columns of \(M\), while the second MLP block (i.e., channel mixing) is utilized on the rows of the \(M\). Two fully connected layers are in each MLP block, and a non-linearity function is applied independently to each row of the input image tensors. As such, each MLP-Mixer can be formulated as:
\[U_{*,i}=M_{*,i}+W_{2}\,\xi\big(W_{1}\,LN(M)_{*,i}\big),\qquad i=1,\dots,B \tag{5}\]
\[Y_{j,*}=U_{j,*}+W_{4}\,\xi\big(W_{3}\,LN(U)_{j,*}\big),\qquad j=1,\dots,E \tag{6}\]
Notably, the MLP-Mixer has a linear computation complexity, which distinguishes it from vision transformers with quadratic computation complexity and, consequently, exhibits a high level of computational efficiency.
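For reference, one token-mixing/channel-mixing layer corresponding to Eqs. (5)-(6) can be sketched as below; in the SGU-MLP, each MLP block additionally applies the SGU before the GELU activation, which is omitted here for brevity, and the hidden widths are illustrative.

```python
import torch
from torch import nn

class MixerLayer(nn.Module):
    """Sketch of one MLP-Mixer layer: a token-mixing MLP applied across patches
    followed by a channel-mixing MLP applied per patch, each with LayerNorm and
    a residual connection."""
    def __init__(self, n_tokens: int, d_channel: int,
                 d_token_hidden: int = 256, d_channel_hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_channel)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, d_token_hidden), nn.GELU(),
            nn.Linear(d_token_hidden, n_tokens))
        self.norm2 = nn.LayerNorm(d_channel)
        self.channel_mlp = nn.Sequential(
            nn.Linear(d_channel, d_channel_hidden), nn.GELU(),
            nn.Linear(d_channel_hidden, d_channel))

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, n_tokens, d_channel)
        y = self.norm1(x).transpose(1, 2)                  # mix across tokens, Eq. (5)
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))            # mix across channels, Eq. (6)
        return x
```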
### _Spatial Gating Unit Multi-layer Perceptron (SGU-MLP):_
Let us consider three data modalities, \(X_{1}\), \(X_{2}\), and \(X_{3}\). From these datasets, image patches with the size of \(9\times 9\) are extracted and then concatenated. As seen in Fig. 1, the concatenated layer is fed to the DWC layer. After being fed into the DWC block, the input images of size \(9\times 9\times B\) result in equal feature maps of size \(9\times 9\times B\), where \(B\) represents the number of bands. The resulting feature map is then flattened and passed on to the MLP-Mixer blocks. The MLP-Mixer includes four blocks with patch sizes of 4, token dimension of 256, and channel dimension of 256. As discussed, in each MLP block, before the activation function (i.e., GELU), the SGU is employed to extract complex spatial interactions between the tokens. Finally, the last layer of the MLP-Mixer is a dense layer with a softmax activation function. The size of the last layer is equal to the number of existing classes in each study area.
## III Experimental Results
### _Experimental Data_
**Houston dataset:** This dataset was captured over the University of Houston campus and the neighboring urban area. It consists of a coregistered hyperspectral and multispectral dataset containing 144 and 8 bands, respectively, with \(349\times 1905\) pixels. More information can be found at [10].
**Berlin dataset:** This dataset has a spatial resolution of \(797\times 220\) pixels and contains 244 spectral bands with wavelengths ranging from 0.4 \(\mu\)m to 2.5 \(\mu\)m over Berlin. The Sentinel-1 dual-Pol (VV-VH) single-look complex (SLC) product represents the SAR data. The processed SAR data have a spatial resolution of \(1723\times 476\) pixels and a 13.89 m GSD. The HS image is nearest neighbor interpolated, as for the Houston dataset, to provide the same image size as the SAR data [11].
**Augsburg dataset:** This scene over the city of Augsburg, Germany includes three distinct datasets: a spaceborne HS image, a dual-Pol PolSAR image and a DSM image. The PolSAR data were obtained from the Sentinel-1 platform, and the HS and DSM data were obtained by DAS-EOC and DLR. All image spatial resolutions were downscaled to a single 30 m GSD. The scene describes four features from the dual-Pol (VV-VH) SAR image, 180 spectral bands from 0.4 \(\mu\)m to 2.5 \(\mu\)m for the HS image and one DSM image of \(332\times 485\) pixels [12].
### _Classification Results_
The classification capability of the developed SGU-MLP was evaluated against several CNN-based and cutting-edge CNN-ViT algorithms, including HybridSN [13], ResNet [14], iFormer [15], EfficientFormer [16] and CoAtNet [17]. In the Augsburg dataset, as seen in Table I, the developed SGU-MLP algorithm demonstrated superior classification performance with an average accuracy of 66.79% compared to ResNet (43.57%), CoAtNet(49.9%), Efficientformer (52.81%), iFormer (52.96%) and HybridSN (55.76%). The developed SGU-MLP classifier significantly increased the classification accuracy of the CNN-ViT-based algorithms of iFormer, Efficientformer, and CoAtNet by about 21%, 21%, and 25% in terms of average accuracy, as illustrated in Table I and Fig. 3.
In the Berlin study area, the SGU-MLP classifier, with an average accuracy of 66.26%, considerably improved upon the classification accuracy of the other CNN-ViT algorithms iFormer, CoAtNet and Efficientformer by approximately 5%, 9% and 9%, respectively, as shown in Table II and Fig. 4. The SGU-MLP achieved F-1 scores of 0.27, 0.41, 0.46, 0.48, 0.67, 0.72, 0.73 and 0.82 for the recognition of commercial areas, industrial areas, allotment, water, soil, low plants, forest and residential areas, respectively.
As shown in Table III and Fig. 5, with a kappa index of 86.91%, the SGU-MLP algorithm noticeably surpassed the classification performance of the ResNet (65.49%), iFormer (68.71%), Efficientformer (69.25%), CoAtNet (70.56%), and HybridSN (73.59%), respectively, in the Houston pilot site. The developed SGU-MLP classification model outperformed the other CNN and CNN-ViT-based algorithms of the HybridSN, CoAtNet, Efficientformer, iFormer, and ResNet by
Fig. 2: Graphical representation of MLP-Mixer layer.
about 15%, 19%, 20%, 21%, and 25%, respectively, in terms of average accuracy, as demonstrated in Table III.
### _Ablation study_
An ablation study was performed to better understand the contribution and significance of different parts of the developed SGU-MLP classification algorithm. As seen in Table IV, the inclusion of the DWC block and SGU block increased the classification accuracy of the MLP-Mixer model by approximately 4% and 1%, respectively, in terms of average accuracy for the Augsburg dataset. The highest classification accuracy was achieved by the inclusion of both the DWC and SGU blocks with an average accuracy of 66.79%, increasing the classification accuracy of the MLP-Mixer algorithm by about 8%.
In the Berlin dataset, as illustrated in Table V, the inclusion of the SGU block and DWC block increased the classification accuracy of the MLP-Mixer algorithm by about 3% and 4%, respectively, in terms of Kappa index. By incorporating both the DWC and SGU blocks, the highest classification was attained with a Kappa index of 58.06%. This increased the accuracy of the MLP-Mixer classifier by approximately 10%.
As demonstrated in Table VI, the inclusion of the DWC block and SGU block increased the accuracy of the MLP-Mixer algorithm by approximately 1% and 3%, respectively, in terms of average accuracy for the Houston dataset. By the inclusion of both the DWC and SGU blocks, the MLP-Mixer's classification accuracy was increased by approximately 9% to 89.33%.
## IV Conclusion
Convolutional Neural Networks (CNNs) are commonly utilized frameworks for hierarchical feature extraction. At the same time, due to the use of a self-attention system, vision transformers (ViTs) can achieve better modeling of global contextual information than CNNs. However, to realize their image classification capability, ViTs require large training datasets. To overcome this limitation, we developed the SGU-MLP algorithm based on advanced MLP models and a spatial gating unit for land use and land cover mapping, which demonstrated superior classification accuracy compared to several CNN and CNN-ViT-based models. For the Houston experiment, for example, with a Kappa index of 86.91%, the SGU-MLP algorithm significantly outperformed the classification accuracy of the ResNet (65.49%), iFormer (68.71%), Efficientformer (69.25%), CoAtNet (70.56%) and HybridSN (73.59%) algorithms. For the Augsburg dataset, the SGU-MLP algorithm, with an average accuracy of 66.79%, again demonstrated increased classification accuracy compared to ResNet (43.57%), CoAtNet (49.9%), Efficientformer (52.81%), iFormer (52.96%) and HybridSN (55.76%).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Class & MLP & SUU \(\times\) MLP & DWC \(\times\) MLP & SGUMLP \\ \hline Forest & 0.71 & **0.72** & 0.70 & 0.72 \\ Residual & 0.78 & 0.80 & 0.81 & **0.82** \\ Residual & 0.74 & 0.70 & 0.70 & **0.72** \\ Industrial & 0.73 & 0.80 & 0.81 & **0.82** \\ Low Plants & 0.84 & 0.70 & 0.66 & **0.72** \\ Soil & 0.66 & **0.70** & **0.70** & 0.67 \\ Alliment & 0.40 & 0.43 & **0.46** & **0.46** \\ Commercial & **0.31** & 0.28 & 0.25 & 0.27 \\ Water & 0.42 & 0.47 & **0.53** & 0.48 \\ \hline A\(\times\)100 & 64.17 & 64.56 & 69.11 & **0.59** \\ A\(\times\)100 & 64.11 & 66.2 & 65.00 & **66.26** \\ \(\times\)100 & 52.5 & 56.1 & 55.88 & **58.06** \\ \hline \end{tabular}
\end{table} TABLE V: Classification results of the Berlin dataset in terms of F-1 score, where \(\kappa\) = Kappa index, OA = Overall Accuracy, and AA = Average Accuracy.
|
2310.10314 | Recurrence of the plane Elephant random walk | We give a short proof of the recurrence of the two-dimensional elephant
random walk in the diffusive regime. This was recently established by Shuo Qin,
but our proof only uses very rough comparison with the standard plane random
walk. We hope that the method can be useful for other applications. | Nicolas Curien, Lucile Laulin | 2023-10-16T11:48:54Z | http://arxiv.org/abs/2310.10314v1 | # Recurrence of the plane Elephant random walk
###### Abstract
We give a short proof of the recurrence of the two-dimensional elephant random walk in the diffusive regime. This was recently established by Shuo Qin [5], but our proof only uses very rough comparison with the standard plane random walk. We hope that the method can be useful for other applications.
## 1 Introduction
The elephant random walk on \(\mathbb{Z}^{d}\) has been introduced in dimension \(1\) by Schutz and Trimper [6] and is a well-studied discrete process with reinforcement, see [3] for background and references. Its definition (see (1.2)) depends on a memory parameter3\(\alpha>0\) and it exhibits a phase transition going from a diffusive when \(\alpha<\alpha_{c}=\frac{1}{2}\) to a superdiffusive behavior when \(\alpha>\alpha_{c}\). We focus here on the two-dimensional case and establish recurrence of the process in the diffusive regime. This has been recently proved by Shuo Qin [5] but our approach is different and much shorter, however it gives less quantitive information and does not directly apply in the critical regime \(\alpha=\alpha_{c}\).
Footnote 3: The usual definition uses a memory parameter \(p\in[0,1]\) which is the probability to reproduce a (uniform) former step of the walk, or to move in one of the \(3\) remaining directions with the same probability \((1-p)/3\) so that \(\alpha=(4p-1)/3\), see [3, Eq. (1.4)].
Notation.We write \(\mathbf{e}_{i}\) the four directions of \(\mathbb{Z}^{2}\) for \(1\leq i\leq 4\). We shall write \((X_{k}:k\geq 0)\) for the canonical underlying process starting from \(\mathbf{0}:=(0,0)\in\mathbb{Z}^{2}\), we denote its steps by \(\Delta X_{k}=X_{k+1}-X_{k}\in\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}, \mathbf{e}_{4}\}\) and we introduce for \(1\leq i\leq 4\) the centered counting direction processes \(D_{k}^{[X]}(\mathbf{e}_{i})\) defined by
\[D_{k}^{[X]}(\mathbf{e}_{i})=\sum_{j=0}^{k-1}\mathbf{1}\{X_{j+1}-X_{j}= \mathbf{e}_{i}\}-\frac{k}{4},\qquad\text{ in particular notice that }\quad\sum_{i=1}^{4}D_{n}^{[X]}(\mathbf{e}_{i})=0. \tag{1.1}\]
For any stopping time \(\theta\), we denote by \(X^{(\theta)}\) the shifted process \(X_{k}^{(\theta)}=X_{\theta+k}-X_{\theta}\) for \(k\geq 0\). Finally \(\mathcal{F}_{n}\) is the canonical filtration generated by the first \(n\) steps of the walk and we use \(X_{[0,n]}\) as a shorthand for \((X_{k}:0\leq k\leq n)\).
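To make the definition concrete, a small simulation sketch of the two-dimensional walk (following the description of footnote 3, with the convention that the first step is uniform) is given below; the function name and the choice of first-step rule are conventions of this sketch only.

```python
import numpy as np

def elephant_walk_2d(n_steps, p, seed=None):
    """Simulate a 2D elephant random walk with memory parameter p, so that
    alpha = (4p - 1)/3: at each step a former step is chosen uniformly at
    random and repeated with probability p; otherwise one of the three other
    directions is taken, each with probability (1 - p)/3.  The first step is
    drawn uniformly among the four directions (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    directions = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])  # e_1, ..., e_4
    steps = np.empty(n_steps, dtype=int)
    steps[0] = rng.integers(4)
    for k in range(1, n_steps):
        remembered = steps[rng.integers(k)]        # uniformly chosen former step
        if rng.random() < p:
            steps[k] = remembered
        else:
            others = [d for d in range(4) if d != remembered]
            steps[k] = others[rng.integers(3)]
    # Positions X_0, ..., X_n obtained by summing the chosen unit steps.
    return np.vstack([[0, 0], np.cumsum(directions[steps], axis=0)])

# Example: path = elephant_walk_2d(10_000, p=0.6)   # alpha = (4*0.6 - 1)/3 < 1/2
```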
|
2302.07479 | Colossal room-temperature electrocaloric strength aided by hydrostatic
pressure in lead-free multiferroic solid solutions | Solid-state cooling applications based on the electrocaloric (EC) effect are
particularly promising from a technological point of view due to their downsize
scalability and natural implementation in circuitry. However, EC effects
typically occur far from room temperature, involve materials that contain toxic
substances and require relatively large electric fields ($\sim 100$-$1000$ kV
cm$^{-1}$) that cause fateful leakage current and dielectric loss problems.
Here, we propose a possible solution to these practical issues that consists in
concertedly applying hydrostatic pressure and electric fields on lead-free
multiferroic materials. We theoretically demonstrate this strategy by
performing first-principles simulations on supertetragonal
BiFe$_{1-x}$Co$_{x}$O$_{3}$ solid solutions (BFCO). It is shown that
hydrostatic pressure, besides adjusting the occurrence of EC effects to near
room temperature, can reduce enormously the intensity of the driving electric
fields. For pressurized BFCO, we estimate a colossal room-temperature EC
strength, defined like the ratio of the adiabatic EC temperature change by the
applied electric field, of $\sim 1$ K cm kV$^{-1}$, a value that is several
orders of magnitude larger than those routinely measured in uncompressed
ferroelectrics. | CΓ©sar MenΓ©ndez, Claudio Cazorla | 2023-02-15T05:51:35Z | http://arxiv.org/abs/2302.07479v1 | Colossal room-temperature electrocaloric strength aided by hydrostatic pressure in lead-free multiferroic solid solutions
###### Abstract
Solid-state cooling applications based on the electrocaloric (EC) effect are particularly promising from a technological point of view due to their downsize scalability and natural implementation in circuitry. However, EC effects typically occur far from room temperature, involve materials that contain toxic substances and require relatively large electric fields (\(\sim\) 100-1000 kV cm\({}^{-1}\)) that cause fateful leakage current and dielectric loss problems. Here, we propose a possible solution to these practical issues that consists in concertedly applying hydrostatic pressure and electric fields on lead-free multiferroic materials. We theoretically demonstrate this strategy by performing first-principles simulations on supertetragonal BiFe\({}_{1-x}\)Co\({}_{x}\)O\({}_{3}\) solid solutions (BFCO). It is shown that hydrostatic pressure, besides adjusting the occurrence of EC effects to near room temperature, can reduce enormously the intensity of the driving electric fields. For pressurized BFCO, we estimate a colossal room-temperature EC strength, defined like the ratio of the adiabatic EC temperature change by the applied electric field, of \(\sim\) 1 K cm kV\({}^{-1}\), a value that is several orders of magnitude larger than those routinely measured in uncompressed ferroelectrics.
One of the limiting factors of modern microelectronic devices is their tremendous heat dissipation density, which needs to be mitigated through cooling in order to ensure proper performance. Current refrigeration technologies, however, rely on compression cycles of environmentally harmful gases and cannot be scaled down to microchip dimensions. Electrocaloric (EC) cooling is a highly promising solid-state refrigeration technology for thermal management of chips and microcircuitity owing to its high efficiency, environmental friendliness, and easy miniaturization [1]. EC refrigeration exploits the reversible thermal change of ferroelectric materials resulting from phase transitions induced by external electric field variations. Large EC isothermal entropy changes, \(\Delta S_{\rm EC}\), of \(\sim\) 10 J K\({}^{-1}\)kg\({}^{-1}\) and adiabatic temperature changes, \(\Delta T_{\rm EC}\), of \(\sim\) 1 - 10 K have been measured in ferroelectric materials like BaTiO\({}_{3}\)[2; 3], Pb(Zr,Ti)O\({}_{3}\)[4] and HfO\({}_{2}\)[5], to cite few examples.
Nonetheless, unfortunately, the largest EC effects observed to date normally occur at temperatures far from room temperature [6], involve materials that contain toxic substances like lead and require large electric fields that are energetically costly and produce adverse leakage currents and dielectric losses [7; 8]. Recently, several materials design strategies have been proposed to overcome these common EC problems. For instance, by exploiting electrostatic coupling and interface effects in lead-free ferroelectric relaxor heterostructures, an unprecedentedly large EC adiabatic temperature shift of \(\approx\) 23 K has been realized near room temperature for moderate electric bias (\(\varepsilon_{c}\sim\) 100 kV cm\({}^{-1}\)) [9]. Nevertheless, the magnitude of such EC effects can be strongly influenced by the specific details of the heterostructure synthesis process and thus in practice \(\Delta T_{\rm EC}\) may strongly fluctuate from one sample to another. Another recent EC advancement has been reported for the layered hybrid perovskite ferroelectric [(CH\({}_{3}\))\({}_{2}\)CHCH\({}_{2}\)NH\({}_{3}\)]\({}_{2}\)PbCl\({}_{4}\)[10], in which a sharp first-order ferroelectric phase transition associated to a high-entropy change occurs instead of the continuous phase transformation associated to a low-entropy change that is characteristic of inorganic ferroelectric perovskites [11]. In this case, a giant \(\Delta T_{\rm EC}\) of 11.1 K has been measured at room temperature for a small electric field of 29.7 kV cm\({}^{-1}\). However, the implicated material still contains lead and the degree of reversibility associated to such giant EC effects appears to be quite limited.
In this work, we propose a completely different approach for the enhancement of EC effects that consists in the application of multiple external fields on lead-free multiferroic materials able to undergo sharp first-order phase transitions. In particular, we demonstrate by means of computational first-principles methods that the sequential operation of hydrostatic pressure and electric fields in BiFe\({}_{1-x}\)Co\({}_{x}\)O\({}_{3}\) solid solutions (BFCO) can trigger large and inverse EC effects of \(\Delta S_{\rm EC}\approx\) 5 J K\({}^{-1}\)kg\({}^{-1}\) and \(\Delta T_{\rm EC}\approx-\)5 K at room temperature. Moreover, aided by pressure BFCO displays a colossal EC strength of \(\sim\) 1 K cm kV\({}^{-1}\), defined like \(|\Delta T_{\rm EC}|/\varepsilon_{c}\), which surpasses by several orders of magnitude the typical values reported for uncompressed ferroelectrics.
## Results
**Phase competition in BFCO under pressure.** At room temperature and zero pressure, BiFe\({}_{1-x}\)Co\({}_{x}\)O\({}_{3}\) solid solutions (BFCO) can be stabilized in two different polymorphs, depending on the relative content of Fe/Co atoms, exhibiting rhombohedral (\(\mathcal{R}\)) and tetragonal (\(\mathcal{T}\)) symmetries [12; 13; 14]. For relative cobalt contents of \(0\leq x\lesssim 0.25\), the BFCO ground state is the \(\mathcal{R}\) phase, which is analogous to the ground state of bulk BiFeO\({}_{3}\)
[12; 13; 14; 15]. This rhombohedral phase presents an electric polarization of 60-80 \(\mu\)C cm\({}^{-2}\) that is oriented along the pseudocubic direction [111] (Fig. 1a) and G-type antiferromagnetic spin ordering (AFM-G, the net magnetic moment of each transition metal ion is antiparallel to those of its six first nearest neighbours). For larger relative cobalt contents, \(0.25<x\), the BFCO ground state corresponds to the \(\mathcal{T}\) phase, which is analogous to the ground state of bulk BiCoO\({}_{3}\)[12; 13; 14; 16; 17]. This tetragonal phase presents a giant electric polarization of 160-180 \(\mu\)C/cm\({}^{2}\) oriented along the pseudocubic direction [001] (Fig. 1a), hence sometimes it is referred to as "supertetragonal", and C-type antiferromagnetic spin ordering (AFM-C, the net magnetic moment of each transition metal ion is parallel to those of its two first nearest neighbours located along the polar axis and antiparallel to those of its other four first nearest neighbours).
Under increasing temperature and for relative cobalt contents of \(x\lesssim 0.25\), the supertetragonal \(\mathcal{T}\) phase can be stabilized over the \(\mathcal{R}\) phase owing to its larger vibrational entropy [12; 13; 14]. Such a \(T\)-induced phase transition is clearly of first-order type (or discontinuous) since the volume change associated with it is huge (\(\sim 10\%\)). To the best of our knowledge, there are no experimental studies on BFCO under pressure. Here, we remedy this lack of information by carrying out accurate first-principles calculations based on density functional theory (DFT, Methods) [14; 18]. Figure 1b shows the estimated hydrostatic pressure that is necessary to drive the \(\mathcal{T}\rightarrow\mathcal{R}\) phase transition at low temperatures (i.e., disregarding entropy and also likely quantum nuclear effects) and for compositions in the interval \(0.25\leq x\leq 0.50\). This transition pressure is found to steadily, and significantly, decrease under increasing Fe content. For instance, \(p_{t}\) amounts to 1.4 GPa at \(x=0.50\) and to 0.3 GPa at \(x=0.25\). As expected, the closer the cobalt content is to the \(\mathcal{T}\)-\(\mathcal{R}\) morphotropic phase boundary (\(x_{c}\approx 0.25\)), the easier it becomes to switch from the supertetragonal to the rhombohedral phase with pressure.
Simulating temperature effects in materials with first-principles methods is computationally very intensive and laborious. However, temperature effects are critical for the assessment of possible caloric phenomena and hence cannot be neglected in the present study. We employed the quasi-harmonic approximation (QHA) [14; 18] to calculate _ab initio_ Gibbs free energies for BFCO in the \(\mathcal{T}\) and \(\mathcal{R}\) phases over broad pressure, temperature and electric field conditions, thus allowing for the estimation of barocaloric and electrocaloric effects (Methods).
Figure 1c shows the \(p\)-\(T\) phase diagram calculated for BFCO at a composition of \(x=0.50\), hereafter referred to as BFCO\({}_{0.5}\). Therein, it can be seen that \(p_{t}\) consistently increases with increasing temperature, reaching a value of 1.24 GPa at room temperature. In spite of such a relatively large pressure, in what follows we present multicaloric results obtained for bulk BFCO\({}_{0.5}\) at and near room temperature since, from a computational point of view, this solid solution is highly affordable (i.e., the size of the corresponding simulation cells is among the smallest, which makes the QHA free energy calculations feasible). In practice, much smaller pressures of the order of 0.1 GPa can be attained by reducing the relative content of Co ions (Fig. 1b) without significantly affecting the main conclusions presented in the next sections.
**Barocaloric performance of BFCO\({}_{0.5}\).** We start by analyzing the barocaloric effects induced by hydrostatic pressure in bulk BFCO\({}_{0.5}\) in the absence of electric fields. Figures 2a-b show the compression required
Figure 1: **Phase competition in BFCO under pressure.** **a** Sketch of the competitive tetragonal (\(\mathcal{T}\)) and rhombohedral (\(\mathcal{R}\)) multiferroic phases. The corresponding electric polarization and antiferromagnetic spin ordering are indicated. **b** The \(\mathcal{T}\rightarrow\mathcal{R}\) transition pressure calculated at \(T=0\) K and disregarding likely quantum nuclear effects, expressed as a function of composition. **c** First-principles \(p\)-\(T\) phase diagram of BFCO\({}_{0.5}\). Phase transition points were determined under the condition \(\Delta G(p,T_{t})\equiv G_{\mathcal{T}}(p,T_{t})-G_{\mathcal{R}}(p,T_{t})=0\).
to induce the \(\mathcal{T}\rightarrow\mathcal{R}\) phase transition as a function of temperature, \(p_{t}\), and the accompanying relative volume change. The estimated phase transition volume change is negative and very large, as it amounts to \(\sim 8\%\) in absolute value. Such a huge relative volume change augurs a large phase transition entropy change, as can be inferred from the Clausius-Clapeyron relation \(\Delta S_{t}=\Delta V\cdot\frac{dp_{t}}{dT}\). However, after doing the calculations and assuming that \(\Delta S_{\mathrm{BC}}\approx\Delta S_{t}\) (Methods), it was found that the ensuing barocaloric isothermal entropy shifts were actually quite modest (Fig. 2c). For instance, at room temperature we obtained \(|\Delta S_{\mathrm{BC}}|=1.7\) J K\({}^{-1}\) mol\({}^{-1}\) (5.4 J K\({}^{-1}\) kg\({}^{-1}\)), which is about one order of magnitude smaller than the giant barocaloric entropy changes found in superionic and plastic crystals (\(\sim 100\) J K\({}^{-1}\) kg\({}^{-1}\)) [19; 20; 21; 22; 23; 24; 25; 26; 27]. Under decreasing temperature, \(|\Delta S_{\mathrm{BC}}|\) slightly increases (e.g., 2.8 J K\({}^{-1}\) mol\({}^{-1}\) at \(T=200\) K); however, the estimated values are still quite small. The reason for these outcomes is that \(p_{t}\) barely changes with temperature in the explored thermodynamic range (i.e., the temperature derivative of the phase transition pressure amounts only to \(\sim 10^{-3}\) GPa K\({}^{-1}\), Fig. 2a).
The revealed minute \(T\)-induced \(p_{t}\) variation, on the other hand, implies sizeable changes in the phase transition temperature, \(T_{t}\), induced by small pressure shifts (since \(dT_{t}/dp=\left[dp_{t}/dT\right]^{-1}\)), thus suggesting possibly large barocaloric thermal shifts in bulk BFCO\({}_{0.5}\). Figure 2d shows the barocaloric adiabatic temperature changes, \(\Delta T_{\mathrm{BC}}\), estimated as a function of temperature (Methods). At room temperature (\(T=200\) K),
Figure 2: **Barocaloric descriptors of BFCO\({}_{0.5}\) estimated with DFT-based first-principles methods.****a**\(\mathcal{T}\rightarrow\mathcal{R}\) phase transition pressure expressed as a function of temperature. **b** Relative volume change occurring during the \(p\)-induced \(\mathcal{T}\rightarrow\mathcal{R}\) phase transition as referred to that of the tetragonal phase. **c** Barocaloric isothermal entropy change, \(\Delta S_{\mathrm{BC}}\), expressed as a function of temperature. **d** Barocaloric adiabatic temperature change, \(\Delta T_{\mathrm{BC}}\), expressed as a function of temperature. Both \(\Delta S_{\mathrm{BC}}\) and \(\Delta T_{\mathrm{BC}}\) were estimated indirectly by using the Clausius-Clapeyron relation (Methods). Solid lines in the figure are simple eye-guides.
\(\Delta T_{\rm BC}\) was found to amount to 4.7 K (6.5 K), which, although it cannot rival the barocaloric adiabatic temperature changes reported for superionic and plastic crystals (\(\sim 10\) K) [19; 20; 21; 22; 23; 24; 25; 26; 27], shows promise in the context of electrocaloric effects (\(\sim 1\)-10 K).
The barocaloric results presented above were obtained with the indirect Clausius-Clapeyron (CC) method, which is not exact [19]. To assess the extent of the employed approximations, we theoretically mimicked quasi-direct barocaloric experiments [19; 23] in which entropy curves are estimated as a function of pressure and temperature and from which \(\Delta S_{\rm BC}\) and \(\Delta T_{\rm BC}\) can be straightforwardly determined (Figs. 3a-b) [27]. Moreover, with this quasi-direct estimation approach it is also possible to determine, for a given pressure shift, \(\Delta p\), the temperature span, \(T_{\rm span}\), over which barocaloric effects can be operated (Fig. 3b). In view of the huge \(dT_{t}/dp\) of \(\sim 10^{3}\) K GPa\({}^{-1}\) estimated for BFCO\({}_{0.5}\), giant \(T_{\rm span}\) values are anticipated [28].
Figure 3c shows the results of our quasi-direct barocaloric descriptor estimations. At room temperature and \(T=200\) K, we obtained adiabatic temperature changes of \(2.0\pm 2.5\) and \(4.0\pm 2.5\) K, respectively. Within the numerical uncertainties, these results are compatible with our previous estimations obtained with the CC method; however, the reported error bars are unacceptably large. The reasons for the relatively huge numerical uncertainties on \(\Delta T_{\rm BC}\) are the small \(\Delta S_{t}\) and the large \(p\)-induced \(T_{t}\) shifts involved in its quasi-direct estimation (Fig. 3a). Thus, unfortunately, in the present case it is not possible to discern the actual precision of the barocaloric adiabatic temperature changes obtained with the approximate CC method. Nevertheless, the estimation of \(T_{\rm span}\) is still possible given its noticeably large size (Figs. 3a-b). Considering an initial compression of 1.18 GPa, we obtained \(T_{\rm span}\approx 60\) K for a small pressure shift of 0.06 GPa (calculated by adding up all the \(\Delta T_{\rm span}\) increments shown in Fig. 3c). This result is very encouraging since it indicates that, in spite of the relative smallness of \(\Delta S_{\rm BC}\) and \(\Delta T_{\rm BC}\), barocaloric effects in BFCO\({}_{0.5}\) could be operated over unusually wide temperature ranges.
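To make the quasi-direct construction of Figs. 3a-b explicit, the short script below sketches how \(\Delta S_{\rm BC}\) and \(\Delta T_{\rm BC}\) follow from entropy curves \(S(T)\) evaluated at two pressures: the isothermal entropy change is the vertical distance between the curves and the adiabatic temperature change is the horizontal one. The toy entropy curves (a smooth background plus a step at the transition), the grid and all numerical values are illustrative assumptions; in the actual analysis the curves come from the QHA free energies.

```python
import numpy as np

T = np.linspace(200.0, 340.0, 1401)                 # temperature grid (K)

def entropy_curve(T, T_t, dS_t):
    """Toy S(T): smooth background plus a step of height dS_t (J K^-1 kg^-1)
    centred at the transition temperature T_t."""
    background = 2.0 * (T - T.min())
    step = dS_t / (1.0 + np.exp(-(T - T_t) / 0.5))
    return background + step

S_p0 = entropy_curve(T, T_t=300.0, dS_t=5.4)        # entropy at the initial pressure p0
S_p1 = entropy_curve(T, T_t=320.0, dS_t=5.4)        # entropy after a pressure shift dp

dS_BC = S_p1 - S_p0                                  # isothermal entropy change at each T

# Adiabatic temperature change: follow a constant-entropy (horizontal) line from
# the p0 curve to the p1 curve; both toy curves are monotonic, so S_p1 can be inverted.
dT_BC = np.interp(S_p0, S_p1, T) - T

i = np.argmin(np.abs(T - 300.0))
print(f"T = {T[i]:.0f} K: dS_BC = {dS_BC[i]:+.2f} J/K/kg, dT_BC = {dT_BC[i]:+.2f} K")
```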
**Electrocaloric performance of pressurized BFCO\({}_{0.5}\).** The electric polarizations, \(P\), of BFCO\({}_{0.5}\) in the \(\mathcal{R}\) and \(\mathcal{T}\) phases are significantly different; for instance, \(P\) in the supertetragonal phase is more than two times larger than that in the rhombohedral phase [14], amounting to polarization modulus differences of \(>100\) \(\mu\)C cm\({}^{-2}\) (Fig. 4b). Such a huge electric polarization disparity seems very promising from an electrocaloric (EC) point of view, as can be inferred from the electric Clausius-Clapeyron relation \(\Delta S_{t}=-\Delta P\cdot\frac{d\mathcal{E}_{c}}{dT}\), where \(\Delta S_{t}\) represents the entropy change associated with the field-induced phase transition and \(\mathcal{E}_{c}\) the electric field necessary to switch from the \(\mathcal{R}\) to the \(\mathcal{T}\) phase. Figure 4a shows \(\mathcal{E}_{c}\) estimated as a function of temperature for a fixed pressure of 1.25 GPa (Methods), which has been selected to ensure proper stabilization of the \(\mathcal{R}\) phase for \(T\leq 300\) K. As can be clearly seen therein, the critical electric field steadily decreases with increasing temperature, ranging from 43 kV cm\({}^{-1}\) at 200 K to \(\approx 2\) kV cm\({}^{-1}\) at room temperature.
Figures 4c-e show the electrocaloric isothermal entropy and adiabatic temperature changes, \(\Delta S_{\rm EC}\) and \(\Delta T_{\rm EC}\), estimated for compressed BFCO\({}_{0.5}\) using the indirect CC approach (Methods). In this case, the sign of the EC descriptors indicates that the caloric effect is inverse (i.e., \(\Delta T<0\)), which follows from the fact that the high-entropy \(\mathcal{T}\) phase, which presents the largest electric polarization, is stabilized via application of the external electric bias. As expected, the size and temperature dependence
Figure 3: **Barocaloric performance of BFCO\({}_{0.5}\) directly estimated with DFT-based first-principles methods.****a** Entropy curves expressed as a function of temperature and applied pressure shift, \(\Delta p\equiv p-p_{0}\). **b** Direct estimation of the adiabatic temperature change, \(\Delta T_{\rm BC}\), and temperature span increment, \(\Delta T_{\rm span}\). The latter quantity is calculated among consecutive pressure shifts of 0.01 GPa, hence for a total pressure shift of \(\Delta p=\sum_{i}\Delta p_{i}\) the corresponding temperature span is \(T_{\rm span}=\sum_{i}\Delta T_{\rm span,i}\). **c** Barocaloric descriptors expressed as a function of the applied pressure shift.
of \(|\Delta S_{\rm EC}|\) and \(|\Delta T_{\rm EC}|\), which are directly related through the temperature and heat capacity (Fig. 4d, Methods), are very similar to those of \(|\Delta S_{\rm BC}|\) and \(|\Delta T_{\rm BC}|\) since the underlying phase transitions are equivalent. For instance, at \(T=200\) K we estimated an electrocaloric adiabatic temperature change of \(-6.9\) K and at room temperature of \(-4.8\) K, to be compared with the analogous barocaloric shifts of \(+6.5\) and \(+4.7\) K. These \(\Delta T_{\rm EC}\) values are very promising, especially when considering the small size of the required driving electric fields (that is, \(\mathcal{E}_{c}\sim 1\)-\(10\) kV cm\({}^{-1}\)).
Figure 4f shows the electrocaloric strength of BFCO\({}_{0.5}\), \(\Lambda_{\rm EC}\), expressed as a function of temperature; this quantity is defined as the ratio of \(|\Delta T_{\rm EC}|\) to the corresponding electric bias. At \(T=200\) K, the attained adiabatic temperature change is highest; however, the required switching electric field is also largest, so the resulting electrocaloric strength is smaller than that obtained at higher temperatures. Still, the calculated \(\Lambda_{\rm EC}\) amounting to \(0.2\) K cm kV\({}^{-1}\) is already comparable to the record experimental values reported for oxide and hybrid organic-inorganic perovskites [2; 3; 10]. Remarkably, with increasing temperature the electrocaloric strength of BFCO\({}_{0.5}\) noticeably increases, reaching a maximum, and colossal, value of \(2.2\) K cm kV\({}^{-1}\) at \(T=300\) K. These figures will be put into context in the next section; in what follows, we explain how the dual response of BFCO\({}_{0.5}\) to mechanical and electric stimuli may be exploited in practical solid-state cooling cycles.
**Proposed \(p\)-\(E\) multicaloric cycle.** Single-stimulus solid-state cooling cycles typically consist of four thermodynamic steps, two involving the adiabatic switching on and off of the applied external field and the other two involving constant-field heat transfer processes with the environment and the system to be refrigerated [27]. In the present work, we propose an original multi-stimuli solid-state cooling cycle consisting of eight thermodynamic steps that has been designed to minimize the applied electric field, thus maximizing \(\Lambda_{\rm EC}\), and with a cumulative multicaloric performance of \(|\Delta T_{\rm MC}|=|\Delta T_{\rm BC}|+|\Delta T_{\rm EC}|\) and \(|\Delta S_{\rm MC}|=|\Delta S_{\rm BC}|+|\Delta S_{\rm EC}|\).
Figure 5 sketches the envisaged multi-stimuli solid-state cooling cycle comprising hydrostatic pressure and electric fields being applied on multiferroic lead-free
Figure 4: **Electrocaloric performance of BFCO\({}_{0.5}\) estimated with DFT-based first-principles methods at a fixed pressure of \(1.25\) GPa.****a** Critical electric field applied along the [001] direction inducing the \(\mathcal{R}\rightarrow\mathcal{T}\) phase transition. **b** Electric polarization change along the [001] direction occurring during the \(\mathcal{E}\)-induced \(\mathcal{R}\rightarrow\mathcal{T}\) phase transition. **c** Electrocaloric isothermal entropy change, \(\Delta S_{\rm EC}\), calculated for the \(\mathcal{E}\)-induced \(\mathcal{R}\rightarrow\mathcal{T}\) phase transformation in compressed BFCO\({}_{0.5}\). **d** Heat capacity of compressed BFCO\({}_{0.5}\). **e** Electrocaloric adiabatic temperature change, \(\Delta T_{\rm EC}\), calculated for the \(\mathcal{E}\)-induced \(\mathcal{R}\rightarrow\mathcal{T}\) phase transformation in compressed BFCO\({}_{0.5}\). **f** Electrocaloric strength of compressed BFCO\({}_{0.5}\). Both \(\Delta S_{\rm EC}\) and \(\Delta T_{\rm EC}\) were estimated indirectly by using the Clausius-Clapeyron relation (Methods). Solid lines in the figure are simple eye-guides.
BFCO solid solutions near room temperature. The cycle starts with multiferroic BFCO in the supertetragonal \(\mathcal{T}\) phase at temperature \(T\). Subsequently, hydrostatic pressure is adiabatically applied on BFCO so that it transforms into the \(\mathcal{R}\) phase and experiences a temperature increase of \(|\Delta T_{\text{BC}}|\). In the third step, heat is released to the ambient, \(\delta Q_{\text{BC}}\), and the initial temperature of the cycle is restored; compressed BFCO still remains in the \(\mathcal{R}\) phase. Next, an electric field is adiabatically applied on compressed BFCO so that it transforms into the \(\mathcal{T}\) phase, thus experiencing a temperature decrease of \(|\Delta T_{\text{EC}}|\). In the fifth step, heat is absorbed by the system, \(\delta Q_{\text{EC}}\), and the initial temperature of the cycle is restored; compressed and electrically biased BFCO remains in the \(\mathcal{T}\) phase. Subsequently, the electric field is adiabatically removed, so that BFCO transforms into the \(\mathcal{R}\) phase and experiences a temperature increase of \(|\Delta T_{\text{EC}}|\). In the seventh step, heat is released to the ambient, \(\delta Q_{\text{EC}}\), and the initial temperature of the cycle is restored; compressed BFCO remains in the \(\mathcal{R}\) phase. Finally, hydrostatic pressure is adiabatically released so that BFCO transforms back into the \(\mathcal{T}\) phase and experiences a temperature decrease of \(|\Delta T_{\text{BC}}|\). Heat is then absorbed by the system, \(\delta Q_{\text{BC}}\), and the initial temperature of the cycle is restored, thus completing an entire multi-stimuli cycle.
Upon completion of a multi-stimuli cycle, multiferroic BFCO is able to remove an amount of heat equal to \(|\delta Q_{\text{BC}}|+|\delta Q_{\text{EC}}|\), or equivalently, \(T\cdot(|\Delta S_{\text{BC}}|+|\Delta S_{\text{EC}}|)\), from the targeted system to be refrigerated and release it to the ambient (thus cooling it down). The described multi-stimuli cycle lends itself to several useful variations. For instance, the state reached in the seventh step is thermodynamically equivalent to that attained in the third; therefore, one could recursively perform the electrocaloric subcycle consisting of steps (3)-(4)-(5)-(6) which entails application and removal of an electric bias under fixed hydrostatic pressure (dashed lines in Fig. 5). Likewise, if the multi-stimuli cooling cycle starts with multiferroic BFCO in the rhombohedral \(\mathcal{R}\) phase instead of the \(\mathcal{T}\) phase, due to some composition synthesis constraints, for example, then the sequential application of hydrostatic pressure and electric field explained above needs to be swapped.
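As a compact illustration of this bookkeeping, the snippet below steps through the eight stages of the proposed cycle using the room-temperature values quoted above (\(|\Delta T_{\rm BC}|\approx 4.7\) K, \(|\Delta T_{\rm EC}|\approx 4.8\) K, \(|\Delta S_{\rm BC}|\approx 5.4\) J K\({}^{-1}\) kg\({}^{-1}\) and \(|\Delta S_{\rm EC}|\approx 5\) J K\({}^{-1}\) kg\({}^{-1}\)). The heats exchanged in the isothermal steps are approximated by \(T_{\rm amb}|\Delta S|\); this is a rough accounting sketch under these assumptions, not a thermodynamic simulation.

```python
T_amb = 300.0                          # ambient (starting) temperature, K
dT_BC, dT_EC = +4.7, -4.8              # adiabatic temperature changes, K
dS_BC, dS_EC = 5.4, 5.0                # entropy-change magnitudes, J K^-1 kg^-1

steps = [
    ("(2) apply p adiabatically  (T phase -> R phase)", +abs(dT_BC), 0.0),
    ("(3) release heat to the ambient",                  None,       -T_amb * dS_BC),
    ("(4) apply E adiabatically  (R phase -> T phase)", -abs(dT_EC), 0.0),
    ("(5) absorb heat from the cold load",               None,       +T_amb * dS_EC),
    ("(6) remove E adiabatically (T phase -> R phase)", +abs(dT_EC), 0.0),
    ("(7) release heat to the ambient",                  None,       -T_amb * dS_EC),
    ("(8) release p adiabatically (R phase -> T phase)", -abs(dT_BC), 0.0),
    ("    absorb heat from the cold load",               None,       +T_amb * dS_BC),
]

T = T_amb
for name, dT, q in steps:
    T = T + dT if dT is not None else T_amb        # isothermal steps restore T_amb
    print(f"{name:50s} T = {T:6.1f} K   q = {q:+8.1f} J/kg")

print(f"heat extracted from the load per full cycle ~ {T_amb * (dS_BC + dS_EC):.0f} J/kg")
```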
Figure 5: **Sketch of the proposed \(p\)-\(\mathcal{E}\) multicaloric cycle for enhancement of the electrocaloric strength.** (1) The multiferroic compound BFCO is at equilibrium in the \(\mathcal{T}\) phase at temperature \(T\). (2) Hydrostatic pressure is adiabatically applied on BFCO so that it transforms into the \(\mathcal{R}\) phase and experiences a temperature increase of \(\Delta T_{\text{BC}}\). (3) Heat, \(\delta Q_{\text{BC}}\), is released to the ambient and the initial temperature is restored; compressed BFCO remains in the \(\mathcal{R}\) phase. (4) An electric field is adiabatically applied on compressed BFCO so that it transforms into the \(\mathcal{T}\) phase and experiences a temperature decrease of \(|\Delta T_{\text{EC}}|\). (5) Heat, \(\delta Q_{\text{EC}}\), is absorbed by the system and the initial temperature is restored; compressed and electrically biased BFCO remains in the \(\mathcal{T}\) phase. (6) The electric field is adiabatically removed from compressed BFCO, thus it transforms into the \(\mathcal{R}\) phase and experiences a temperature increase of \(|\Delta T_{\text{EC}}|\). (7) Heat, \(\delta Q_{\text{EC}}\), is released to the ambient and the initial temperature is restored; compressed BFCO remains in the \(\mathcal{R}\) phase. The state reached in this step is equivalent to that in step (3), thus one can repeatedly run the electrocaloric subcycle (3)-(4)-(5)-(6) entailing application and removal of an electric bias under fixed hydrostatic pressure (dashed lines). (8) Hydrostatic pressure is adiabatically released from BFCO so that it transforms into the \(\mathcal{T}\) phase and experiences a temperature decrease of \(\Delta T_{\text{BC}}\). Heat, \(\delta Q_{\text{BC}}\), is absorbed by the system and the starting temperature is restored, realizing an entire multicaloric (1)-(8) cycle.
## Discussion
Table 1 summarizes some representative materials for which EC effects occurring at or near room temperature have been experimentally measured and reported in the literature. The selected compounds belong to three different families of ferroelectric materials, namely, oxides (e.g., HfO\({}_{2}\) and BaTiO\({}_{3}\)), hybrid organic-inorganic perovskites ([(CH\({}_{3}\))\({}_{2}\)CHCH\({}_{2}\)NH\({}_{3}\)]\({}_{2}\)PbCl\({}_{4}\)) and polymers (Terpolymer). In terms of largest \(|\Delta T_{\rm EC}|\), the oxides Y-HfO\({}_{2}\) [5] and BNBT-BCZT [9] and the elastomer Terpolymer/PMN-PT [30] emerge as the most promising since they display colossal values of 20-30 K. Nevertheless, these record materials require quite large electric fields to realize their full EC potential (\(\mathcal{E}_{c}\sim 10^{3}\) kV cm\({}^{-1}\)), hence, without exception, their associated electrocaloric strengths turn out to be quite modest, namely, \(\Lambda_{\rm EC}\sim 0.01\) K cm kV\({}^{-1}\).
Ferroelectric materials exhibiting moderate or even small \(|\Delta T_{\rm EC}|\) but attained under smaller electric fields (\(\mathcal{E}_{c}\sim 10\) kV cm\({}^{-1}\)), on the other hand, become the clear winners in terms of largest \(\Lambda_{\rm EC}\). For instance, the archetypal perovskite oxide BaTiO\({}_{3}\) renders an adiabatic temperature change of roughly 1 K driven by a minute electric field of 4 kV cm\({}^{-1}\), thus leading to a huge electrocaloric strength of 0.23 K cm kV\({}^{-1}\)[2; 3]. Likewise, the hybrid organic-inorganic perovskite [(CH\({}_{3}\))\({}_{2}\)CHCH\({}_{2}\)NH\({}_{3}\)]\({}_{2}\)PbCl\({}_{4}\) holds the record \(\Lambda_{\rm EC}\) value of 0.37 K cm kV\({}^{-1}\), which results from a small electric field of 30 kV cm\({}^{-1}\) and an adiabatic temperature change of 11.1 K [10]. It is worth noting that all these figures correspond to experimental data.
Table 1 also includes the EC results that we have theoretically estimated in this study for pressurized BFCO\({}_{0.5}\) at room temperature. According to our QHA-DFT calculations, compressed multiferroic BFCO solid solutions have the potential to surpass all previously known EC materials in terms of largest \(\Lambda_{\rm EC}\). In particular, we predict an outstanding electrocaloric strength of 2.18 K cm kV\({}^{-1}\) that arises from an adiabatic temperature change of 4.8 K and an electric bias of \(\approx 2\) kV cm\({}^{-1}\). This theoretically estimated \(\Lambda_{\rm EC}\) value is from one to two orders of magnitude larger than those experimentally measured in uncompressed ferroelectrics. The key to achieving such a colossal figure is to employ an ancillary field, in our case hydrostatic pressure, to bring the system to the verge of a ferroelectric phase transition so that it is possible to drive it with a minuscule electric field.
In the specific case considered here, the pressure required to achieve the colossal \(\Lambda_{\rm EC}\) value of 2.18 K cm kV\({}^{-1}\) is higher than 1 GPa. Obviously, this compression is too large to be considered for practical applications. Nevertheless, as argued at the beginning of the Results section, it is possible to significantly reduce the size of this ancillary pressure to the order of 0.1 GPa by decreasing the relative content of cobalt ions down to the critical composition of \(\approx 0.25\). Moreover, the \(\Lambda_{\rm EC}\) enhancement approach proposed in this study, and theoretically demonstrated for BFCO\({}_{0.5}\), should in principle be generalizable to many other well-known EC materials since most of them are responsive to pressure as well (even though the magnitude of the resulting BC effects may be quite small in comparison to those achieved in state-of-the-art barocaloric materials). Take the archetypal ferroelectric compound BaTiO\({}_{3}\) as an example. The ferro- to paraelectric phase transition temperature in this material can be effectively shifted with pressure, namely, \(dT_{t}/dp\approx-25\) K GPa\({}^{-1}\) [34], so its room-temperature EC performance could potentially be improved with our proposed strategy. Finally, we note that recent developments in the synthesis of ferroelectric membranes and thin films may also allow for the enhancement of the \(\Lambda_{\rm EC}\) figure-of-merit by combining electric fields with other types of mechanical
| Material | \(T\) (K) | \(\varepsilon_{c}\) (kV cm\({}^{-1}\)) | \(\Delta T_{\rm EC}\) (K) | \(|\Delta T_{\rm EC}|/\varepsilon_{c}\) (K cm kV\({}^{-1}\)) | References |
| --- | --- | --- | --- | --- | --- |
| Y-HfO\({}_{2}\) | 358 | 3500 | 24.8 | 0.01 | [5] |
| 0.93PMN-0.07PT | 298 | 723 | 9.0 | 0.01 | [29] |
| (NH\({}_{4}\))\({}_{2}\)SO\({}_{4}\) | 220 | 400 | 4.5 | 0.01 | [33] |
| Terpolymer/PMN-PT | 303 | 1800 | 31.0 | 0.02 | [30] |
| Ba\({}_{0.65}\)Sr\({}_{0.35}\)TiO\({}_{3}\) | 293 | 130 | 3.1 | 0.02 | [32] |
| BaZrO\({}_{2}\)Ti\({}_{0.5}\)O\({}_{3}\) | 313 | 145 | 4.5 | 0.03 | [31] |
| BNBT-BCZT | 370 | 620 | 23.0 | 0.04 | [9] |
| PbZr\({}_{0.46}\)Sn\({}_{0.46}\)Ti\({}_{0.08}\)O\({}_{3}\) | 317 | 30 | 1.6 | 0.05 | [4] |
| BaTiO\({}_{3}\) | 400 | 4.0 | 0.9 | 0.23 | [2; 3] |
| [(CH\({}_{3}\))\({}_{2}\)CHCH\({}_{2}\)NH\({}_{3}\)]\({}_{2}\)PbCl\({}_{4}\) | 302 | 30 | 11.1 | 0.37 | [10] |
| BFCO\({}_{0.5}\) (pressurized) | 300 | 2.2 | -4.8 | 2.18 | This work |

Table 1: **Electrocaloric performance of several ferroelectric materials at or near room temperature.** The electrocaloric strength of compressed BFCO\({}_{0.5}\) is significantly larger than those of other uncompressed ferroelectric compounds.
stimuli like uniaxial [35] and biaxial [36] stress.
In conclusion, we have proposed a new strategy for the enhancement of the electrocaloric strength of ferroelectric materials that consists in concertedly applying pressure and electric fields. We have theoretically demonstrated this concept for multifunctional BFCO solid solutions, an intriguing family of compounds displaying a discontinuous phase transition between two multiferroic states. In particular, for compressed BFCO\({}_{0.5}\) we estimated a record \(\Lambda_{\rm EC}\) parameter of 2.18 K cm kV\({}^{-1}\) at room temperature, resulting from an adiabatic temperature change of 4.8 K and an electric bias of \(\approx\) 2 kV cm\({}^{-1}\). This electrocaloric strength is colossal, as it is from one to two orders of magnitude larger than those experimentally measured in uncompressed ferroelectrics. The demonstrated \(\Lambda_{\rm EC}\) enhancement strategy can be applied to other types of ferroelectric materials, not necessarily magnetic, and its mechanical component can be modified at convenience. Thus, the combination of multiple stimuli opens new horizons in the field of caloric materials and solid-state refrigeration by expanding the design of possible cooling cycles and boosting current caloric performances. We hope that the present theoretical study will motivate new experimental work on the engineering of original and environmentally friendly solid-state cooling devices.
## Methods
Spin-polarized DFT calculations were performed with the generalized gradient approximation proposed by Perdew, Burke and Ernzerhof (PBE) as implemented in the VASP package [37; 38]. The "Hubbard-\(U\)" scheme due to Dudarev _et al._ was employed in the PBE calculations to better treat the Co (Fe) \(3d\) electrons, adopting a \(U\) value of 6 (4) eV [14; 15; 16; 17; 39]. The "projector augmented-wave" method [40] was used to represent the ionic cores, considering the following electronic states as valence: Co \(4s^{1}3d^{8}\), Fe \(3p^{6}4s^{1}3d^{7}\), Bi \(6s^{2}5d^{10}6p^{3}\), and O \(2s^{2}2p^{4}\). An energy cut-off of 800 eV and a \(\Gamma\)-centered \({\bf k}\)-point grid of \(4\times 6\times 6\) were employed for a \(2\times\sqrt{2}\times\sqrt{2}\) simulation cell containing 20 atoms [41], thus obtaining zero-temperature energies converged to within 0.5 meV/f.u. Geometry relaxations were performed for an atomic force threshold of 0.005 eV\(\cdot\)Å\({}^{-1}\). Electric polarizations were accurately estimated with the hybrid HSE06 functional [42] and the Berry phase formalism [43; 44; 45].
_Ab initio_ free energies were estimated within the quasi-harmonic approximation (QHA) [15; 18; 46] as a function of \(p\) and \(T\). Phonon frequencies were calculated with the small displacement method [46]. The following technical parameters provided QHA free energies converged to within 5 meV per formula unit: 160-atom supercells, atomic displacements of 0.01 Å, and q-point grids of \(16\times 16\times 16\) for integration within the first Brillouin zone. The effects of chemical disorder were addressed by generating all possible atomic Co-Fe and magnetic spin arrangements (ferromagnetic, FM, and antiferromagnetic, AFM, of type A, C, and G) for a \(2\times 2\sqrt{2}\times\sqrt{2}\) supercell containing 40 atoms. Quasi-harmonic free energies were calculated only for the lowest-energy configurations. Our spin-polarized DFT calculations were performed for bulk BiFe\({}_{0.5}\)Co\({}_{0.5}\)O\({}_{3}\).
Within the QHA [15; 18; 46], the Gibbs free energy of a given crystal phase, \(G_{\rm harm}\), is expressed as:
\[G_{\rm harm}(p,T)=E(p)+pV(p,T)+F_{\rm harm}(p,T)\, \tag{1}\]
where \(E\) is the static energy of the system (i.e., as directly obtained from zero-temperature DFT calculations), \(p\) the pressure, \(V\) the volume, and \(F_{\rm harm}\) the lattice Helmholtz free energy. (The dependence of the different energy terms on \(p\) and \(T\) has been explicitly noted.) For given \(V\) and \(T\), \(F_{\rm harm}\) can be determined with the formula:
\[F_{\rm harm}(V,T)=\frac{1}{N_{q}}\,k_{B}T\sum_{{\bf q}s}\ln\left[2\sinh\left(\frac{\hbar\omega_{{\bf q}s}}{2k_{B}T}\right)\right]\, \tag{2}\]
where \(\omega_{{\bf q}s}(V)\) are the phonon frequencies obtained at the reciprocal lattice vector \({\bf q}\) and phonon branch \(s\), \(N_{q}\) the total number of wave vectors used for integration in the Brillouin zone, and \(k_{B}\) the Boltzmann constant. Meanwhile, the hydrostatic pressure \(p\) is calculated via the expression:
\[p(V,T)=-\frac{\partial\left[E(V)+F_{\rm harm}(V,T)\right]}{\partial V}\, \tag{3}\]
which allows \(V(p,T)\) to be determined numerically. Thus, by performing \(E\) and \(\omega_{{\bf q}s}\) DFT calculations for a set of \(V\) points, over which interpolation is applied to describe \(F_{\rm harm}\) and \(p\) continuously, and by using Eqs. (1)-(3), it is possible to estimate \(G_{\rm harm}(p,T)\). To quantify the temperature at which the \({\cal T}\leftrightarrow{\cal R}\) phase transition occurs at a given pressure, \(T_{t}\), the condition \(\Delta G_{\rm harm}(p,T_{t})\equiv G_{\rm harm}^{\cal T}(p,T_{t})-G_{\rm harm}^{\cal R}(p,T_{t})=0\) was employed.
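The workflow of Eqs. (1)-(3) can be condensed into a few lines of numerical code. The sketch below uses a toy model for each phase (a parabolic \(E(V)\) and a single volume-dependent Einstein mode standing in for the full phonon spectrum); all numerical parameters are illustrative assumptions rather than the DFT values of BFCO\({}_{0.5}\). Only the structure of the calculation, i.e., building \(F_{\rm harm}(V,T)\), obtaining \(p(V,T)\) by differentiation, inverting it to \(V(p,T)\) and locating \(T_{t}\) from \(\Delta G_{\rm harm}=0\), is meant to mirror the procedure described above.

```python
import numpy as np

kB = 8.617333262e-5            # Boltzmann constant (eV / K)
EV_A3_TO_GPA = 160.21766       # 1 eV / Angstrom^3 expressed in GPa

def gibbs_of_T(E0, V0, k_el, theta0, grun, p_gpa, T_grid, n_modes=15):
    """Toy QHA Gibbs free energy G(p, T) of one phase following Eqs. (1)-(3):
    parabolic E(V) plus n_modes Einstein oscillators whose temperature obeys a
    Grueneisen law theta(V) = theta0 * (V0 / V)**grun."""
    V = np.linspace(0.85 * V0, 1.10 * V0, 400)            # volume grid (A^3 / f.u.)
    E = E0 + 0.5 * k_el * (V - V0) ** 2 / V0              # static energy (eV / f.u.)
    theta = theta0 * (V0 / V) ** grun                     # Einstein temperatures (K)
    p_target = p_gpa / EV_A3_TO_GPA                       # pressure in eV / A^3
    G = []
    for T in T_grid:
        F = n_modes * kB * T * np.log(2.0 * np.sinh(theta / (2.0 * T)))  # Eq. (2)
        Ftot = E + F
        p = -np.gradient(Ftot, V)                         # Eq. (3)
        V_star = np.interp(p_target, p[::-1], V[::-1])    # invert p(V, T) -> V(p, T)
        G.append(np.interp(V_star, V, Ftot) + p_target * V_star)        # Eq. (1)
    return np.array(G)

# Toy parameters loosely inspired by the two BFCO polymorphs (NOT the DFT values):
# the tetragonal phase has the lower static energy and the larger volume, the
# rhombohedral phase the stiffer (higher) Einstein temperature.
T_grid = np.linspace(100.0, 400.0, 301)
G_T = gibbs_of_T(E0=0.00, V0=68.0, k_el=1.0, theta0=320.0, grun=1.5, p_gpa=1.25, T_grid=T_grid)
G_R = gibbs_of_T(E0=0.01, V0=63.0, k_el=1.0, theta0=350.0, grun=1.5, p_gpa=1.25, T_grid=T_grid)

dG = G_T - G_R                                            # Delta G(p, T); T_t where it vanishes
crossings = np.where(np.sign(dG[:-1]) != np.sign(dG[1:]))[0]
if crossings.size:
    i = crossings[0]
    T_t = np.interp(0.0, [dG[i + 1], dG[i]], [T_grid[i + 1], T_grid[i]])
    print(f"toy transition temperature at 1.25 GPa: T_t ~ {T_t:.0f} K")
else:
    print("no T <-> R crossing found in the scanned temperature window")
```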
Likewise, the entropy of the crystal can be obtained through the expression:
\[S(V,T)=-\left(\frac{\partial F_{\rm harm}}{\partial T}\right)_{V}\, \tag{4}\]
and the heat capacity as:
\[C(V,T)=k_{B}\sum_{{\bf q}s}\left(\frac{\hbar\omega_{{\bf q}s}}{k_{B}T}\right)^{2}\frac{\exp\left(\hbar\omega_{{\bf q}s}/k_{B}T\right)}{\left[\exp\left(\hbar\omega_{{\bf q}s}/k_{B}T\right)-1\right]^{2}}\. \tag{5}\]
Through the knowledge of \(V(p,T)\) and Eqs. (2)-(5), it is then possible to determine \(S(p,T)\) and \(C(p,T)\).
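For completeness, the snippet below evaluates Eqs. (4)-(5) for a toy phonon spectrum, obtaining the entropy from a finite-difference temperature derivative of \(F_{\rm harm}\) and the heat capacity from the analytic expression; the mode temperatures are placeholders, not the computed BFCO phonon frequencies.

```python
import numpy as np

kB = 8.617333262e-5                        # eV / K
theta = np.linspace(100.0, 800.0, 60)      # toy spectrum of mode temperatures hbar*omega/kB (K)

def F_harm(T):
    return kB * T * np.sum(np.log(2.0 * np.sinh(theta / (2.0 * T))))    # Eq. (2)

def entropy(T, dT=0.1):
    return -(F_harm(T + dT) - F_harm(T - dT)) / (2.0 * dT)              # Eq. (4)

def heat_capacity(T):
    x = theta / T                                                        # hbar*omega_qs / (kB*T)
    return kB * np.sum(x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2)        # Eq. (5)

for T in (100.0, 300.0, 1000.0):
    print(f"T = {T:6.0f} K   S = {entropy(T):.3e} eV/K   C = {heat_capacity(T):.3e} eV/K")
print(f"classical (Dulong-Petit) limit n_modes * kB = {theta.size * kB:.3e} eV/K")
```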
In the absence of electric fields, the isothermal entropy change associated with barocaloric effects was approximately estimated with the Clausius-Clapeyron (CC) method as [26]:
\[\Delta S_{\rm BC}(p,T)=\Delta V\cdot\frac{dp_{t}}{dT}\, \tag{6}\]
where \(\Delta V\) is the change in volume occurring during the phase transition and \(p_{t}(T)\) the critical pressure. Likewise, the corresponding adiabatic temperature change can be approximated with the expression [47]:
\[\Delta T_{\rm BC}(p,T)=-\frac{T}{C}\cdot\Delta S_{\rm BC}(p,T)\, \tag{7}\]
where \(C(T)\) is the heat capacity of the system at zero pressure.
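The order of magnitude of the barocaloric descriptors discussed in the Results follows directly from Eqs. (6)-(7). In the sketch below, the volume per formula unit, the relative volume change, \(dp_{t}/dT\), the molar mass and a Dulong-Petit heat capacity are rough assumed inputs consistent with the values quoted in the text; they are not the actual QHA data.

```python
N_A = 6.02214076e23            # 1/mol
R   = 8.314462618              # J / (K mol)

V_fu    = 62e-30               # volume per formula unit (m^3), assumed
dV_rel  = -0.08                # relative volume change at the T -> R transition
dpt_dT  = 0.6e6                # dp_t / dT (Pa / K), of the order of 1e-3 GPa/K
T       = 300.0                # K
C_molar = 3 * 5 * R            # Dulong-Petit heat capacity for 5 atoms/f.u. (J / K / mol)
M_molar = 0.3144               # molar mass of BiFe0.5Co0.5O3 (kg / mol), assumed

dV_molar = dV_rel * V_fu * N_A           # transition volume change (m^3 / mol)
dS_BC = dV_molar * dpt_dT                # Eq. (6), J / (K mol)
dT_BC = -(T / C_molar) * dS_BC           # Eq. (7), K

print(f"dS_BC ~ {dS_BC:+.2f} J/K/mol  ({dS_BC / M_molar:+.1f} J/K/kg)")
print(f"dT_BC ~ {dT_BC:+.2f} K")
```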
In the presence of electric fields, and assuming zero pressure, the thermodynamic potential that describes a particular phase is the Gibbs free energy defined as \(G_{\rm harm}=E-\mathbf{\mathcal{E}}\cdot\mathbf{P}+F_{\rm harm}\), where \(E\) and \(F_{\rm harm}\) are the same terms that appear in Eq. (1), \(\mathbf{P}\) the electric polarization and \(\mathbf{\mathcal{E}}\) the applied electric field. In this case, the thermodynamic condition that determines an \(\mathcal{E}\)-induced phase transition is \(G_{\rm harm}^{\mathcal{T}}(T,\mathcal{E}_{c})=G_{\rm harm}^{\mathcal{R}}(T,\mathcal{E}_{c})\). The value of the corresponding critical electric field can then be estimated as:
\[\mathcal{E}_{c}(T)=\frac{\Delta\left(E+F_{\rm harm}(T)\right)}{\Delta P(T)}\, \tag{8}\]
where \(\Delta\left(E+F_{\rm harm}\right)\) is the Helmholtz free energy difference between the two phases, and \(\Delta P\) the resulting change in the electric polarization along the electric field direction. For \(p\neq 0\) conditions, an additional \(p\Delta V\) term should appear in the right-hand side of Eq. (8).
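A numerical illustration of Eq. (8), with the \(p\Delta V\) work term folded into the free-energy cost, is given below. The Gibbs free-energy differences, the cell volume and the polarization jump are assumed round numbers chosen only to reproduce the order of magnitude of the critical fields quoted in the Results; they are not the computed QHA values.

```python
e_meV = 1.602176634e-22        # 1 meV expressed in joules
V_fu  = 62e-30                 # volume per formula unit (m^3), assumed
dP    = 1.0                    # polarization jump along [001] (C/m^2, i.e. ~100 uC/cm^2)

# assumed zero-field Gibbs free-energy cost of the R -> T switch at p = 1.25 GPa
dG_meV = {200.0: 1.7, 250.0: 0.8, 300.0: 0.08}    # meV per formula unit

for T, dG in dG_meV.items():
    E_c = dG * e_meV / (dP * V_fu)                # critical field (V/m), cf. Eq. (8)
    print(f"T = {T:5.1f} K   E_c ~ {E_c / 1e5:6.1f} kV/cm")
```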
Once the value of \(\mathcal{E}_{c}\) and its dependence on temperature are determined through Eq. (8), the isothermal entropy change associated with electrocaloric effects can be approximately estimated with the CC method as [17]:
\[\Delta S_{\rm EC}(\mathcal{E},T)=-\Delta P\cdot\frac{d\mathcal{E}_{c}}{dT}. \tag{9}\]
Likewise, the corresponding adiabatic temperature change was approximated with the expression [47]:
\[\Delta T_{\rm EC}(\mathcal{E},T)=-\frac{T}{C}\cdot\Delta S_{\rm EC}(\mathcal{E },T)\, \tag{10}\]
where \(C(T)\) is the heat capacity of the system at zero electric field.
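Eqs. (9)-(10) and the electrocaloric strength can be evaluated with the same indirect logic. In the sketch below, the \(\mathcal{E}_{c}(T)\) end points, polarization jump, mass density and heat capacity are rough assumed values consistent with the magnitudes discussed in the main text, so the printed numbers only reproduce the order of magnitude of the reported \(\Delta S_{\rm EC}\), \(\Delta T_{\rm EC}\) and \(\Lambda_{\rm EC}\).

```python
import numpy as np

T_pts = np.array([200.0, 300.0])           # temperatures (K)
E_c   = np.array([43.0, 2.0]) * 1e5        # critical fields (V/m): 43 -> 2 kV/cm
dP    = 1.0                                # polarization jump (C/m^2), assumed
rho   = 8.4e3                              # mass density (kg/m^3), assumed
C_kg  = 400.0                              # heat capacity (J / K / kg), ~Dulong-Petit

dEc_dT = (E_c[1] - E_c[0]) / (T_pts[1] - T_pts[0])    # finite-difference dE_c/dT, V / (m K)

T = 300.0
dS_EC = -dP * dEc_dT / rho                 # Eq. (9), per unit mass (J / K / kg)
dT_EC = -(T / C_kg) * dS_EC                # Eq. (10), K
strength = abs(dT_EC) / (E_c[1] / 1e5)     # electrocaloric strength, K cm / kV

print(f"dS_EC ~ {dS_EC:+.1f} J/K/kg   dT_EC ~ {dT_EC:+.1f} K   Lambda_EC ~ {strength:.1f} K cm/kV")
```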
|
2306.16083 | UnitSpeech: Speaker-adaptive Speech Synthesis with Untranscribed Data | We propose UnitSpeech, a speaker-adaptive speech synthesis method that
fine-tunes a diffusion-based text-to-speech (TTS) model using minimal
untranscribed data. To achieve this, we use the self-supervised unit
representation as a pseudo transcript and integrate the unit encoder into the
pre-trained TTS model. We train the unit encoder to provide speech content to
the diffusion-based decoder and then fine-tune the decoder for speaker
adaptation to the reference speaker using a single $<$unit, speech$>$ pair.
UnitSpeech performs speech synthesis tasks such as TTS and voice conversion
(VC) in a personalized manner without requiring model re-training for each
task. UnitSpeech achieves comparable and superior results on personalized TTS
and any-to-any VC tasks compared to previous baselines. Our model also shows
widespread adaptive performance on real-world data and other tasks that use a
unit sequence as input. | Heeseung Kim, Sungwon Kim, Jiheum Yeom, Sungroh Yoon | 2023-06-28T10:30:39Z | http://arxiv.org/abs/2306.16083v1 | # UnitSpeech: Speaker-adaptive Speech Synthesis with Untranscribed Data
###### Abstract
We propose UnitSpeech, a speaker-adaptive speech synthesis method that fine-tunes a diffusion-based text-to-speech (TTS) model using minimal untranscribed data. To achieve this, we use the self-supervised unit representation as a pseudo transcript and integrate the unit encoder into the pre-trained TTS model. We train the unit encoder to provide speech content to the diffusion-based decoder and then fine-tune the decoder for speaker adaptation to the reference speaker using a single \(<\)unit, speech\(>\) pair. UnitSpeech performs speech synthesis tasks such as TTS and voice conversion (VC) in a personalized manner without requiring model re-training for each task. UnitSpeech achieves comparable and superior results on personalized TTS and any-to-any VC tasks compared to previous baselines. Our model also shows widespread adaptive performance on real-world data and other tasks that use a unit sequence as input1.
Footnote 1: Code: [https://github.com/gmltrd789/UnitSpeech](https://github.com/gmltrd789/UnitSpeech)
\({}^{1}\) Data Science and AI Lab, ECE, Seoul National University, Seoul 08826, Korea
\({}^{2}\) Interdisciplinary Program in AI, Seoul National University, Seoul 08826, Korea
{gmltmd789, ksw0306, quilava1234, sryoon}@snu.ac.kr
**Index Terms**: speaker adaptation, text-to-speech, voice conversion, diffusion model, self-supervised unit representation
## 1 Introduction
As text-to-speech (TTS) models have shown significant advances in recent years [1, 2], there have also been works on adaptive TTS models which generate personalized voices using reference speech of the target speaker [3, 4, 5, 6, 7]. Adaptive TTS models mostly use a pre-trained multi-speaker TTS model and utilize methods such as using target speaker embedding [3, 4, 5] or fine-tuning the model with a small amount of data [3, 6, 7]. While the former allows easier adaptation compared to the latter, it suffers from relatively low speaker similarity.
Most fine-tuning-based approaches require a small amount of target speaker speech data and may also require a transcript paired with the corresponding speech. AdaSpeech 2 [7] proposes a pluggable mel-spectrogram encoder (mel encoder) to fine-tune the pre-trained TTS model with untranscribed speech. Since the mel encoder is introduced to replace the text encoder during fine-tuning, AdaSpeech 2 does not require a transcript when fine-tuning the decoder on the target speaker. However, its results are bounded only to adaptive TTS and show limitations such as requiring a relatively large amount of target speaker data due to its deterministic feed-forward decoder.
Recent works on diffusion models [8, 9] show powerful results on text-to-image generation [10] and personalization with only a few images [11, 12], and such trends are being extended to speech synthesis [13, 14] and adaptive TTS [15, 16]. Guided-TTS 2 leverages the fine-tuning capability of the diffusion model and the classifier guidance technique to build high-quality adaptive TTS with only ten seconds of untranscribed speech. However, Guided-TTS 2 requires training of its unconditional generative model, which results in more challenging and time-consuming training compared to typical TTS models.
In this work, we propose UnitSpeech, which performs personalized speech synthesis by fine-tuning a pre-trained diffusion-based TTS model on a small amount of untranscribed speech. We use the multi-speaker Grad-TTS as the backbone TTS model for speaker adaptation, which by itself requires transcribed data for fine-tuning. Like AdaSpeech 2, we introduce a new encoder model to provide speech content to the diffusion-based decoder without a transcript. While AdaSpeech 2 directly uses the mel-spectrogram as the input of the encoder, we use the self-supervised unit representation [17], which contains speech content disentangled from the speaker identity, to better replace the text encoder. The newly introduced encoder, named the unit encoder, is trained to provide the speech content to the diffusion-based decoder using the input unit. For speaker adaptation, we fine-tune the pre-trained diffusion model conditioned on the unit encoder output with a \(<\)unit, speech\(>\) pair of the target speaker. By customizing the diffusion decoder to the target speaker, UnitSpeech is capable of performing multiple adaptive speech synthesis tasks that receive transcript or unit as input.
We show that UnitSpeech is comparable to or outperforms baseline models on adaptive TTS and any-to-any VC tasks. We further ablate how each factor of UnitSpeech affects the pronunciation and speaker similarity for adaptive speech synthesis. In addition to samples for evaluation, we provide samples for a wide range of scenarios, including various real-world reference data from YouTube and other tasks using units on the demo page2.
Footnote 2: Demo: [https://unitspeech.github.io/](https://unitspeech.github.io/)
Our contributions are as follows:
* To the best of our knowledge, this is the first work that introduces unit representation to utilize untranscribed speech for speaker adaptation.
* We propose a pluggable unit encoder for pre-trained TTS model, enabling fine-tuning using untranscribed speech.
* We introduce a simple guidance technique to improve pronunciation accuracy in adaptive speech synthesis.
## 2 Method
Our aim is the personalization of existing diffusion-based TTS models using only untranscribed data. To personalize a diffusion model [8, 9] without any transcript, we introduce a unit encoder that learns to encode speech content for replacing the text encoder during fine-tuning. We use the trained unit encoder to adapt the pre-trained TTS model to the target speaker on various tasks. We briefly explain the pre-trained TTS model in Section 2.1, explain methods used for unit extraction and unit encoder
training in Section 2.2, and show how the trained UnitSpeech is used to perform various tasks in Section 2.3.
### Diffusion-based Text-to-Speech Model
Following the success of Grad-TTS [14] in single-speaker TTS, we adopt a multi-speaker Grad-TTS as our pre-trained diffusion-based TTS model. It consists of a text encoder, a duration predictor, and a diffusion-based decoder, just like Grad-TTS, and we additionally provide speaker information for multi-speaker TTS. To provide speaker information, we use a speaker embedding extracted from a speaker encoder.
The diffusion-based TTS model defines a forward process that gradually transforms mel-spectrogram \(X_{0}\) into Gaussian noise \(z=X_{T}\sim N(0,I)\), and generates data by reversing the forward process. While Grad-TTS defines the prior distribution using mel-spectrogram-aligned text encoder output, we use the standard normal distribution as the prior distribution. The forward process of the diffusion model is as follows:
\[dX_{t}=-\frac{1}{2}X_{t}\beta_{t}dt+\sqrt{\beta_{t}}dW_{t},\quad t\in[0,T], \tag{1}\]
where \(\beta_{t}\) is a pre-defined noise schedule and \(W_{t}\) denotes the Wiener process. We set \(T\) to 1 as in [14].
The pre-trained diffusion-based decoder predicts the score which is required when sampling through the reverse process. For pre-training, the data \(X_{0}\) is corrupted into noisy data \(X_{t}=\sqrt{1-\lambda_{t}}X_{0}+\sqrt{\lambda_{t}}\epsilon_{t}\) through the forward process, and the decoder learns to estimate the conditional score given the aligned text encoder output \(c_{y}\) and the speaker embedding \(e_{S}\) with the training objective in Eq. 2.
\[L_{grad}=\mathbb{E}_{t,X_{0},\epsilon_{t}}\left[\|\sqrt{\lambda_{t}}s_{\theta}(X_{t},t|c_{y},e_{S})+\epsilon_{t}\|_{2}^{2}\right], \tag{2}\]
where \(\lambda_{t}=1-\mathrm{e}^{-\int_{0}^{t}\beta_{s}ds}\), and \(t\in[0,1]\). Using the estimated score \(s_{\theta}\), the output of the diffusion-based decoder, the model can generate mel-spectrogram \(X_{0}\) given the transcript and speaker embedding using the discretized reverse process which is as follows:
\[X_{t-\frac{1}{N}}=X_{t}+\frac{\beta_{t}}{N}(\frac{1}{2}X_{t}+s_{\theta}(X_{t },t|c_{y},e_{S}))+\sqrt{\frac{\beta_{t}}{N}}z_{t}, \tag{3}\]
where \(N\) denotes the number of sampling steps.
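To make the notation concrete, the NumPy sketch below implements the \(\lambda_{t}\) schedule, the forward corruption, the objective of Eq. 2 and one step of the discretized reverse process of Eq. 3. The linear \(\beta_{t}\) schedule values, the array shapes and the `score_fn` placeholder (a dummy standing in for the trained decoder \(s_{\theta}\)) are illustrative assumptions, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_0, beta_1 = 0.05, 20.0                        # linear noise schedule (assumed values)

def beta(t):
    return beta_0 + (beta_1 - beta_0) * t

def lam(t):                                        # lambda_t = 1 - exp(-int_0^t beta_s ds)
    return 1.0 - np.exp(-(beta_0 * t + 0.5 * (beta_1 - beta_0) * t ** 2))

def corrupt(x0, t):                                # forward-process sample X_t
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(1.0 - lam(t)) * x0 + np.sqrt(lam(t)) * eps, eps

def grad_loss(score_fn, x0, t):                    # Eq. 2, averaged over elements
    xt, eps = corrupt(x0, t)
    return np.mean((np.sqrt(lam(t)) * score_fn(xt, t) + eps) ** 2)

def reverse_step(score_fn, xt, t, N):              # Eq. 3
    z = rng.standard_normal(xt.shape)
    return xt + (beta(t) / N) * (0.5 * xt + score_fn(xt, t)) + np.sqrt(beta(t) / N) * z

score_fn = lambda x, t: -x                         # dummy score (that of N(0, I)), placeholder
X0 = rng.standard_normal((80, 100))                # stand-in mel-spectrogram (bins x frames)
print("training loss at t = 0.5:", round(grad_loss(score_fn, X0, 0.5), 3))

x = rng.standard_normal(X0.shape)                  # X_T ~ N(0, I)
N = 50
for i in range(N, 0, -1):                          # t = 1, 1 - 1/N, ..., 1/N
    x = reverse_step(score_fn, x, i / N, N)
print("generated sample shape:", x.shape)
```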
In addition to \(L_{grad}\) in Eq. 2, the pre-trained TTS model aligns the output of the text encoder with the mel-spectrogram using monotonic alignment search (MAS) proposed in Glow-TTS [2] and minimizes the distance between the aligned text encoder output \(c_{y}\) and the mel-spectrogram \(X_{0}\) using the encoder loss \(L_{enc}=MSE(c_{y},X_{0})\). To disentangle the text encoder output from speaker identity, we minimize the distance between the speaker-independent representation \(c_{y}\) and \(X_{0}\) without providing the speaker embedding \(e_{S}\) to the text encoder.
### Unit Encoder Training
While we aim to fine-tune the pre-trained TTS model for high-quality adaptation given minimal amounts of untranscribed reference data, the pre-trained TTS model alone is structurally unable to do so. Our pre-trained TTS model can only be trained with transcribed speech data, whereas the majority of real-world speech data is untranscribed. As a solution to this problem, we combine a unit encoder with the pre-trained TTS model to expand the generation capabilities for adaptation.
The unit encoder is a model identical to the text encoder of the TTS model in both architecture and role. In contrast to the text encoder, which uses transcripts, the unit encoder uses a discretized representation known as a unit, which broadens the model's generation capabilities, enabling adaptation on untranscribed speech. Specifically, a unit is a discretized representation obtained from HuBERT [17], a self-supervised model for speech. The leftmost part of Fig. 1 shows the unit extraction process, where the speech waveform is used as input to HuBERT, and the output representation is discretized by \(K\)-means clustering into unit clusters, resulting in a unit sequence. Note that by setting an appropriate number of clusters, we can constrain the unit to contain mainly the desired speech content. The unit sequence obtained from HuBERT is upsampled to the mel-spectrogram length and then compressed into the unit duration \(d_{u}\) and the squeezed unit sequence \(u\).
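A minimal sketch of this unit-extraction step is shown below. Random arrays stand in for the frame-level HuBERT features and the \(K\)-means codebook (in practice both come from a pre-trained HuBERT model and a \(K\)-means model fitted on its features, e.g., via textless-lib); only the nearest-centroid assignment and the run-length compression into \(u\) and \(d_{u}\) are meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 200, 768                              # number of unit clusters / feature dimension
codebook = rng.standard_normal((K, D))       # K-means centroids (placeholder)
features = rng.standard_normal((120, D))     # frame-level HuBERT features (placeholder)

# 1) assign every frame to its nearest centroid -> frame-level unit sequence
d2 = (features**2).sum(1)[:, None] - 2.0 * features @ codebook.T + (codebook**2).sum(1)[None, :]
units = d2.argmin(axis=1)                    # shape (n_frames,)

# 2) collapse consecutive repeats into the squeezed sequence u and durations d_u
boundaries = np.flatnonzero(np.diff(units)) + 1
groups = np.split(units, boundaries)
u   = np.array([g[0] for g in groups])       # squeezed unit sequence
d_u = np.array([len(g) for g in groups])     # per-unit duration in frames

print("frames:", units.size, "-> squeezed units:", u.size)
assert d_u.sum() == units.size               # durations re-expand to the original length
```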
The center of Fig. 1 shows the training process of the unit encoder. With squeezed unit sequence \(u\) as input, the unit encoder, plugged into the pre-trained TTS model, plays the same role as the text encoder. The unit encoder is trained with the same training objective \(L=L_{grad}+L_{enc}\), only having \(c_{y}\) replaced with \(c_{u}\), an extended unit encoder output using ground-truth duration \(d_{u}\). This results in \(c_{u}\) being placed in the same space as \(c_{y}\), enabling our model to replace the text encoder with
Figure 1: The overall procedure of UnitSpeech.
the unit encoder during fine-tuning. Note that the diffusion decoder is frozen, and only the unit encoder is to be trained.
### Speaker-Adaptive Speech Synthesis
Combining the pre-trained TTS model and the pluggable unit encoder, we are able to perform various speech synthesis tasks in an adaptive fashion by using a single untranscribed speech of the target speaker. Using squeezed unit \(u^{\prime}\) and unit duration \(d_{u^{\prime}}\) extracted from the reference speech as in the previous section, we fine-tune the decoder of the TTS model using the unit encoder. When doing so, the unit encoder is frozen to minimize pronunciation deterioration, and we only train the diffusion decoder using the objective in Eq. 2 with \(c_{y}\) replaced by \(c_{u^{\prime}}\).
Our trained model is capable of synthesizing adaptive speech using either transcript or unit as input. For TTS, we provide \(c_{y}\) as a condition to the fine-tuned decoder to generate personalized speech with respect to the given transcript. When performing tasks using units including voice conversion or speech-to-speech translation, squeezed unit \(u\) and unit duration \(d_{u}\) are extracted from the given source speech using HuBERT. The extracted two are inputted into the unit encoder, which outputs \(c_{u}\), and the adaptive diffusion decoder uses \(c_{u}\) as a condition to generate voice-converted speech.
To further enhance the pronunciation of our model, we leverage a classifier-free guidance method [18] during sampling, which amplifies the degree of conditioning for the target condition using an unconditional score. Classifier-free guidance requires a corresponding unconditional embedding \(e_{\Phi}\) to estimate the unconditional score. Since the encoder loss drives the encoder output space close to mel-spectrogram, we set the \(e_{\Phi}\) to the mel-spectrogram mean of the dataset \(c_{mel}\) instead of training \(e_{\Phi}\) as in other works [10]. The modified score we utilize for classifier-free guidance is as follows:
\[\begin{split}&\hat{s}(X_{t},t|c_{c},e_{S})=s(X_{t},t|c_{c},e_{S})+ \gamma\cdot\alpha_{t},\\ &\alpha_{t}=s(X_{t},t|c_{c},e_{S})-s(X_{t},t|c_{mel},e_{S}).\end{split} \tag{4}\]
\(c_{c}\) here indicates the aligned output of text or unit encoder while \(\gamma\) denotes the gradient scale that determines the amount of provided condition information.
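The guidance rule of Eq. 4 amounts to a one-line combination of two score evaluations, as sketched below with a dummy `score_fn` standing in for the fine-tuned decoder; the array shapes and the zero-valued stand-in for \(c_{mel}\) are placeholders.

```python
import numpy as np

def guided_score(score_fn, x_t, t, c, c_mel, e_s, gamma):
    s_cond   = score_fn(x_t, t, c,     e_s)      # s(X_t, t | c_c,   e_S)
    s_uncond = score_fn(x_t, t, c_mel, e_s)      # s(X_t, t | c_mel, e_S)
    return s_cond + gamma * (s_cond - s_uncond)  # Eq. 4

# toy usage with a placeholder score function
rng = np.random.default_rng(0)
score_fn = lambda x, t, c, e: -(x - c)           # dummy score pulling X_t towards its condition
x_t   = rng.standard_normal((80, 100))           # noisy mel-spectrogram at step t
c     = rng.standard_normal((80, 100))           # aligned text/unit encoder output c_c
c_mel = np.zeros((80, 100))                      # dataset mel mean (placeholder)
s_hat = guided_score(score_fn, x_t, 0.5, c, c_mel, None, gamma=1.0)
print("guided score shape:", s_hat.shape)
```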
## 3 Experiments
### Experimental Setup
#### 3.1.1 Datasets
We use LibriTTS [19] to train the multi-speaker TTS model and the unit encoder. LibriTTS is a TTS dataset consisting of 2,456 different speakers, and we use the entire train subset. For training the speaker encoder, we use VoxCeleb 2 [20], a dataset consisting of 6,112 speakers. To show the unseen speaker adaptation capability of UnitSpeech on TTS, we select 10 speakers and a reference speech for each speaker from the test-clean subset of LibriTTS following YourTTS [3]. For evaluation on any-to-any VC, we randomly choose 10 reference speakers from the test-clean subset of LibriTTS, and randomly select 50 source samples from the test-clean subset. The reference samples are all \(7\sim 32\) seconds long.
#### 3.1.2 Training and Fine-tuning Details
Our pre-trained TTS model shares the same architecture and hyperparameters with Grad-TTS except for the doubled number of channels for multi-speaker modeling. The architecture of the unit encoder is equal to that of the text encoder. We train the TTS model on 4 NVIDIA RTX 8000 GPUs for 1.4M iterations and train the unit encoder for 200K iterations. We use the Adam optimizer [21] with the learning rate \(1e-4\) and batch size 64. The transcript is converted into the phoneme sequence using [22]. When extracting unit sequences, we utilize textless-lib [23]. We also train the speaker encoder on VoxCeleb2 [20] with GE2E [24] loss to extract the speaker embedding \(e_{S}\) of each reference speech. For fine-tuning, we use Adam optimizer [21] with learning rate \(2\cdot 10^{-5}\). We set the number of fine-tuning steps to 500 as a default, which only requires less than a minute on a single NVIDIA RTX 8000 GPU.
#### 3.1.3 Evaluation
To evaluate the performance on adaptive TTS, we compare UnitSpeech with Guided-TTS 2 [16], Guided-TTS 2 (zero-shot), and YourTTS [3]. For baselines on voice conversion, we use DiffVC [25], YourTTS [3], and BNE-PPG-VC [26]. As for the vocoder, we use the officially released pre-trained model of universal HiFi-GAN [27]. We use the official implementations and pre-trained models for each baseline. Only a single reference speech is used for the adaptation of all the models, and generated audio is downsampled to 16 kHz for fair comparison. For all the diffusion-based models, we fix the number of sampling steps \(N\) to 50. We set the gradient scale \(\gamma\) of UnitSpeech to 1.0 for TTS and 1.5 for VC.
We select 5 sentences from the test-clean subset of LibriTTS for each of the 10 reference speakers chosen in 3.1.1 and use the resulting 50 sentences as the test set for TTS. The 50 source speeches for VC evaluation are selected as explained in 3.1.1. We use four metrics for model evaluation: the 5-scale mean opinion score (MOS) on audio quality and naturalness, the character error rate (CER) indicating pronunciation accuracy, and the 5-scale speaker similarity mean opinion score (SMOS) and speaker encoder cosine similarity (SECS) to measure how similar the generated sample is to the target speaker. When calculating CER, we use the CTC-based conformer [28] of the NEMO toolkit [29] as in Guided-TTS 2. We also use the speaker encoder of Resemblyzer [30] for SECS evaluation as in YourTTS. We generate adapted samples for each corresponding test sample and measure the CER and SECS values. We report the average values by repeating this measurement 5 times.
### Results
#### 3.2.1 Adaptive Text-to-Speech
In Table 1, we compare UnitSpeech to other adaptive TTS baselines. The MOS results indicate that our model generates high-quality speech comparable to Guided-TTS 2, a model for adaptive TTS only. UnitSpeech also shows superior performance compared to YourTTS, a model capable of both adaptive TTS and voice conversion similar to our model. Furthermore, we show that UnitSpeech is capable of generating speech with accurate pronunciation through the CER results.
From the SMOS and SECS results, we also confirm that our model is on par with Guided-TTS 2, which is likewise fine-tuned on the reference speech, and that it outperforms zero-shot adaptation baselines on target speaker adaptation. Through these results, we show that even though our model is capable of various tasks using either unit or transcript inputs in a personalized manner, it achieves TTS quality reasonably comparable to single-task-only baselines. Samples of each model can be found on our demo page.
#### 3.2.2 Any-to-Any Voice Conversion
As shown in Table 2, UnitSpeech performs reasonably on the VC task. Our model outperforms baselines regarding naturalness and speaker similarity, with a slight decline in pronunciation accuracy as a trade-off. This result demonstrates that our model is capable of both high-quality adaptive TTS and any-to-any VC. We include samples of our model and baselines on the demo page.
#### 3.2.3 Other Data and Tasks
In the previous section, we explained that by fine-tuning the model with a single reference speech of the target speaker, we were able to obtain results either comparable or superior to the baselines on both TTS and VC tasks. UnitSpeech is capable of not only TTS and VC but also any other speech synthesis task that may use unit, providing a sense of personalization to each task. On speech-to-speech translation (S2ST), one of the most general tasks that can utilize unit, we replace the speech synthesis part, which generally uses a single speaker unit-HiFi-GAN [31], with UnitSpeech, and show possibilities of personalized S2ST on CoVoST-2 [32]. Samples are on our demo page.
UnitSpeech also maintains reasonable fine-tuning quality even on real-world data for various tasks. To demonstrate real-world applicability, we use 10-second-long real-world data extracted from YouTube. Due to copyright issues, we do not explicitly upload these data, but instead post the YouTube link and the start/end time of each clip. We post various adaptation samples on our demo page.
#### 3.2.4 Analysis
We show the effects of several factors of our model in Table 3.
**The number of unit clusters** We observed that the number of clusters \(K\) does not significantly affect TTS results. In the case of voice conversion, however, which directly uses units as inputs, the increase in \(K\) allows a more precise segmentation of pronunciation, leading to better pronunciation accuracy.
**Fine-tuning** Our results demonstrate that as fine-tuning proceeds, speaker similarity gradually increases and eventually converges at around 500 iterations. We also observe that the pronunciation accuracy decreases when fine-tuning for over 2,000 iterations. Thus, we have set the default number of iterations for fine-tuning to 500, which takes less than a minute on a single NVIDIA RTX 8000 GPU.
We also measure pronunciation accuracy and speaker similarity according to the amount of reference speech used for fine-tuning. Our results show that both metrics improve as the length of reference speech increases. Furthermore, our model can still achieve sufficient pronunciation accuracy and speaker similarity even with a 5-second-long short reference speech.
**Gradient scale in classifier-free guidance** The results in Table 3 indicate that the proposed guidance method improves pronunciation at the cost of a minor decrease in speaker similarity. Therefore, we choose the gradient scale \(\gamma\) that maximizes the pronunciation improvement while minimizing the reduction in speaker similarity, which is 1 for TTS and 1.5 for VC.
## 4 Conclusion
We proposed UnitSpeech, a diffusion model that enables various adaptive speech synthesis tasks by fine-tuning on a small amount of untranscribed speech. UnitSpeech consists of a unit encoder in addition to the text encoder, eliminating the need
| Model | 5-scale MOS | CER (%) | 5-scale SMOS | SECS |
| --- | --- | --- | --- | --- |
| Ground Truth | \(4.49\pm 0.06\) | 0.7 | \(3.94\pm 0.13\) | 0.933 |
| Mel + HiFi-GAN [27] | \(4.09\pm 0.10\) | 0.75 | \(3.72\pm 0.13\) | 0.927 |
| UnitSpeech | \(4.13\pm 0.10\) | 1.75 | \(3.90\pm 0.13\) | 0.935 |
| Guided-TTS 2 [16] | \(4.16\pm 0.10\) | 0.84 | \(3.90\pm 0.13\) | 0.937 |
| Guided-TTS 2 (zs) [16] | \(4.10\pm 0.11\) | 0.8 | \(3.71\pm 0.14\) | 0.873 |
| YourTTS [3] | \(3.57\pm 0.13\) | 2.38 | \(3.34\pm 0.15\) | 0.866 |

Table 1: MOS, CER, SMOS, and SECS for TTS experiments on LibriTTS. Guided-TTS 2 (zs) indicates Guided-TTS 2 that performs zero-shot adaptation without fine-tuning.
| Factor | Setting | TTS CER (%) | TTS SECS | VC CER (%) | VC SECS |
| --- | --- | --- | --- | --- | --- |
| # Units | 50 | 1.94 | 0.932 | 12.64 | 0.928 |
| | 100 | 1.87 | 0.930 | 5.69 | 0.920 |
| | 200 | 1.75 | 0.935 | 3.55 | 0.923 |
| | 500 | 2.10 | 0.932 | 3.80 | 0.918 |
| # Iters | 0 | 1.89 | 0.849 | 3.65 | 0.845 |
| | 50 | 2.15 | 0.905 | 3.78 | 0.893 |
| | 200 | 1.96 | 0.925 | 3.92 | 0.924 |
| | 500 | 1.75 | 0.935 | 3.55 | 0.923 |
| | 2000 | 2.04 | 0.937 | 3.78 | 0.925 |
| Length (secs) | 3 | 2.16 | 0.916 | 3.82 | 0.926 |
| | 5 | 1.96 | 0.921 | 3.44 | 0.925 |
| | 30 | 1.88 | 0.949 | 3.07 | 0.946 |
| Gradient scale \(\gamma\) | 0.0 | 2.83 | 0.941 | 5.02 | 0.939 |
| | 0.5 | 2.04 | 0.939 | 4.15 | 0.936 |
| | 1.0 | 1.75 | 0.935 | 3.86 | 0.93 |
| | 1.5 | 1.74 | 0.929 | 3.55 | 0.923 |
| | 2.0 | 1.79 | 0.923 | 3.74 | 0.918 |

Table 3: CER, SECS regarding the number of unit clusters, fine-tuning iterations, length of untranscribed speech used for fine-tuning, and the gradient scale in classifier-free guidance.
We also introduce a simple guidance technique that allows UnitSpeech to perform high-quality adaptive speech synthesis with accurate pronunciation. We showed that UnitSpeech is on par with the TTS baselines and outperforms the VC baselines in terms of audio quality and speaker similarity. Our demo results further indicate that UnitSpeech adapts robustly to untranscribed real-world speech and that it can serve as a drop-in speech synthesis module for tasks that take units as input.
## 5 Acknowledgements
This work was supported by SNU-Naver Hyperscale AI Center, Samsung Electronics (IO221213-04119-01), Institute of Information & communications Technology Planning & Evaluation grant funded by the Korea government (MSIT) [2021-0-01343, AI Graduate School Program (SNU)], National Research Foundation of Korea grant funded by MSIT (2022R1A3B1077720), and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, SNU in 2023.
|
2307.10903 | VoteLab: A Modular and Adaptive Experimentation Platform for Online
Collective Decision Making | Digital democracy and new forms for direct digital participation in policy
making gain unprecedented momentum. This is particularly the case for
preferential voting methods and decision-support systems designed to promote
fairer, more inclusive and legitimate collective decision-making processes in
citizens assemblies, participatory budgeting and elections. However, a
systematic human experimentation with different voting methods is cumbersome
and costly. This paper introduces VoteLab, an open-source and
thoroughly-documented platform for modular and adaptive design of voting
experiments. It supports to visually and interactively build reusable campaigns
with a choice of different voting methods, while voters can easily respond to
subscribed voting questions on a smartphone. A proof-of-concept with four
voting methods and questions on COVID-19 in an online lab experiment have been
used to study the consistency of voting outcomes. It demonstrates the
capability of VoteLab to support rigorous experimentation of complex voting
scenarios. | Renato Kunz, Fatemeh Banaie, Abhinav Sharma, Carina I. Hausladen, Dirk Helbing, Evangelos Pournaras | 2023-07-20T14:26:21Z | http://arxiv.org/abs/2307.10903v2 | # VoteLab: A Modular and Adaptive Experimentation Platform for Online Collective Decision Making
###### Abstract
Digital democracy and new forms for direct digital participation in policy making gain unprecedented momentum. This is particularly the case for preferential voting methods and decision-support systems designed to promote fairer, more inclusive and legitimate collective decision-making processes in citizens' assemblies, participatory budgeting and elections. However, systematic human experimentation with different voting methods is cumbersome and costly. This paper introduces VoteLab, an open-source and thoroughly-documented platform for modular and adaptive design of voting experiments. It supports visually and interactively building reusable campaigns with a choice of different voting methods, while voters can easily respond to subscribed voting questions on a smartphone. A proof-of-concept with four voting methods and questions on COVID-19 in an online lab experiment has been used to study the consistency of voting outcomes. It demonstrates the capability of VoteLab to support rigorous experimentation of complex voting scenarios.
voting, experimentation, collective decision making, digital democracy, participation
## I Introduction
Digital democracy initiatives with direct citizens' participation in decision and policy-making gain significant momentum across the world, for instance, citizens' assemblies and participatory budgeting [1, 2]. The limitations of the current electoral systems as well as inaccurate or polarized voting outcomes of majority voting [3] create the need to test and experiment with alternative preferential voting methods [4, 5, 6, 7]. This requires digital tools that are easy and trustworthy for voters to use in their everyday life, while the design of a campaign by researchers and policy-makers is simple, modular, and adaptive to different evaluation scenarios, offering flexibility to test a broad and extensible spectrum of voting methods. Although there are significant ongoing efforts in this direction [8, 9, 10, 11, 12], the existing voting and participation platforms do not yet implement all of these features.
To change this, our paper introduces VoteLab, an open-source platform for modular and adaptive experimentation with different voting methods on smartphones. VoteLab allows users to visually and interactively design a voting campaign without writing a single line of code. Designers can even preview the users' voting experience in different smartphones before deployment. They can easily match voting questions to different voter groups, using assignments of tags/topics via a publish-subscribe system. VoteLab can collect useful meta-information to understand and interpret voting behavior such as voting time duration, time of choice, change of choices and feedback on voting outcomes. This allows one to conduct studies with between- and within-subjects designs, including factorial designs with different voting questions, different voting methods and different (treatment) groups.
As a proof-of-concept for VoteLab, an online experiment is conducted to study four voting methods [13] on four voting questions related to the COVID-19 pandemic, i.e. in a polarized voting context. It is known from axiomatic results in social choice theory that voting outcomes may differ considerably, depending on the input method used [14]. Accordingly, experimental insights are needed to better understand which factors matter for voting outcomes, and which voting procedures are assessed by voters to be more favorable, trustworthy, and fair. Based on the collected data and experience with the experimental conduct, we conclude that VoteLab supports rigorous experimentation with complex collective decision making scenarios very well.
The main contributions of this paper are to provide and test (i) a modular and adaptive modeling architecture for flexible experimentation with different voting methods; (ii) an open-source platform of VoteLab that implements the modeling architecture with a Web dashboard and an Android app; (iii) a proof-of-concept case study on COVID-19 to assess the practicality of VoteLab to support rigorous experimentation of complex voting scenarios; (iv) a software artifact demonstrator [15] of the lab experiment implemented in VoteLab, which is running on a virtual machine for reproducibility, assessment and engagement of the broader research community; (v) a comprehensive documentation of VoteLab for end-users and developers as well as a guide for the software artifact demonstrator [15].
The rest of the paper is outlined as follows: Section II compares VoteLab with related work. Section III introduces the VoteLab architecture. Section IV outlines the implemented components of VoteLab. Section V illustrates the proof-of-concept of the software artifact. Section VI concludes this paper and outlines future work.
## II Comparison with Related Work
Several recent efforts focus on the implementation of participatory decision-making processes using digital voting
platforms. For instance, Consul [10] is such an open-source platform, developed by the Madrid city council for engaging the public in decision-making processes such as making proposals or allocating public budget. Similarly, Decidim [9] is an open-source digital platform for citizen participation. These platforms make it possible for everyone to democratically organize campaigns for proposals, public meetings, decision-making discussions and also vote on the selected proposals. Stanford Participatory Budgeting (SPB) [8] is mainly used for budgeting problems rather than collaborative legislation and proposal submissions.
While these Web apps provide cross-platform compatibility and are easily accessible through Web browsers, their performance is not comparable to that of native apps [19]. Moreover, these platforms only support a limited number of voting methods and they lack significant built-in functionality for collecting meta-data for the explanation and interpretation of voting behavior.
Mobile Voting System (MVS) [16] is an open-source Android application developed for voting. The registration and casting of votes is based on SMS messages. M-Vote [17] is another mobile voting system that focuses on the robustness against attacks, utilizing fingerprint identification for enhanced security and authentication. DApps [18] is also a digital voting system focusing on integrity, where the identification is done based on voters' mobile phone numbers. Smart Agora [11, 12] is a crowd-sensing ubiquitous platform for outdoor living-lab experiments. It is designed for geo-located decision-making at points of interest, while providing capabilities for passive mobile sensor data collection. Complex crowd-sensing tasks are designed visually and interactively without the need to write code. Smart Agora has also been studied in the context of verifying conditions for more informed decision-making on the blockchain. It is applied to Smart City domains such as cycling risk assessment [12].
There are several limitations in current digital decision-making approaches. The design of these platforms is complex with limited modularity (_i.e., modular architecture_). It often requires technical or programming skills to obtain high-quality comprehensive data (_i.e., simplicity, metadata collection_), using different implementation options (_i.e., native app_). So far, there is a lack of flexible platforms for voting experimentation, as existing tools are typically limited to specific experimental and voting scenarios (_i.e., numbers of voting methods, verification method_). Meta-data about the voting choices such as recording the choice duration and evaluations of the voting results are often necessary to understand in-depth voting behavior (_i.e. user feedback_). Moreover, platforms provide a varying flexibility in customizing and reusing voting questions and settings (_i.e. adaptation_). Open-source digital voting platforms tend to be complex and inadequately documented (_i.e., thoroughly documented_). Table I provides a summary of our comparison of some prominent platforms.
## III System Architecture
The system architecture of VoteLab is designed to be adaptive and to facilitate the seamless integration of new voting mechanisms. This section summarizes the key system functionality of the platform.
### _Comparison of Voting Methods_
VoteLab provides an extensible testbed environment, enabling rigorous experiments to compare and evaluate different voting mechanisms and their influence on collective choices. For example, organizations and communities can utilize this platform to assess the performance of various voting methods, such as majority voting, approval voting, score voting or quadratic voting (currently 7 supported). By combining the collection of voting data and meta-data about the choices, VoteLab supports researchers and policy makers in studying evidence-based decision-making and designing fairer, more expressive and inclusive voting systems.
### _Simplified Voting Experience_
In VoteLab, users can effortlessly create digital voting and data collection processes, using an intuitive visual interface. No coding is required, as the platform empowers users to visually design and implement complex workflows running on smartphones. For example, a community organization can use the platform to design a visually appealing ballot with clear instructions and options for voters. This user-friendly approach simplifies the process and enables a broad range of users to leverage digital voting and complete data collection without the need for programming expertise.
### _Tag Assignment System_
Voters can automatically access the voting questions and campaigns via a tag assignment system, representing categories of interest. This system is implemented as a publish-subscribe mechanism, which effectively determines the permissions and privileges of individuals, clearly specifying who can perform which actions.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & SPB & Consul & Decidim & MVS & M-Vote & DApps & Smart Agora & VoteLab \\ Criteria & [8] & [10] & [9] & [16] & [17] & [18] & [11, 12] & \\ \hline Modular architecture & \(\times\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ Adaptation & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ Simplicity & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\ Metadata collection & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ User feedback & \(\times\) & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ Thoroughly documented & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\checkmark\) & \(\checkmark\) \\ Number of voting methods & 5 & 2 & 1 & 1 & 1 & 1 & surveys & 7 \\ Native app & \(\times\) & \(\times\) & \(\times\) & Android & mobile device & mobile device & Android & Android \\ Verification method & _Code \& SMS_ & _Census info \& SMS_ & _Code \& SMS_ & _Fingerprint_ & _Phone number_ & _Code_ & _Email_ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of some popular digital participatory platforms for collective decision making.
For instance, it allows precise control over which city district can access specific voting questions. In this way, voting designers can create tailored campaigns for specific groups and communities.
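As a rough, hypothetical illustration of this publish-subscribe matching (not VoteLab's actual Java/.NET code or database schema), the tag logic could be sketched as follows:

```python
from collections import defaultdict

class TagRegistry:
    """Minimal publish-subscribe matching of voters to voting questions by tag.
    All names are placeholders for illustration only."""

    def __init__(self):
        self.subscriptions = defaultdict(set)   # tag -> set of voter ids
        self.questions = defaultdict(set)       # tag -> set of question ids

    def subscribe(self, voter_id, tag):
        self.subscriptions[tag].add(voter_id)

    def publish_question(self, question_id, tags):
        for tag in tags:
            self.questions[tag].add(question_id)

    def questions_for(self, voter_id):
        """Questions visible to a voter: the union over all tags they subscribe to."""
        return {q for tag, voters in self.subscriptions.items()
                if voter_id in voters
                for q in self.questions.get(tag, set())}

registry = TagRegistry()
registry.subscribe("voter42", "district-3")
registry.publish_question("budget-2023", tags=["district-3", "budgeting"])
print(registry.questions_for("voter42"))   # {'budget-2023'}
```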
### _Multiple Voting Campaigns for Field Tests_
The platform empowers researchers to create multiple reusable voting campaigns that involve repeated measurements and group comparisons (between- and within-subjects experimental designs). Researchers and practitioners can easily design, set up, and run voting processes managed via a user-friendly graphical user interface.
### _Behavioral Analysis and Decision-Making Insights_
VoteLab supports a comprehensive analysis of various criteria, including statistics and collective patterns of anonymous voter behaviors, recording initial choices made, their timing, changed decisions as well as the duration of decision-making processes. By collecting such metadata throughout the voting process, new insights can be gained to better understand, effectively design, and improve voting procedures.
### _Customizable and Seamless Feedback System_
The platform incorporates a built-in feedback system that allows for gathering opinions, ratings, or responses from users regarding the voting outcomes and experiences. This feedback can be used to assess the legitimacy and satisfaction of the voters with the voting outcomes and the processes involved.
### _Ubiquitous Online Voting_
Via VoteLab, voters can engage in the voting process using the personal devices they are already familiar with, without the need for specialized or dedicated voting hardware. This functionality expands the reach of voting campaigns and enhances participation by eliminating barriers associated with physical polling locations or specialized voting devices. It promotes flexibility and convenience, enabling voters to cast their votes online anytime and from anywhere with Internet access.
In summary, VoteLab provides a testbed for comparing voting mechanisms and gathering augmented voting data for research and policy-making purposes. The user-friendly interface promotes engagement and empowers users to design complex processes without coding skills. This promotes an inclusive voting experience, facilitating complex field tests, crowd sensing, and participatory decision-making.
## IV Key System Components
The _modular architecture_ of VoteLab prioritizes the separation of system components and leverages API calls for the communication interface, enabling seamless integration, replacement, or addition of code components. This design approach ensures flexibility, adaptability, and scalability, allowing for the inclusion of new code segments, such as an iOS application, or the integration of additional voting methods without disrupting the existing framework architecture.
VoteLab is implemented in Java using the Android Studio platform, with Microsoft .NET used for the Web application. Figure 1 depicts the architecture of the platform, which consists of three interactive parts: (i) an Android application enabling voters to actively participate in elections, (ii) a database server responsible for the storage and management of the collected data, and (iii) a dedicated Web dashboard supporting the voting design with an intuitive and user-friendly interface. In the rest of this section, we provide a detailed description of each component, emphasizing its distinct functionalities and features.
#### Iv-F1 Database Server and its Components
The central component of the system is the _database server_, serving as the hub for all communication and interactions. It plays a key role in handling changes, updates, and aggregations related to votes within the platform, i.e., the process of summing up individual votes to determine the overall voting outcome. The database server comprises three essential components: (i) a PostgreSQL database, (ii) API handlers for the phone application and Web server, and (iii) a vote processor responsible for processing the votes.
API handlers manage and translate external requests into PostgreSQL queries, ensuring effective communication with the database. Contributions to votes are processed by the vote processor, which generates the corresponding voting results. The processor regularly monitors the server for the closing date of voting, automatically calculates results, and stores them in the database upon completion of the voting period.
Furthermore, if there is a need for calculating voting results other than the specified closing date, the API handlers can request the calculation of results by establishing an open connection to the processor. This flexibility allows for on-demand result generation beyond the predefined closing date.
#### Iv-F2 Voting Management Dashboard
The _Web server component_ encompasses the data and API calls within an intuitive Web interface, which can be used to design and deploy voting campaigns, as well as assign tags to voting questions, see Figure 2(a). Tag assignment involves associating specific tags or categories with voting experiments, allowing for easy categorization and organization of voting campaigns based on different criteria or themes.
Fig. 1: Architecture of VoteLab.
This enables efficient filtering and analysis of voting results using the assigned tags.
#### Iv-A3 Voter Interface
The VoteLab _Android application_ provides an intuitive interface for voters to actively participate in voting and experiments, see Figure 2, panels (b) and (c). Using the smartphone app, voters can access information about ongoing voting campaigns and experiments, view relevant details, and submit their responses. The platform uses the tag assignment system to match voters with voting questions. For example, if a voting campaign is related to a specific demographic or geographical region, voters assigned to the corresponding tags receive notifications and updates relevant to their specific group. This ensures that voters can receive tailored information and get opportunities to engage in voting processes that are more relevant to them. It also allows designers to easily create special group treatments to study voting behavior, presumably within the scope of proper ethics approvals.
### _VoteLab Workflow_
Figure 3 illustrates the lifecycle of a voting experiment, starting from the design of the ballot to the deployment of the campaign and the calculation of the voting results. (1) The process begins with the ballot designers creating an account on the Web dashboard and (2) subsequently logging in, to create and run voting campaigns. (3) Through an intuitive interface, the designers can effortlessly create and customize voting experiments. (4) Voters, on the other hand, can easily access the created voting campaign, express their preferences, and (5) view the results, once the voting period concludes. Furthermore, voters can provide valuable feedback on the voting results. The dashboard allows users to assign tags and reuse voting processes, enabling the deployment of multiple voting campaigns at different points in time with different or the same voting participants. This feature simplifies the experimentation with more complex voting scenarios (e.g. when there are several waves of a panel study).
Voters have the option to (i) download the Android app and (ii) log in, using valid credentials. (iii) Once logged in, they can actively participate in voting, namely by selecting the relevant voting tags they wish to contribute. (iv) By subscribing to specific tags or categories of interest, voters are granted access to the corresponding voting questions. The tagging system allows voters to easily engage with projects that align with their interests. The privacy of votes and anonymity of voters is preserved in this process.
## V COVID-19 Online Experiments
To assess the modularity and adaptation capabilities of VoteLab, a proof-of-concept lab study is illustrated, based on an online experiment with human subjects. The experimental design involves a level of complexity that is hard to manage with existing platforms: (i) four voting methods, (ii) four voting campaigns, (iii) two experimental conditions, (iv) a within-subjects design with repeated measurements, and (v) the collection of meta-data. The scope of the study performed in 2021 is related to COVID-19 and, in particular, an attempt to better understand how different preference elicitation methods influence the voting outcomes in a polarized voting atmosphere. The in-depth analysis of the collected data is not the subject of this paper and is performed elsewhere [13]. Nevertheless, we outline here some key findings of our proof-of-concept study using VoteLab.
The online experiment was preregistered [20] and received approval from the ETH ethics commission. It involved 120 participants, who voted on different questions via different voting methods offered by VoteLab.
Fig. 3: Life cycle of the voting platform.
Fig. 2: VoteLab user interface: Panel (a) shows the voting management dashboard. Panels (b) and (c) show two example pools to be answered by the app user via two different voting methods: combined approval voting and score voting.
The questions to be voted upon were the following: **(1)**_What are you most concerned about the COVID-19 vaccines? [vaccine]_ (\(o_{1}\)) How to be vaccinated as soon as possible. (\(o_{2}\)) Their long-term side-effects. (\(o_{3}\)) Their overall effectiveness. (\(o_{4}\)) Their misuse by governments & companies. (\(o_{5}\)) Discrimination, e.g. travels, access to facilities & services. **(2)**_Among COVID-19 patients, which criterion should grant one access to an intensive care unit? [icul]_ (\(o_{1}\)) Being the youngest. (\(o_{2}\)) Being the oldest. (\(o_{3}\)) No denial of vaccination. (\(o_{4}\)) No violation of lockdown rules. (\(o_{5}\)) No health self-damage, e.g. smoking, drugs, alcohol. **(3)**_Which is the most effective protection measure against a COVID-19 infection?_ [_protection_] (\(o_{1}\)) Wearing a mask. (\(o_{2}\)) Physical distancing. (\(o_{3}\)) Vaccination. (\(o_{4}\)) Regular hand washing. (\(o_{5}\)) Maintaining a healthy lifestyle. **(4)**_Which is the most significant problem that the lockdown has caused? [lockdown]_ (\(o_{1}\)) Economic recession & unemployment. (\(o_{2}\)) Government control & suppression of freedom. (\(o_{3}\)) Social segregation & increased inequality. (\(o_{4}\)) Mental distress. (\(o_{5}\)) Reduced physical health condition.
Each participant answered each question with four different voting/input methods, which varied in terms of the degree of freedom and detail to express personal preferences: (i) _majority voting_ (\(mv\)=\(\{0,1\}\)), (ii) _combined approval voting_ (\(cav\)= \(\{0,0.5,1\}\)), (iii) _score voting_ (\(sv\)= \(\{0,0.2,0.4,0.6,0.8,1\}\)), and (iv) _modified Borda count_ (\(mbc\)=\(\{0,0.2,0.4,0.6,0.8,1\}\), if all options are selected, otherwise adjusted accordingly). Each input method scores the options in a different way. The scores refer to the numerical values assigned to a choice and represent the degree of preference.
Majority voting is the least flexible voting method. Via combined approval voting, participants can express their disapproval or support of a particular option. Score voting allows even more fine-grained expression of preferences, as the participants can assign a score to each option. The modified Borda count additionally encourages participants to make compromises: the more options a participant selects, the higher the score assigned to each option.
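For concreteness, a minimal sketch of aggregating ballots cast under these input methods is given below; the ballot encoding and the adjustment rule assumed for the modified Borda count are illustrative readings of the description above, not the exact implementation used in the experiment.

```python
def aggregate(ballots):
    """Sum the per-option scores over all ballots. Each ballot is a dict
    {option: score}, with scores drawn from the allowed set of the input method
    (e.g. {0,1} for mv, {0,0.5,1} for cav, {0,0.2,...,1} for sv)."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0.0) + score
    return totals

def modified_borda_ballot(ranking, all_options):
    """One plausible reading of the modified Borda count (an assumption): if a
    voter ranks m of the n options, the i-th ranked option receives (m-i+1)/n
    points and unranked options receive 0, so ranking all five options yields
    the scores {1.0, 0.8, 0.6, 0.4, 0.2} quoted above."""
    n, m = len(all_options), len(ranking)
    scores = {o: 0.0 for o in all_options}
    for i, option in enumerate(ranking, start=1):
        scores[option] = (m - i + 1) / n
    return scores

options = ["o1", "o2", "o3", "o4", "o5"]
ballots = [
    {"o1": 0.5, "o2": 0.0, "o3": 1.0, "o4": 0.0, "o5": 0.0},  # a cav-style ballot
    modified_borda_ballot(["o3", "o1"], options),             # an mbc ballot ranking two options
]
print(aggregate(ballots))
```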
Table II illustrates the aggregate scores of each option for each question and voting method. Figure 4 illustrates the consistency of voting outcomes for each of the 1st, 2nd,..., 5th ranked option, derived from Table II. For instance, a consistency of 1.0 for the 1st ranked option means that all voting methods determine the same option as being ranked 1st. A consistency of 0.5 for the 2nd ranked option means 2 of 4 voting/input methods determine the same option as 2nd ranked. The results reveal the following: (i) Voting methods seem to show higher consistency with regard to disagreements rather than agreements. (ii) For the 1st ranked option, the highest consistency is observed for the _protection_ question. (iii) The _vaccine_ question has the lowest mean consistency among all five ranked options. The highest mean consistency is found for the _protection_ question. Such consistency may be considered to be a measure of robustness of the voting outcome with regard to the variation of the voting/input method. |
2310.04361 | Exploiting Activation Sparsity with Dense to Dynamic-k
Mixture-of-Experts Conversion | Transformer models can face practical limitations due to their high
computational requirements. At the same time, such models exhibit significant
activation sparsity, which can be leveraged to reduce the inference cost by
converting parts of the network into equivalent Mixture-of-Experts (MoE)
layers. Despite the crucial role played by activation sparsity, its impact on
this process remains unexplored. In particular, we show that the efficiency of
the conversion can be significantly enhanced by a proper regularization of the
activation sparsity of the base model. Moreover, motivated by the high variance
of the number of activated neurons for different inputs, we introduce a more
effective dynamic-k expert selection rule that adjusts the number of executed
experts on a per-token basis. Finally, we extend this approach to multi-head
attention projections, which results in additional savings compared to only
converting the FFN blocks. The proposed method, Dense to Dynamic-$k$
Mixture-of-Experts (D2DMoE), outperforms existing approaches on common NLP and
vision tasks, allowing us to save up to 60% of inference cost without
significantly affecting model performance. | Filip Szatkowski, Bartosz WΓ³jcik, MikoΕaj PiΓ³rczyΕski, Simone Scardapane | 2023-10-06T16:34:51Z | http://arxiv.org/abs/2310.04361v3 | # Exploiting Transformer Activation Sparsity with Dynamic Inference
###### Abstract
Transformer models, despite their impressive performance, often face practical limitations due to their high computational requirements. At the same time, previous studies have revealed significant activation sparsity in these models, indicating the presence of redundant computations. In this paper, we propose Dynamic Sparsified Transformer Inference (DSTI), a method that radically reduces the inference cost of Transformer models by enforcing activation sparsity and subsequently transforming a dense model into its sparse Mixture of Experts (MoE) version. We demonstrate that it is possible to train small gating networks that successfully predict the relative contribution of each expert during inference. Furthermore, we introduce a mechanism that dynamically determines the number of executed experts individually for each token. DSTI can be applied to any Transformer-based architecture and has negligible impact on the accuracy. For the BERT-base classification model, we reduce inference cost by almost 60%.
## 1 Introduction
In recent years, Transformer [1] became a go-to model architecture in many fields of deep learning such as natural language processing [2; 3] or computer vision [4; 5]. Those models often have a large number of parameters [3; 6], which gives them sufficient expressivity and enables them to effectively accumulate knowledge. However, despite their impressive performance [3; 7], they require costly high-end computational resources, and their applications are limited due to high latency and energy consumption. Simultaneously, sparse Mixture-of-Experts (MoE) models have gained significant attention as a compelling approach for enhancing model expressiveness and capacity [8]. Unlike their dense counterparts, these models are able to handle a much larger number of parameters with only a slight increase in processing time, and many of the latest state-of-the-art Transformer models use MoE layers [9; 6; 10]. Unfortunately, training MoE models from scratch may be unstable and is prone to expert imbalance or representation collapse [8; 11], which limits their applicability.
In this paper, we follow the recently introduced approach of turning dense models into sparse MoE models [12] and propose Dynamic Sparsified Transformer Inference (DSTI), a simple and practical way to significantly reduce the computational cost of the inference in Transformer models. Inspired by the recent works that show the benefits of the natural sparsity emerging in the Transformer models [13], we propose to train the dense models with an additional loss component that enforces activation sparsity. Then, we construct the MoE layers by splitting the dense matrices of FFN layers into experts and subsequently training small gating networks. Moreover, we propose a novel learning objective for training the routers that enables them to accurately predict the relative contribution of
each expert. Finally, we introduce Dynamic-\(k\) routing, which allows the model to adapt the amount of compute to the difficulty of the input, which increases its efficiency even further. DSTI achieves performance close to the original dense model while using only a fraction of its computational resources.
## 2 Related Works
**Mixture-of-Experts models** Sparse MoE was first proposed for RNNs by Shazeer et al. [8] and since then has been successfully applied in the NLP domain [14; 6]. Recently, those models have also been gaining popularity in computer vision [9; 15]. Sparse Mixture-of-Experts (MoE) transformers replace the FFN layers with multiple experts, which themselves are smaller feedforward networks, and a router that selects which experts to use for the current input. This change significantly increases model capacity, while inducing only a small computational overhead through the use of the router. Additionally, recent research suggests that MoE models have favorable properties in the context of the scaling laws [16].
**Sparsification of Dense Transformers** Several works notice the difficulties of end-to-end training of MoE models and propose alternative methods to obtain Sparse MoE more efficiently. Methods such as EvoMoE [17] or Sparse Upcycling [18] propose to progressively make the model sparser over the course of the training. Other works observe that activation patterns in Transformers are highly sparse [12; 13], and propose to take advantage of this phenomenon without training. Notably, Zhang et al. [12] introduced _MoEfication_, a method that enhances the computational efficiency of Transformer models. Our method builds on MoEfication and follows the expert construction scheme proposed therein.
## 3 Method
DSTI is a three-step method for obtaining an efficient Transformer MoE model. The first stage of our method consists of fine-tuning a pre-trained model with an auxiliary loss that enforces activation sparsity. The FFN modules in every layer are then divided into experts, and at the last step we train the routing networks that enable dynamic selection of experts. In this section, we describe all the components of DSTI in detail.
**Enforcing activation sparsity** The scheme of reducing inference cost by dividing the model into independently activated modules relies on the well-known phenomenon of activation sparsity exhibited by most deep neural networks [19], especially Transformer architecture-based models [13]. Taking inspiration from this observation, we anticipate that enforcing activation sparsity with an auxiliary loss may allow for execution of an even smaller number of experts, resulting in overall computational savings. As such, we propose to apply the \(\ell_{1}\) norm penalty on the feature representations in the middle layer of each FFN module:
\[L_{s}(x)=\frac{1}{L}\sum_{l=1}^{L}||a^{l}||_{1}, \tag{1}\]
where \(a^{l}\) is the activation tensor from the middle layer of the \(l\)-th FFN for input \(x\), and \(L\) is the number of Transformer blocks. Overall, the model is trained with the following cost function:
\[L(x)=L_{\text{CE}}(\hat{y},y)+\alpha_{s}L_{s}(x) \tag{2}\]
where \(L_{CE}\) is the standard cross-entropy loss, and \(\alpha_{s}\) is the hyperparameter for scaling the sparsity enforcement loss. While this loss could be applied during pretraining, in practice we add it during finetuning of the model so that application to pretrained models is still possible.
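For illustration, the objective of Eqs. (1)-(2) can be sketched as follows; the value of \(\alpha_{s}\), the hook target, and the averaging over tokens are placeholders or implementation choices made here, not prescriptions from the paper.

```python
import torch
import torch.nn.functional as F

def sparsity_loss(ffn_activations):
    """L_s of Eq. (1): average over layers of the l1 norm of the FFN mid-layer
    activations. The norm is additionally averaged over tokens here so that the
    penalty does not scale with sequence length (an illustrative choice)."""
    per_layer = [a.abs().sum(dim=-1).mean() for a in ffn_activations]
    return torch.stack(per_layer).mean()

def total_loss(logits, labels, ffn_activations, alpha_s=1e-4):
    """Eq. (2): cross-entropy plus the scaled sparsity penalty (alpha_s is a placeholder)."""
    return F.cross_entropy(logits, labels) + alpha_s * sparsity_loss(ffn_activations)

# The mid-layer activations can be collected with forward hooks during fine-tuning,
# e.g. (module path assumed for a HuggingFace-style BERT; adjust to the model at hand):
# handles = [layer.intermediate.register_forward_hook(
#                lambda m, inp, out: ffn_activations.append(out))
#            for layer in model.bert.encoder.layer]
```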
**Expert construction** We construct the experts using the parameter clustering method proposed by Zhang et al. [12], which we briefly describe here for the convenience of the reader. Weights of each neuron from the first matrix \(W_{1}\) are treated as its features and are fed into a balanced \(k\)-means algorithm [20]. The resulting cluster indices are used to split the first linear layer \(W_{1}\), the first bias vector \(b_{1}\), and the second linear layer \(W_{2}\) into \(E\) experts. The second bias \(b_{2}\) is not affected by this procedure. The process is repeated for each FFN block.
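The splitting step can be illustrated with the sketch below, which uses plain \(k\)-means from scikit-learn as a stand-in for the balanced \(k\)-means of the paper (so expert sizes may be uneven here); the shapes assume \(W_{1}\) of size \(d_{\text{model}}\times d_{\text{ff}}\) and \(W_{2}\) of size \(d_{\text{ff}}\times d_{\text{model}}\).

```python
import numpy as np
from sklearn.cluster import KMeans

def split_ffn_into_experts(W1, b1, W2, num_experts):
    """Cluster the hidden neurons of a dense FFN (h = relu(x @ W1 + b1); y = h @ W2 + b2)
    and slice the weights accordingly; b2 is shared and left untouched."""
    neuron_features = W1.T                           # one row of input weights per hidden neuron
    labels = KMeans(n_clusters=num_experts, n_init=10).fit_predict(neuron_features)
    return [{"W1": W1[:, labels == e], "b1": b1[labels == e], "W2": W2[labels == e, :]}
            for e in range(num_experts)]

# toy check: executing all experts reproduces the dense FFN exactly
rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(8, 32)), rng.normal(size=32), rng.normal(size=(32, 8))
x = rng.normal(size=8)
experts = split_ffn_into_experts(W1, b1, W2, num_experts=4)
dense = np.maximum(x @ W1 + b1, 0) @ W2
moe = sum(np.maximum(x @ ex["W1"] + ex["b1"], 0) @ ex["W2"] for ex in experts)
print(np.allclose(dense, moe))   # True
```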
**Regression routing objective** In a standard MoE-based model, the gating networks are trained in an end-to-end manner. Contrary to this, we train each gating network independently, similarly to Zhang et al. [12]. However, instead of framing the problem as a classification task, our gating network directly predicts the sum of activations in the hidden layer of each \(i\)-th expert \(s_{i}=\sum_{j}a_{ij}\). We train the gating network using the standard mean squared error. Assuming a ReLU activation function is used, \(s_{i}\) is always non-negative, and to ensure a non-negative output of the gating network, we take the absolute value of the gating network output. The regression-based formulation is still compatible with commonly used top-\(k\) expert selection, but enables more precise attribution of the contribution of each expert, as we show later in the experiments section.
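An illustrative sketch of such a router and its regression target is given below; apart from the hidden size of 128 used in the experiments, the details are assumptions.

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    """Small MLP gate that regresses, for every expert, the sum of its
    hidden-layer ReLU activations; the absolute value keeps the prediction
    non-negative, as described above."""
    def __init__(self, d_model, num_experts, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_experts))

    def forward(self, x):
        return self.net(x).abs()

def router_targets(hidden_acts, labels, num_experts):
    """Ground-truth s_i: sum of the hidden activations belonging to expert i,
    using the cluster labels from the expert-construction step."""
    labels = torch.as_tensor(labels)
    cols = [hidden_acts[..., labels == e].sum(dim=-1) for e in range(num_experts)]
    return torch.stack(cols, dim=-1)

# training step (inputs x, hidden activations h, cluster labels, E experts):
# loss = nn.functional.mse_loss(router(x), router_targets(h, labels, E))
```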
**Dynamic-\(k\) gating** Commonly used MoE layers always execute the top \(k\) experts for each token, where \(k\) is a predefined hyperparameter. This means that, regardless of the difficulty of the input, the model spends the same amount of compute on each batch [21] or token [8]. However, cognitive studies show that humans treat data samples differently depending on their complexity and spend significantly less time on the easy samples [22]. Similarly, various conditional computation methods adjust their computational load to the difficulty of the input sample [23; 24]. Inspired by this, we modify the expert selection mechanism to allow for a varying number of experts. Since our gating network \(g\) approximates the actual contribution of every expert \(s\approx\hat{s}=g(x)\), we use those predictions to determine \(k\). For each token, we set:
\[k=\min\{n\in\{1,...,E\}|\sum_{i=1}^{n}\text{sort}(h)_{i}>\tau\},\ \ h_{i}=\frac{\hat{s}_{i}}{\sum_{j=1}^{E}\hat{s}_{j}} \tag{3}\]
where \(\tau\in(0.0,1.0)\) is a threshold that determines the preferred performance vs. computational cost trade-off. Note that after model deployment, \(\tau\) can be adjusted anytime without the need for retraining.
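Eq. (3) can be implemented per token as in the following sketch (a direct reading of the rule, not necessarily the reference implementation):

```python
import torch

def dynamic_k_select(gate_scores, tau):
    """gate_scores: (num_tokens, num_experts) non-negative router outputs.
    Returns a boolean mask of experts to execute per token and the per-token k."""
    h = gate_scores / gate_scores.sum(dim=-1, keepdim=True)
    sorted_h, order = torch.sort(h, dim=-1, descending=True)
    cum = torch.cumsum(sorted_h, dim=-1)
    k = (cum <= tau).sum(dim=-1) + 1          # first prefix whose normalized mass exceeds tau
    k = k.clamp(max=h.shape[-1])
    keep_sorted = torch.arange(h.shape[-1]).unsqueeze(0) < k.unsqueeze(-1)
    mask = torch.zeros_like(h, dtype=torch.bool).scatter(-1, order, keep_sorted)
    return mask, k

scores = torch.tensor([[0.70, 0.20, 0.08, 0.02],    # "easy" token: one expert suffices
                       [0.30, 0.28, 0.25, 0.17]])   # "hard" token: needs more experts
mask, k = dynamic_k_select(scores, tau=0.6)
print(k.tolist())   # [1, 3]
```

Because \(\tau\) only enters at selection time, it can indeed be changed after deployment without retraining, as noted above.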
## 4 Experiments
We evaluate the proposed method on the _emotion_ classification dataset [25] using the BERT-base model [2] with the ReLU activation function, and compare it with _MoEfication_ [12]. All of the models finetuned in our experiments start from the same pretrained checkpoint. We use the parameter clustering and MLP router variant of MoEfication. We set the number of experts to 128 and use 2-layer MLP routers with a hidden size of 128. See the supplementary material for the full list of training hyperparameters.
To demonstrate the contribution of each piece of DSTI, we train additional variants of our method with different components ablated out and show the results of our study averaged over three random seeds in Figure 1.
Figure 1: Accuracy vs. averaged computational cost for BERT-base, MoEfication, DSTI, and the ablated variants. DSTI demonstrates superior performance on every considered computational budget, and each of its components improves the performance upon the MoEfication baseline.
The models are evaluated in terms of task performance at different compute budgets, adjusting \(k\) for methods with static Top-\(k\) expert selection, or \(\tau\) in case of Dynamic-\(k\). For the comparison, we also provide the score of the standard dense BERT-base model. It can be seen that the proposed DSTI offers a significantly better trade-off between computational cost and accuracy than MoEfication, and each of its components plays a substantial role in the final performance. We emphasize that due to the widespread availability of efficient MoE layer implementations, the presented results translate to real speedups on both CPUs and GPUs.
### Expert activation patterns
To explore the scale of variability of the computational effort introduced by Dynamic-\(k\) routing, we investigate the distribution of executed expert counts in different layers of the model. Figure 2 shows the selection frequency of a given fraction of experts for various \(\tau\) thresholds for DSTI trained with and without sparsity enforcement. As expected, models with higher activation sparsity require a smaller number of experts to meet the defined threshold. It is important to highlight the range of executed expert counts, which for an exemplary \(\tau=0.75\) with sparsity enforcement can vary between \(5\%\) and \(40\%\) in most of the layers. This suggests that computational adaptability mechanisms are crucial for efficient inference in Transformer-based models.
## 5 Conclusions
In this paper, we have proposed DSTI, a method that obtains computationally effective Transformer MoE models through enforcing activation sparsity, training routers with a novel regression objective, and using Dynamic-\(k\) gating. Our approach demonstrates that activation sparsity is a key factor for achieving efficient inference. With Dynamic-\(k\) gating we show that the intuition that different inputs have varying levels of difficulty also holds for deep learning models, and that it is wasteful to assume a fixed amount of computation for each input. DSTI outperforms a simpler sparsification method, MoEfication, on various compute budgets and reduces the cost of inference by almost 60% with negligible impact on model accuracy.
### Limitations and Future work
Following the previous works, we conduct our experiments using a ReLU-based model. While Zhang et al. [12] showed that a GELU-based model could be converted to a ReLU-based one, we would like to adapt DSTI to work with any Transformer-based model without conversion. Losses that enforce activation sparsity could be a promising direction to achieve this goal. Moreover, we would like to extend the analysis of our method to different tasks and modalities beyond text classification to show its generality. Finally, as we believe our method is orthogonal to other inference speed-up methods, such as quantization or early-exits, we would like to explore the interplay between DSTI and those methods.
## Acknowledgments
We gratefully acknowledge Poland's high-performance Infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH, PCCS, CI TASK, WCSS) for providing computer facilities and support within computational grant no. PLG/2023/01632.
Figure 2: Distribution of the number of executed experts in each layer. The high variability of that number explains the computational gains from using Dynamic-\(k\).
Filip Szatkowski is supported by the National Centre of Science (NCP, Poland) Grant No. 2022/45/B/ST6/02817. The work of Klaudia Balazy was supported by the National Centre of Science (Poland) Grant No. 2020/39/D/ST6/01332. Klaudia Balazy is affiliated with Doctoral School of Exact and Natural Sciences at the Jagiellonian University.
|
2303.04259 | Robust quantum many-body scars in the one-dimensional spin-1 Kitaev
model | Experimental observation of coherent oscillations in a Rydberg atom chain
[Bernien et al., Nature 551, 579 (2017)] has led to the discovery of quantum
many-body scars (QMBS) which is a new paradigm for ergodicity-breaking. The
experimental findings in the Rydberg chain can be well captured by a
kinetically constrained model called the "PXP" model, which has been shown to
host the Eigenstate Thermalization Hypothesis (ETH)-violating scar states in
the middle of the spectrum. Much effort has been put into identifying similar
kinetically restricted systems that show a violation of ETH. In this work, we
study the QMBS that can arise in one such model, namely the spin-$1$ Kitaev
chain, where owing to some conserved quantities, the Hilbert space gets
fragmented into unequal disconnected subspaces. Recently, You et al. [Phys.
Rev. Research 4, 013103 (2022)] showed that the ground state sector of this
chain can be mapped exactly onto the prototypical PXP model and thus hosts
QMBSs. Here, we demonstrate that the phenomenon of scarring is also present in
other sectors, and in particular, we identify a sector that exhibits
substantially more scarring than the ground state one. We propose an initial
state and numerically demonstrate that its fidelity revivals are robust and
longer-lived than those in the PXP model. | Sashikanta Mohapatra, Ajit C. Balram | 2023-03-07T21:57:33Z | http://arxiv.org/abs/2303.04259v1 | # Robust quantum many-body scars in the one-dimensional spin-\(1\) Kitaev model
###### Abstract
Experimental observation of coherent oscillations in a Rydberg atom chain [Bernien _et al._, Nature **551**, 579 (2017)] has led to the discovery of quantum many-body scars (QMBS) which is a new paradigm for ergodicity-breaking. The experimental findings in the Rydberg chain can be well captured by a kinetically constrained model called the "PXP" model, which has been shown to host the Eigenstate Thermalization Hypothesis (ETH)-violating scar states in the middle of the spectrum. Much effort has been put into identifying similar kinetically restricted systems that show a violation of ETH. In this work, we study the QMBS that can arise in one such model, namely the spin-\(1\) Kitaev chain, where owing to some conserved quantities, the Hilbert space gets fragmented into unequal disconnected subspaces. Recently, You _et. al_ [Phys. Rev. Research **4**, 013103 (2022)] showed that the ground state sector of this chain can be mapped exactly onto the prototypical PXP model and thus hosts QMBSs. Here, we demonstrate that the phenomenon of scarring is also present in other sectors, and in particular, we identify a sector that exhibits substantially more scarring than the ground state one. We propose an initial state and numerically demonstrate that its fidelity revivals are robust and longer-lived than those in the PXP model.
## I Introduction
Rapid improvements in the platforms for realizing and controlling non-equilibrium dynamics of closed quantum systems, such as ultracold atoms [1], trapped ions [2], nitrogen-vacancy centers [3], etc., have enabled a study of the thermalization of quantum systems isolated from external baths. A generic isolated quantum system is expected to be ergodic, i.e., under the unitary dynamics of its Hamiltonian, any initial state would eventually evolve into a featureless thermal state. This loss of information on the initial state's configuration presents a barrier to protecting quantum information. As a result, it is crucial to search for non-ergodic systems that resist thermalization. The Eigenstate Thermalization Hypothesis (ETH) [4; 5] regulates the characteristics of ergodic quantum systems and describes how far-from-equilibrium initial states evolve in time to reach a final state that is described by a thermal ensemble. ETH suggests _all_ the eigenstates of ergodic systems are thermal and thus any initial state evolves into a thermal state at long times. Two well-known exceptions to the ETH paradigm are integrable and many-body localized (MBL) systems [6; 7; 8]. In integrable systems, the presence of an extensive number of conserved quantities prevents an initial state from fully exploring all the allowed configurations in the Hilbert space. In MBL systems, the presence of interactions [9] and strong disorder [10] leads to an emergent integrability that prevents thermalization. These two ergodicity-breaking mechanisms are of the strong form in that _every_ eigenstate exhibits features of an athermal state.
Recently, experimental findings in an ultracold Rydberg atom chain [11] revealed a new mechanism for weak ergodicity-breaking. When the Rydberg atoms were initialized in a particular state, the so-called Neel state, they do not thermalize and instead display long-lived coherent oscillations. On the other hand, certain other initial states do exhibit thermal behavior. The theoretical description of the Rydberg chain is captured by the kinetically constrained "PXP" model [12]. Since the Rydberg atoms are quite large, it is energetically prohibitive to simultaneously excite two nearest neighboring atoms [13; 14]. The 'P' in the PXP is a projector that exactly projects out these configurations in which the nearest neighboring sites are both in excited states. This Rydberg blockade constraint imposes a restriction on the allowed configurations for the system which results in a constrained Hilbert space that the system can access. Numerical studies of the PXP model [15; 16; 17; 18] have revealed the presence of anomalous states at equidistant energies that have sub-extensive entanglement entropy (EE) in the otherwise thermal bulk spectrum. These special eigenstates obey the area-law of EE rather than the volume-law of EE as anticipated by ETH and have substantial overlap with the Neel state which results in the observed coherent revivals. This phenomenon is dubbed quantum many-body scars (QMBS) [15]. These scar states are vanishingly rare and typically their number grows only algebraically with system size while the Hilbert space dimension grows exponentially with system size. As a result, these scars only lead to a weak or incomplete breach of ETH.
In recent years, substantial theoretical effort [19; 20; 21; 22; 23; 24] has been put in, in tandem with experiments [25; 22], to look for systems that can support QMBS. QMBS have been identified in many-body systems, such as the Affleck-Kennedy-Lieb-Tasaki model [26; 27; 28], integer spin XY model [29; 30], \(\eta\)-pairing Hubbard model [31; 32; 33], thin torus limit of quantum hall phases [34], tilted 1D Fermi-Hubbard model [22], etc. In this work, we study QMBS in the spin-\(1\) Kitaev chain [35; 36], where previous
studies [37] have demonstrated that the PXP model is embedded in one of its subspaces, thereby making it an ideal candidate system to support QMBS. We study other subspaces (besides the one that has the PXP in it) of this model and see if they too can support QMBS. We identify a sector where the scarring is considerably stronger than that observed in the PXP model. Analogous to the Neel state of the PXP model, we propose an initial state in this sector that shows remarkably persistent fidelity oscillations.
The remainder of this paper is organized as follows. We give a brief overview of the one-dimensional (1D) spin-1 Kitaev model in Sec. II. In Sec. III.1 we study a particular sector of this model and its associated constrained dynamics and find that this subspace hosts anomalous scarred states. We identify an initial state in this subspace and numerically demonstrate that it has robust and long-lived coherent oscillations. We show that the forward scattering approximation nicely captures these scarred states. In Sec. III.2 we consider some other subspaces of the Kitaev chain and show that the fidelity oscillations of analogous initial states in these subspaces decay rapidly. Finally, we summarize our results in Sec. IV and present an outlook for the future.
## II The one-dimensional Kitaev model
The spin-1 Kitaev chain can be obtained as a single row of the two-dimensional Kitaev model [35]. We start with the general spin-\(S\) Kitaev model on the honeycomb lattice that is described by the Hamiltonian
\[H_{K}^{\rm 2D}=J_{x}\sum_{\langle i,j\rangle_{x}}S_{i}^{x}S_{j}^{x}+J_{y}\sum_{\langle i,j\rangle_{y}}S_{i}^{y}S_{j}^{y}+J_{z}\sum_{\langle i,j\rangle_{z}}S_{i}^{z}S_{j}^{z}, \tag{1}\]
where operators \(S_{j}^{a}\) (with \(a\)=\(x,y,z\)) are the spin-\(S\) operators at site \(j\) and \(\langle i,j\rangle_{a}\) denotes nearest neighbors in the \(a\)-direction. The spin operators satisfy the usual \(SU(2)\) algebra, i.e., \([S_{i}^{a},S_{j}^{b}]\)=\(i\delta_{ij}\epsilon_{abc}S_{j}^{c}\), where \(\epsilon_{abc}\) is the totally anti-symmetric Levi-Civita tensor. Setting \(J_{z}\)=0 in Eq. (1), we get a set of decoupled 1D chains, any one of which, with \(N\) sites, is described by the Hamiltonian [36]
\[H_{K}^{\rm 1D}(\{J\})=\sum_{j=1}^{N/2}(J_{2j-1}S_{2j-1}^{x}S_{2j}^{x}+J_{2j}S_{2 j}^{y}S_{2j+1}^{y}). \tag{2}\]
In general, the coupling constants \(J\)'s could be different from each other. However, throughout this work, we will consider the simple case where all \(J\)'s are equal and set to unit strength i.e., \(J_{l}=1\ \forall l\). Thus, we end up with the following Hamiltonian for the spin-\(S\) Kitaev chain
\[H_{K}^{\rm 1D}=\sum_{j=1}^{N/2}(S_{2j-1}^{x}S_{2j}^{x}+S_{2j}^{y}S_{2j+1}^{y}), \tag{3}\]
which is the model that we will work with throughout this paper. Next, we would like to find the symmetries of the Hamiltonian of Eq. (3). To do so, we define site parity operators \(\mathcal{P}_{j}^{a}\) on every site as
\[\mathcal{P}_{j}^{a}=e^{i\pi S_{j}^{a}}. \tag{4}\]
The Ising-like terms in Eq. (3) change the value of total \(S_{z}\) at the sites adjoining a link i.e., \(S_{2j-1}^{z}\)+\(S_{2j}^{z}\) at the \(x\)-link \((2j\)\(-\)\(1,2j)\) and the value of \(S_{2j}^{z}\)+\(S_{2j+1}^{z}\) at the \(y\)-link \((2j,2j\)+1), by either 0 or \(\pm 2\). Therefore the bond parity operators \(\mathcal{B}_{j}\) on odd and even bonds defined by
\[\mathcal{B}_{2j-1}=\mathcal{P}_{2j-1}^{y}\mathcal{P}_{2j}^{y},\ \ \text{and}\ \ \mathcal{B}_{2j}=\mathcal{P}_{2j}^{x}\mathcal{P}_{2j+1}^{x} \tag{5}\]
remain invariant under the action of the Hamiltonian. Thus we have
\[[\mathcal{B}_{j},H]=0,\ \forall j, \tag{6}\]
and these constitute symmetries of the spin-\(S\) Kitaev chain. By performing the following unitary transformation on the even sites [36]
\[S_{2j}^{x}\to S_{2j}^{y},\ \ \ \ S_{2j}^{y}\to S_{2j}^{x}\ \ \ \ S_{2j}^{z}\to-S_{2j}^{z}, \tag{7}\]
the Hamiltonian can be cast into the following convenient translationally invariant form
\[H=\sum_{j=1}^{N}S_{j}^{x}S_{j+1}^{y}. \tag{8}\]
Upon the unitary transformation of Eq. (7), the bond parity operators take the universal form (independent of whether bond \(j\) is even or odd)
\[\mathcal{B}_{j}=\mathcal{P}_{j}^{y}\mathcal{P}_{j+1}^{x}. \tag{9}\]
From here on, we shall restrict ourselves to the spin-1 case of our interest and work with its natural representation given by the orthonormal basis states \(\{|x\rangle,|y\rangle,|z\rangle\}\) defined as
\[|x\rangle\equiv\frac{1}{\sqrt{2}}(|-1\rangle-|1\rangle),|y\rangle \equiv\frac{i}{\sqrt{2}}(|-1\rangle+|1\rangle),|z\rangle\equiv|0\rangle\,, \tag{10}\]
where \(|m\rangle\) is the eigenstate of the spin-1 operator \(S_{i}^{z}\) with eigenvalue \(m\)=\(-1,0,1\). In this representation the spin-1 operators can be written as \((S^{a})_{bc}\)=\(-i\epsilon_{abc}\) and their matrix representation is
\[S^{x}=\begin{pmatrix}0&0&0\\ 0&0&-i\\ 0&i&0\end{pmatrix},S^{y}=\begin{pmatrix}0&0&i\\ 0&0&0\\ -i&0&0\end{pmatrix},S^{z}=\begin{pmatrix}0&-i&0\\ i&0&0\\ 0&0&0\end{pmatrix}. \tag{11}\]
Furthermore, the \(3\times 3\) matrices corresponding to the site parity operators \(\mathcal{P}^{a}\) of Eq. (4) are diagonal and given by
\[\mathcal{P}^{x}\text{=}\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&-1\end{pmatrix},\mathcal{P}^{y}\text{=}\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&-1\end{pmatrix},\mathcal{P}^{z}\text{=}\begin{pmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix}. \tag{12}\]
From this matrix representation, we can readily read off that the eigenvalues of the operators \(\mathcal{P}_{j}^{a}\) are \(\pm 1\) with the eigenvalue \(-1\) being doubly degenerate. Therefore, the eigenvalues of bond parity operators \(\mathcal{B}_{j}\) defined in Eq. (9) are also \(b_{j}\)=\(\pm 1\) since they are just products of the site parity operators. Moreover, as the site-parity operators \(\mathcal{P}_{j}^{a}\) are diagonal, they commute with each other. The bond operators \(\mathcal{B}_{j}\) being products of diagonal site-parity operators are also diagonal and commute with each other [along with the fact that they commute with the Hamiltonian as shown in Eq. (6)]. This implies the Hilbert space can be decomposed into \(2^{N}\) sectors (of unequal sizes since the eigenvalue \(-1\) is doubly degenerate) and each sector can be represented by a set of bond invariants \(\vec{b}\)\(\equiv\)\(\{b_{1},b_{2},\cdots,b_{N}\}\).
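These matrices make the conservation law of Eq. (6) straightforward to check numerically. The following sketch (illustrative code, not from the paper) builds \(H\) of Eq. (8) and the bond operators of Eq. (9) for a small periodic chain and verifies that they commute:

```python
import numpy as np
from functools import reduce

# spin-1 matrices in the {|x>, |y>, |z>} basis, Eqs. (11)-(12)
Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sy = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Px, Py, I3 = np.diag([1., -1., -1.]), np.diag([-1., 1., -1.]), np.eye(3)

def site_op(op, j, N):
    """Embed a single-site operator at site j (0-indexed) of an N-site chain."""
    return reduce(np.kron, [op if k == j else I3 for k in range(N)])

N = 4                                                    # 3**4 = 81 basis states
H = sum(site_op(Sx, j, N) @ site_op(Sy, (j + 1) % N, N) for j in range(N))   # Eq. (8), PBC
for j in range(N):
    B = site_op(Py, j, N) @ site_op(Px, (j + 1) % N, N)  # bond parity operator, Eq. (9)
    assert np.allclose(B @ H, H @ B)                     # [B_j, H] = 0 for every bond
print("all bond operators commute with H")
```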
Projection into these sectors imposes restrictions on the allowed configurations of two neighboring sites. For the nearest neighbor sites \(\langle j,j\)+\(1\rangle\) there are a total of 3\(\times\)3=9 allowed states which, based on the eigenvalue of the bond operator \(\mathcal{B}_{j}\), get fragmented into the following two sets
\[|xy\rangle\,,\ \ |xz\rangle\,,\ \ |yx\rangle\,,\ \ |zy\rangle\ \ \text{and}\ \ \ |zz\rangle\ \ \text{have}\ b_{j}\text{=}1, \tag{13}\]
and
\[|xx\rangle,\ \ |yy\rangle,\ \ |yz\rangle\ \ \text{and}\ \ |zx\rangle\ \ \text{have}\ b_{j}\text{=} \text{-}1. \tag{14}\]
The existence of these constrained subspaces makes the spin-1 Kitaev chain a viable candidate to host QMBS.
## III QMBS in the spin-\(1\) Kitaev chain
The authors of Ref. [36] showed that the ground state of the Hamiltonian of Eq. (8) lies in the subspace with all \(b_{j}\)=1. The restriction on the neighboring sites in this sector exactly mimics the Rydberg blockade constraint [36; 37]. Thus, the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) subspace can exactly be mapped into the PXP model (see App. A) and therefore hosts QMBS [37]. The corresponding Neel state for the spin-1 chain is given by \(|Z_{2}\rangle_{\text{Kitaev}}\)=\(|yx\rangle\)\(\equiv\)\(|yxyx\cdots yx\rangle\) and the fidelity for this state \(F(t)\)=\(|\langle Z_{2}|\exp(-iHt)|Z_{2}\rangle|^{2}\) gives rise to coherent oscillation as shown in Fig. 1.
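For a small chain, this fidelity can be reproduced by brute-force time evolution in the full \(3^{N}\)-dimensional space, as in the illustrative sketch below (the paper instead works within the constrained sector and with larger systems, e.g. \(N\)=22; the chain length and time grid here are for demonstration only):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

Sx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Sy = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
I3 = np.eye(3)
basis = {"x": np.array([1., 0, 0]), "y": np.array([0, 1., 0]), "z": np.array([0, 0, 1.])}

def site_op(op, j, N):
    return reduce(np.kron, [op if k == j else I3 for k in range(N)])

N = 6                                                          # 3**6 = 729 states
H = sum(site_op(Sx, j, N) @ site_op(Sy, (j + 1) % N, N) for j in range(N))
psi0 = reduce(np.kron, [basis["y" if j % 2 == 0 else "x"] for j in range(N)]).astype(complex)

for t in np.linspace(0.0, 10.0, 11):
    F = abs(psi0.conj() @ (expm(-1j * H * t) @ psi0)) ** 2     # F(t) = |<Z2|e^{-iHt}|Z2>|^2
    print(f"t = {t:5.1f}   F(t) = {F:.3f}")
```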
We will show in the subsequent sections that some other subspaces of the spin-1 Kitaev chain also harbor scarred eigenstates, though in general, it is difficult to find the corresponding spin-1/2 Hamiltonian like the PXP one as it involves complicated forms with long-range interactions. In particular, we find that the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\) sector exhibits a more pronounced scarring effect than the ground state one and we will discuss the fate of QMBS in this sector next.
### The \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\) sector
We first unravel the structure of the constrained Hilbert space of this sector. There are two types of constraints on the states of nearest neighboring sites: i) since \(b_{3j}\)=\(-1\), there are four possible states given in Eq. (14) that the neighboring sites \(\langle 3j,3j\)+\(1\rangle\) can be in, whereas, ii) since \(b_{k}\)=\(1\)\(\forall k\)\(\neq\)\(3j\), there are five possible states given in Eq. (13) that the neighboring sites \(\langle k,k\)+\(1\rangle\), \(k\)\(\neq\)\(3j\) can be in. The dimension of Hilbert space \(\mathcal{H}\) of this sector is known to be \(D(1,1,-1,1,1,-1,\cdots,1,1,-1)\)\(\approx\)\(1.55113^{N}\) for a system of size \(N\) with periodic boundary condition (PBC) [36]. In this sector, the Hamiltonian transforms the state \(|yz\rangle\) to \(|zx\rangle\) and vice-versa over the bond with \(b_{j}\)=\(-1\) and \(|zz\rangle\) to \(|yx\rangle\) and vice-versa over the bond with \(b_{j}\)=1. Therefore the Hilbert space of this sector can be constructed by taking any initial state of the sector as root state (call it \(|R\rangle\)) and successively applying the Hamiltonian on it, i.e.,
\[\mathcal{H}_{\{1,1,-1,1,1,-1,\cdots,1,1,-1\}}\equiv\text{Span}\{|R\rangle,H|R \rangle,H^{2}|R\rangle,\cdots\}. \tag{15}\]
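A minimal sketch of this construction is given below. It uses only the two local exchanges stated above (\(|zz\rangle\leftrightarrow|yx\rangle\) on \(b_{j}\)=1 bonds and \(|yz\rangle\leftrightarrow|zx\rangle\) on \(b_{j}\)=\(-1\) bonds) to enumerate the product states reachable from a root state; matrix elements and signs of \(H\) are not tracked, so this reproduces only the connectivity graph of the sector.

```python
# Enumerate the constrained sector basis of Eq. (15) by repeatedly applying
# the allowed local exchanges to a root product state (connectivity only).
def sector_basis(root, bond_pattern):
    """root: product state as a string over 'xyz'; bond_pattern: +1/-1 per bond (PBC)."""
    swaps = {+1: {("z", "z"): ("y", "x"), ("y", "x"): ("z", "z")},
             -1: {("y", "z"): ("z", "x"), ("z", "x"): ("y", "z")}}
    N = len(root)
    basis, frontier = {root}, [root]
    while frontier:
        state = frontier.pop()
        for j, b in enumerate(bond_pattern):        # bond j couples sites j and j+1
            pair = (state[j], state[(j + 1) % N])
            if pair in swaps[b]:
                a, c = swaps[b][pair]
                new = list(state)
                new[j], new[(j + 1) % N] = a, c
                new = "".join(new)
                if new not in basis:
                    basis.add(new)
                    frontier.append(new)
    return sorted(basis)

# N = 6 chain in the sector b = {1,1,-1,1,1,-1}, rooted at |yxy yxy>:
print(len(sector_basis("yxyyxy", (1, 1, -1, 1, 1, -1))))
```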
Fig. 2 shows the constrained Hilbert space and the action of the Hamiltonian in this sector for \(N\)=6 sites with PBC.
Figure 1: Fidelity of the \(|Z_{2}\rangle_{\text{Kitaev}}\) state showing periodic revivals. Data are for \(N\)=22 sites with periodic boundary conditions. The dimension of the sector \(D(1,1,\cdots,1)\)=\(39,603\).
Figure 2: The action of the Kitaev Hamiltonian of Eq. (8) on the Hilbert space of the sector \(\vec{b}\)=\(\{1,1,-1,1,1,-1\}\) for \(N\)=6 sites with periodic boundary conditions.
In the graph, each node corresponds to a product state of the subspace and the edges connect the configurations that result from a given product state due to the action of the Hamiltonian. This graph representation will be helpful in the forward scattering approximation (FSA) defined later in this section. We now study the dynamics of the basis states of this subspace using exact diagonalization of the Hamiltonian. We find that initial states of the kind \(|yxy\rangle\)\(\equiv\)\(|yxyyxy\cdots yxy\rangle\) and \(|xyx\rangle\)\(\equiv\)\(|xyxxyx\cdots xyx\rangle\) show long-lived revivals. Fig. 3(a) depicts the evolution of the initial state \(|yxy\rangle\) and a randomly chosen product state under the Kitaev Hamiltonian with \(N\)=24 sites. The \(|yxy\rangle\) state shows the hallmark of QMBS wherein fidelity oscillations are robust and long-lived. In particular, the fidelity oscillations for this state are more robust (peak heights are higher, as evidenced by the fact that the first revival peak displays \(>\)80% return probability to the initial state) and longer-lived (persist for a longer time) as compared to those of the \(|Z_{2}\rangle_{\rm Kitaev}\) state shown in Fig. 1. In sharp contrast, a random state thermalizes rapidly [see the green curve shown in Fig. 3(a)].
We can visualize the scarred dynamics in this sector as the state bouncing between the two corner states \(|yxy\rangle\) and \(|xyx\rangle\) of the Hilbert space graph shown in Fig. 2. The dotted line in Fig. 3(a), where we plot \(|\langle xyx|e^{-iHt}|yxy\rangle|^{2}\), which is the probability of finding the state in \(|xyx\rangle\) following time evolution from the initial state \(|yxy\rangle\), illustrates this back-and-forth motion. In Fig. 3(b) we plot the growth of EE with time for different initial states. The EE of a subregion \(A\) is defined as the von Neumann entropy of the reduced density matrix, \(S_{A}\)=\(-{\rm Tr}\{\rho_{A}\ln\rho_{A}\}\), where \(\rho_{A}\)=\({\rm Tr}_{A^{c}}\,\rho\) is the reduced density matrix of subsystem \(A\), obtained by tracing out its complement \(A^{c}\). For the \(|yxy\rangle\) state, along with an increase as a function of time, the EE mirrors the oscillations seen in the fidelity. Moreover, the rate at which EE grows in the \(|yxy\rangle\) state is much smaller as compared to that of a randomly chosen state, suggesting that the initial \(|yxy\rangle\) state results in non-ergodic behavior.
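The bipartite EE of a pure state can be computed from its Schmidt decomposition; the sketch below assumes the state vector is given in the full product basis of the two subsystems (states of a constrained sector must first be embedded back into that basis).

```python
# Bipartite von Neumann entropy S_A = -Tr(rho_A ln rho_A) of a pure state.
# psi: state vector of length dim_A * dim_B in a product basis (for the
# spin-1 chain, dim_A = 3**N_A and dim_B = 3**N_B).
import numpy as np

def entanglement_entropy(psi, dim_A, dim_B):
    M = psi.reshape(dim_A, dim_B)           # |psi> = sum_{a,b} M_{ab} |a>|b>
    s = np.linalg.svd(M, compute_uv=False)  # Schmidt coefficients
    p = s ** 2                              # eigenvalues of rho_A
    p = p[p > 1e-12]                        # drop numerical zeros
    return float(-(p * np.log(p)).sum())
```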
Thermalization and its breakdown can also be probed by measuring the spread of EE of eigenstates. ETH predicts a "volume-law" scaling of EE, i.e., for a 1D system EE scales linearly with the size of the subsystem. Fig. 4 demonstrates that the bipartite (equipartitioned) \(S\) of the majority of the eigenstates of the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\) sector does exhibit the volume-law behavior consistent with the prediction of ETH. However, in addition to the bulk of highly entangled states, there are outliers over the entire range of the spectrum that have much lower entropy and thereby violate the volume law predicted by ETH.
The fidelity oscillations observed in Fig. 3(a) arise precisely due to the existence of this relatively small number of athermal eigenstates that are spread throughout the bulk spectrum but carry low EE. These scarred states have anomalously high overlap with the initial product state \(|yxy\rangle\) as shown in Fig. 5. Like in other models hosting QMBS [15; 34], the projection onto the \(|yxy\rangle\) state displays towers of special _equispaced_ (with the spacing in energy determining the inverse time period of oscillations seen in the fidelity) eigenstates having anomalously high overlap with the initial product state \(|yxy\rangle\). The observed coherent oscillations in fidelity, sub-thermal entanglement entropy of certain eigenstates, and anomalously large overlap of these eigenstates with a particular initial product state result in the non-ergodic dynamics and establish the existence of QMBS in this subspace of the spin-1 Kitaev chain.
Furthermore, as has been demonstrated for the PXP model, the topmost state in the towers of scarred eigenstates in this sector can be well approximated using the so-called Forward Scattering Approximation (FSA) [15]. The FSA mechanism involves constructing an approximate Hamiltonian whose eigenstates reproduce the scarred states.
Figure 3: (a) Fidelity dynamics for the initial state \(|yxy\rangle\) and a randomly chosen product state. The former shows coherent oscillations whereas the latter thermalizes rapidly. The black dotted lines show the probability of state transfer between \(|yxy\rangle\) and \(|xyx\rangle\). (b) Entanglement entropy \(S\) for the same initial states. The entropy of the \(|yxy\rangle\) state grows linearly with time, but it also shows oscillations due to scarring. Data are shown for \(N\)=24 sites with PBC.
We start by splitting the Hamiltonian into forward and backward propagating parts as \(H\)=\(H^{+}\)+\(H^{-}\), where
\[\begin{split} H^{+}=\sum_{i=1,4,7,\cdots}|yx\rangle\langle zz|&+\sum_{i=2,5,8,\cdots}|zz\rangle\langle yx|\\ &-\sum_{i=1,4,7,\cdots}|yz\rangle\langle zx|\end{split} \tag{16}\]
\[\begin{split} H^{-}=\sum_{i=1,4,7,\cdots}|zz\rangle\langle yx|&+\sum_{i=2,5,8,\cdots}|yx\rangle\langle zz|\\ &-\sum_{i=1,4,7,\cdots}|zx\rangle\langle yz|.\end{split} \tag{17}\]
Then we construct the basis vectors \(|0\rangle,|1\rangle,\cdots,|N\rangle\) of the effective Hamiltonian \(H_{\text{FSA}}\), where \(|0\rangle\)\(\equiv\)\(|yxy\rangle\) and \(|n\rangle\)=\((1/\sqrt{c_{n}})(H^{+})^{n}|yxy\rangle\) (\(c_{n}\) is the normalization constant). In the Hilbert space graph shown in Fig. 2, the action of \(H^{+}\) corresponds to moving from left to right (right to left for \(H^{-}\)) and \(H^{+}\) annihilates the \(|xyx\rangle\) state (\(H^{-}\) annihilates the \(|yxy\rangle\) state). Therefore, starting from the \(|yxy\rangle\) state the FSA recurrence closes after \(N\)+1 steps once the forward propagation reaches the \(|xyx\rangle\) state at the opposite end of the Hilbert space graph shown in Fig. 2. The approximation in FSA entails that the Hilbert space of basis states \(|0\rangle,|1\rangle,\cdots,|N\rangle\) is closed under the action of the Kitaev Hamiltonian of Eq. (8). The action of \(H\) on these basis states is given by
\[\begin{split} H|n\rangle=& H^{+}|n\rangle+H^{-}|n \rangle\\ =&\beta_{n+1}|n+1\rangle+H^{-}|n\rangle,\end{split} \tag{18}\]
where \(\beta_{n}\)=\(\sqrt{c_{n}/c_{n-1}}\). Thus to make the Hilbert space closed under the action of the Hamiltonian we have to approximate
\[H^{-}|n\rangle\approx\beta_{n}|n-1\rangle. \tag{19}\]
Using Eqs. (18) and (19) the Hamiltonian takes the form of the following tridiagonal matrix which is the FSA Hamiltonian
\[H_{\text{FSA}}=\begin{pmatrix}0&\beta_{1}&&\\ \beta_{1}&0&\beta_{2}&&\\ &\beta_{2}&0&\ddots&\\ &&\ddots&\ddots&\beta_{N}\\ &&&\beta_{N}&0\end{pmatrix}. \tag{20}\]
As shown by the cross marks in Fig. 5, the eigenstates of the FSA Hamiltonian of Eq. (20) provide an excellent approximation to the special scarred eigenstates of the Kitaev Hamiltonian in the sector \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\).
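For concreteness, the FSA recursion of Eqs. (18)-(20) can be coded directly once \(H^{+}\) is available as a matrix in the sector basis; the sketch below builds the normalized states \(|n\rangle\) and the tridiagonal matrix of the \(\beta_{n}\), using \(\beta_{n+1}=\lVert H^{+}|n\rangle\rVert\).

```python
# Sketch of the forward-scattering approximation, Eqs. (18)-(20).
# H_plus: matrix of H^+ in the sector basis; root: vector of |0> = |yxy...>.
import numpy as np

def fsa_hamiltonian(H_plus, root, n_steps):
    vecs, betas = [root / np.linalg.norm(root)], []
    for _ in range(n_steps):
        v = H_plus @ vecs[-1]
        beta = np.linalg.norm(v)        # beta_{n+1} = sqrt(c_{n+1}/c_n)
        if beta < 1e-12:
            break                        # recurrence closes at the far corner |xyx...>
        vecs.append(v / beta)
        betas.append(beta)
    H_fsa = np.diag(betas, k=1)
    return H_fsa + H_fsa.T, np.array(vecs)

# eigenstates of H_fsa, expanded in the |n> basis, approximate the scar tower
```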
### QMBS in other sectors
We have also looked for the possibility of scarring in other sectors of the Kitaev chain by studying the dynamics from different initial product states. Amongst all the initial states and sectors we considered, we found that the initial product state \(|yyxx\rangle\)\(\equiv\)\(|yyxxyyxx\cdots yyxx\rangle\) in the sector \(\vec{b}\)=\(\{-1,1,-1,1,\cdots,-1,1\}\) and the state \(|yyyx\rangle\)\(\equiv\)\(|yyyxyyyx\cdots yyyx\rangle\) in the sector \(\vec{b}\)=\(\{-1,-1,1,1,-1,-1,1,1,\cdots,-1,-1,1,1\}\) also show oscillations in the fidelity. In Fig. 6 we plot the
Figure 4: Bipartite (equipartition) entanglement entropy of the eigenstates of the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\) sector in the spin-1 Kitaev model as a function of energy. The bulk states satisfy the volume law of EE; however, there are several states that reside over the entire range of the spectrum and carry low EE and thereby violate ETH. Data are shown for \(N\)=24 sites with PBC. The color scale on the right indicates the density of the data points.
Figure 5: Density plot showing the overlap of the initial product state \(|yxy\rangle\) with the eigenstates of the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots,1,1,-1\}\) sector in the spin-1 Kitaev model. The cross marks denote the overlap with the eigenstates of the FSA Hamiltonian [see Eq. (20)], which very well approximate the topmost states in the towers of scar states.
return probabilities for the aforementioned initial states. We note that the oscillations are weaker (as evidenced by the peak heights) and decay much faster in these sectors. One way to understand this is that the FSA does not work well in these sectors as shown in Fig. 7.
The authors of Ref. [20] showed that Hamiltonians hosting QMBS support an emergent approximate \(SU(2)\) symmetry within a subspace of the Hilbert space. The revivals from the initial product states can then be thought of as the coherent rotation of the large \(SU(2)\) degree of freedom. In the \(SU(2)\) algebra, \(H^{+}\) and \(H^{-}\) act as an analog of the raising and lowering operators. Their commutator \(H^{z}\)=\([H^{+},H^{-}]\) acts as \(S^{z}\) operator, and the FSA states are its eigenstates. However, in Eq. (19) we saw that \(H^{-}\) only approximately inverts the action of \(H^{+}\), which is why the algebra is not exact and perfect revivals are not observed. We expect that if the FSA gives a good representation of the exact scar states, then one sees strong revivals. Otherwise, if the FSA does not represent the scar states well, the fidelity decays quickly. This is consistent with our numerical observations.
These results also show that it is not necessarily the case that the more constrained a Hilbert space is, the more scarring it shows. In general, the larger the number of bonds with \(b_{j}\)=\(-1\), the larger the number of constraints and the fewer the number of states in the corresponding Hilbert space. Nevertheless, as we have shown above, the more constrained sector \(\vec{b}\)=\(\{-1,-1,1,1,-1,-1,1,1,\cdots\}\) shows less scarring than the less constrained \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots\}\) sector. The strength of scarring is determined by the structure of the graph of the corresponding Hilbert space and how well the FSA works there.
Figure 7: Density plot of the overlap of the initial product states that show fidelity oscillations in particular sectors (see Fig. 6) of the spin-1 Kitaev model with all eigenstates in that sector. The crosses show the overlap with eigenstates of the forward-scattering approximation Hamiltonian \(H_{\text{FSA}}\). The overlap of the scar states with the product states fails to match the magnitude as predicted by FSA which is consistent with the observation of a faster decay of fidelity in these states. Data are shown for (a) \(\vec{b}\)=\(\{-1,1,-1,1,\cdots,-1,1\}\) and (b) \(\vec{b}\)=\(\{-1,-1,1,1,-1,-1,1,1,\cdots,-1,-1,1,1\}\) sectors of the spin-1 Kitaev chain of \(N\)=24 sites.
Figure 6: Fidelity of the state (a) \(|yyxx\rangle\) which lies in the sector \(\vec{b}\)=\(\{-1,1,-1,1,\cdots,-1,1\}\) (b) \(|yyyx\rangle\) which lies in the sector \(\vec{b}\)=\(\{-1,-1,1,1,-1,-1,1,1,\cdots,-1,-1,1,1\}\). The fidelity oscillations are weaker (peak heights are reduced) and decay faster in these sectors as compared to the ones shown in Fig. 3. Data are shown for \(N\)=24 sites.
We note here that we have checked that all the sectors of the spin-1 Kitaev chain we considered do not show Poisson level statistics, which rules out an integrability-based explanation for the athermal behavior we observe.
## IV Summary and conclusion
In this paper, we studied the time evolution of initial product states in constrained sectors of the spin-1 Kitaev chain, whose Hilbert space is fragmented into \(2^{N}\) unequal subspaces. We found that the \(|yxy\rangle\) state in the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots\}\) sector showed the most prominent coherent oscillations in fidelity when evolved under the Kitaev Hamiltonian, more so than even the \(|Z_{2}\rangle_{\text{Kitaev}}\) state of the celebrated PXP model, which is embedded in the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) sector that hosts the ground state of the spin-1 Kitaev chain. The coherent dynamics in the \(\vec{b}\)=\(\{1,1,-1,1,1,-1,\cdots\}\) sector were characterized by special eigenstates that have anomalously low entanglement entropy and high overlap with the initial \(|yxy\rangle\) state. We also showed that these special scarred states can be well approximated by the FSA. Furthermore, using the FSA, we explained why certain other sectors do not show long-lived oscillations in fidelity. It would be interesting to see if the scarring phenomena we observed can be understood using analytical techniques such as the recently proposed broken unitary picture of dynamics in QMBS [38], by interpreting them as a one-dimensional chiral scattering problem [39], or via projector-embedding [40] or commutant algebras [41].
The Kitaev chain provides a model system and framework to study the dynamics of constrained systems. Here, we only looked at the spin-1 chain and it would be interesting to look at higher spins and see whether they exhibit QMBS. Another potential avenue that could be worth studying in the future is to explore the existence of QMBS in higher dimensions and/or in different geometries.
###### Acknowledgements.
We acknowledge useful discussions with Diptiman Sen, Kartiek Agarwal, Sanjay Moudgalya, and Zlatko Papic. Computational portions of this research work were conducted using the Nandadevi supercomputer, which is maintained and supported by the Institute of Mathematical Science's High-Performance Computing Center.
## Appendix A Mapping the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) sector to the PXP model
In this appendix, we show that the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) sector of the spin-1 Kitaev chain can be mapped to the PXP model. Since all \(b_{j}\)=1 in this sector, there are five allowed states for any pair of nearest neighbor sites \(\langle j,j\)+1\(\rangle\) as shown in Eq. (13). Owing to these constraints, configurations of neighboring sites can be written in terms of spin-1/2 degrees of freedom on the dual lattice (for each bond between sites \(j\) and \(j\)+1 on the primal lattice, on the dual lattice we define a site at \(\{j\)+1/2\(\}\)) of \(N\) sites using the following mapping
\[\begin{split}|yx\rangle_{j,j+1}&\to|\uparrow \rangle_{j+\frac{1}{2}}\\ |zy\rangle_{j,j+1}&\to|\downarrow\rangle_{j+\frac{1}{2}} \\ |xz\rangle_{j,j+1}&\to|\downarrow\rangle_{j+\frac{1}{2}} \\ |zz\rangle_{j,j+1}&\to|\downarrow\rangle_{j+\frac{1}{2}} \\ |xy\rangle_{j,j+1}&\to|\downarrow\rangle_{j+\frac{1}{2}}.\end{split} \tag{16}\]
This mapping does not allow the nearest neighbors on the dual lattice to be in the configuration \(|\uparrow\uparrow\rangle\) which is precisely the Rydberg blockade constraint (no two nearest neighbors are simultaneously in the excited state) that is implemented in the PXP model. Though the mapping in Eq. (16) appears to be many-to-one it is not. The reverse mapping from the dual lattice to the spin-1 primal lattice is given by
\[\begin{split}|\downarrow\uparrow\rangle_{j-\frac{1}{2},j+\frac{1 }{2}}&\to|y\rangle_{j}\\ |\uparrow\downarrow\rangle_{j-\frac{1}{2},j+\frac{1}{2}}& \to|x\rangle_{j}\\ |\downarrow\downarrow\rangle_{j-\frac{1}{2},j+\frac{1}{2}}& \to|z\rangle_{j},\end{split} \tag{17}\]
which ensures that the mapping is one-to-one. Note that a similar mapping was used in Ref. [34] to map the thin-torus limit of the pair-hopping Hamiltonian of the \(\nu\)=1/3 fractional quantum Hall effect to the PXP model. With this mapping, the (non-vanishing) action of the spin-1 Kitaev model on the primal lattice leads to the following terms in the Hamiltonian \(\mathcal{H}_{\{1,1,\cdots,1\}}\) in the dual space
\[\begin{split}& H_{j,j+1}\,|\ast\,\overset{j}{y}\,\overset{j+1}{x}\,\ast\rangle=|\ast\,\overset{j}{z}\,\overset{j+1}{z}\,\ast\rangle\\ &\implies\mathcal{H}_{j+\frac{1}{2}}\,|\downarrow\,\overset{j+\frac{1}{2}}{\uparrow}\,\downarrow\rangle=|\downarrow\,\overset{j+\frac{1}{2}}{\downarrow}\,\downarrow\rangle\\ & H_{j,j+1}\,|\ast\,\overset{j}{z}\,\overset{j+1}{z}\,\ast\rangle=|\ast\,\overset{j}{y}\,\overset{j+1}{x}\,\ast\rangle\\ &\implies\mathcal{H}_{j+\frac{1}{2}}\,|\downarrow\,\overset{j+\frac{1}{2}}{\downarrow}\,\downarrow\rangle=|\downarrow\,\overset{j+\frac{1}{2}}{\uparrow}\,\downarrow\rangle,\end{split} \tag{18}\]
where \(|\ast\rangle\) corresponds to any allowed configuration on the sites that respect the above-mentioned constraints. The terms in the Hamiltonian of Eq. (18) are exactly the ones that appear in the PXP model, which is given by
\[\mathcal{H}_{\{1,1,\cdots,1\}}=\sum_{j=1}^{N}P_{j-1}\sigma_{j}^{x}P_{j+1}, \tag{19}\]
where the Pauli operators are defined in the usual way with \(\sigma^{x}\)=\((|\downarrow\rangle\langle\uparrow|+|\uparrow\rangle\langle\downarrow|)\) and \(\sigma^{z}\)=\((|\uparrow\rangle\langle\uparrow|-|\downarrow\rangle\langle \downarrow|)\), and the projectors \(P_{j}\)=\((1-\sigma_{j}^{z})/2\) ensure that no two spin-1/2
nearest neighbors are simultaneously in the excited \(|\uparrow\rangle\) state. Thus, the spin-1 Kitaev chain Hamiltonian restricted to the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) sector (which hosts its ground state) exactly maps into the PXP model.
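A dense-matrix construction of the PXP Hamiltonian of Eq. (19) is sketched below for a small chain with periodic boundary conditions (sparse storage is needed for larger systems); combined with the dual-lattice dictionary above, it reproduces the dynamics of the \(\vec{b}\)=\(\{1,1,\cdots,1\}\) sector.

```python
# PXP Hamiltonian of Eq. (19): H = sum_j P_{j-1} sigma^x_j P_{j+1}, PBC.
import numpy as np
from functools import reduce

def pxp_hamiltonian(N):
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    P = (np.eye(2) - sz) / 2                     # projector onto |down>
    I = np.eye(2)
    def chain_op(ops):                           # tensor product over N sites
        site = {j: o for j, o in ops}
        return reduce(np.kron, [site.get(j, I) for j in range(N)])
    H = np.zeros((2 ** N, 2 ** N))
    for j in range(N):
        H += chain_op([((j - 1) % N, P), (j, sx), ((j + 1) % N, P)])
    return H

H = pxp_hamiltonian(8)   # 256 x 256 matrix for N = 8
```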
|
2301.04420 | Combining Self-labeling with Selective Sampling | Since data is the fuel that drives machine learning models, and access to
labeled data is generally expensive, semi-supervised methods are constantly
popular. They enable the acquisition of large datasets without the need for too
many expert labels. This work combines self-labeling techniques with active
learning in a selective sampling scenario. We propose a new method that builds
an ensemble classifier. Based on an evaluation of the inconsistency of the
decisions of the individual base classifiers for a given observation, a
decision is made on whether to request a new label or use the self-labeling. In
preliminary studies, we show that naive application of self-labeling can harm
performance by introducing bias towards selected classes and consequently lead
to skewed class distribution. Hence, we also propose mechanisms to reduce this
phenomenon. Experimental evaluation shows that the proposed method matches
current selective sampling methods or achieves better results. | Jędrzej Kozal, Michał Woźniak | 2023-01-11T11:58:45Z | http://arxiv.org/abs/2301.04420v1 | # Combining Self-labeling with Selective Sampling
###### Abstract
Since data is the fuel that drives machine learning models, and access to labeled data is generally expensive, semi-supervised methods are constantly popular. They enable the acquisition of large datasets without the need for too many expert labels. This work combines self-labeling techniques with active learning in a selective sampling scenario. We propose a new method that builds an ensemble classifier. Based on an evaluation of the inconsistency of the decisions of the individual base classifiers for a given observation, a decision is made on whether to request a new label or use the self-labeling. In preliminary studies, we show that naive application of self-labeling can harm performance by introducing bias towards selected classes and consequently lead to skewed class distribution. Hence, we also propose mechanisms to reduce this phenomenon. Experimental evaluation shows that the proposed method matches current selective sampling methods or achieves better results.
## 1 Introduction
Active learning [1] is the area of machine learning where a training set is constructed by selecting the most informative samples that can speed up training. New labeled learning examples are obtained by querying, i.e., requesting ground truth labels from an oracle. To create a query, we use a model trained with a small number of labeled samples. Stream-Based Selective Sampling [1] is based on the assumption that acquiring new unlabeled training examples is relatively inexpensive. We process a single sample at a time and decide whether it should be labeled by the oracle or discarded. In this work, we propose a new method that combines self-labeling with active learning in the Stream-Based Selective Sampling scenario.
An overview of our method is provided in Fig. 1. We hope that this approach could allow for the cost-efficient creation of bigger labeled datasets. Self-labeling could introduce noisy labels into the dataset [14], as in most cases, models have non-zero classification error. In [1], various types of noise and their impact on deep learning performance were analyzed. It was found for various noise types that test accuracy decreased with an increasing noise ratio and increased with an increasing dataset size. In self-labeling, errors made by the classifier introduce wrong labels to the dataset, but as we label new samples, the overall data volume increases. If the gain in accuracy from increasing the dataset size surpasses the performance loss from the wrong labels, we can use self-labeling to boost classification performance, and in this work we exploit this dynamic. The main contributions of this work are the following:
* An analysis of the problems that arise when we apply self-labeling in an active learning scenario
* A new method, based on a classifier committee, that integrates self-labeling into active learning
* A thorough experimental evaluation with multiple datasets and settings
Figure 1: Overview of the proposed method. We utilize ensemble predictions to determine whether a given sample could be added to the dataset with the predicted label (self-labeling) or should be labeled by the oracle (active learning). More specifically, we check if the obtained support exceeds a predefined threshold and if all confident predictions return the same class. If not, we check if the budget is available, create a query, and train with bootstrapping. Otherwise, we filter out and drop samples from the current majority class (prior filter) and perform bootstrapped training with the label obtained from the prediction.
## 2 Related Works
### Active learning
The most popular active learning strategy is based on uncertainty sampling [14], where model supports are utilized as an information source about learning example usefulness. The fragment of the sample space where the support for samples is low is called the region of uncertainty [1]. This concept was used in [14] to select samples for labeling with the lowest difference between the computed support and a predefined threshold. In [15], the authors proposed uncertainty sampling with a variable threshold for data stream mining. In margin sampling [13], queries are created by selecting samples with the smallest difference in probabilities of the two classes with the largest confidence. It was shown in [12] that this algorithm performs on par with more computationally expensive ensemble-based methods. In [13], a modification of standard margin sampling was proposed, where samples are selected based on the smallest classification margin over all models in the ensemble. Query by Committee algorithms measure disagreement between members of an ensemble to choose the most informative samples. In vote entropy [1], samples are selected based on the entropy of the ensemble vote distribution. A modified version of this algorithm [15] selects samples with the highest average prediction entropy. Another possible disagreement measure is maximum disagreement sampling [16], where KL divergence is used.
### Self-labeling
Self-supervised learning [15] aims at learning valuable data representations without annotation. To obtain good representations, we need to define some pretext tasks for a model to solve. The first attempts at self-supervised learning involved auto-encoders [1], patch location prediction [17], inpainting [11], or rotation prediction [1]. Clustering was utilized as a pretext task [1, 1] for training deep data representations. Another research direction is contrastive learning [1], where the pretext task is based on learning close representations for the same sample with different augmentations applied and distant representations for dissimilar samples. Pseudo-labels are also used for semi-supervised learning [14]. The authors of [14] utilize the outputs of the classifier with the highest confidence as targets for unlabeled data. In [15], new methods were proposed that utilize high-confidence pseudo-labels generated with weakly-augmented images. Next, these pseudo-labels are used as a target for the strongly-augmented version of the same image.
### Active learning with Self-labeling
In [20], a combination of automatic pseudo-labeling and active learning was proposed for a pool-based setting. The authors found that the utilization of pseudo-labels can improve the labeling efficiency of active learning algorithms and that error rates of automatically assigned labels are low for convolutional neural networks. The authors of [13] introduce a new method that combines semi-supervised learning with pseudo-labels and active learning. Korycki and Krawczyk [15] combined self-labeling with active learning for learning from data streams.
## 3 Method
In this section, we describe the setting and introduce our method. We also provide results of preliminary experiments with the dynamic imbalance.
### Selective sampling
First, we introduce the selective sampling framework that our work is based on. We assume access to a small set of labeled data \(\mathcal{L}=\{(\mathbf{x}_{m},y_{m})\}_{m=1}^{M}\) and a stream of unlabeled data \(\mathcal{U}=\{\mathbf{x}_{n}\}_{n=1}^{N}\) with \(\mathbf{x}\in\mathcal{X},y\in\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are the input space and the set of labels, respectively. Our goal is to train a model \(f\) that predicts the support for an input sample \(\mathbf{x}\), namely \(p(\mathbf{y}|\mathbf{x})=f_{\theta}(\mathbf{x})\), where \(\theta\) is the set of model parameters. The final model prediction is given by \(\hat{y}=\arg\max_{i}p(\mathbf{y}_{i}|\mathbf{x})\). We denote the maximum support for sample \(\mathbf{x}\) as \(\max_{i}p(\mathbf{y}_{i}|\mathbf{x})\). The general algorithm for selective sampling is provided in appendix A. We assume the same cost of obtaining a label from the oracle for each sample in \(\mathcal{U}\). For this reason, we define the budget \(B\) as the number of samples that can be labeled, except when presenting results, where we refer to the budget as the fraction of all samples that can be labeled.
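The following sketch illustrates this setting (it is not the full SL2S algorithm described later): the model is fitted on the labeled seed and then processes the unlabeled stream one sample at a time, querying the oracle while the budget lasts. The `informative` criterion and the `oracle` call are placeholders.

```python
# Minimal stream-based selective sampling loop (a sketch of the setting,
# not the proposed method); `oracle` and `informative` are placeholders.
import numpy as np

def selective_sampling(model, X_seed, y_seed, stream, oracle, budget, informative):
    X, y = list(X_seed), list(y_seed)
    model.fit(np.array(X), np.array(y))
    for x in stream:                           # one unlabeled sample at a time
        support = model.predict_proba(x.reshape(1, -1))[0]
        if budget > 0 and informative(support):
            X.append(x)
            y.append(oracle(x))                # query the ground-truth label
            budget -= 1
            model.fit(np.array(X), np.array(y))
        # otherwise the sample is discarded
    return model
```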
### Informativeness computation
We employ differences in supports obtained from different models in an ensemble as a source of informativeness. First, the ensemble of \(L\) base classifiers is trained. For the unlabeled sample \(\mathbf{x}\), each model \(l\) in the committee computes supports \(p_{l}(\mathbf{y}|\mathbf{x})\). Next, we check whether more than half of the classifiers in the ensemble provide supports that exceed a predefined support threshold \(\tau\).
\[\sum_{l}\mathds{1}_{\max_{c}p_{l}(\mathbf{y}_{c}|\mathbf{x})>\tau}>\frac{L}{2} \tag{1}\]
If more than half of the models return confident predictions and these models output the same prediction, we add \((\mathbf{x},\hat{y})\) to \(\mathcal{L}\). Otherwise, we query an oracle with \(\mathbf{x}\). By choosing samples with consistent, highly confident predictions, we avoid assigning the wrong label to a sample. From an active learning perspective, these learning examples are not valuable, as models already return confident predictions for them. However, we hope that a faster increase in the number of labeled samples yields improvements in classification accuracy.
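A short sketch of this decision rule (Eq. (1)) is given below; `supports` collects the class supports returned by the \(L\) committee members for a single sample.

```python
# Consensus check of Eq. (1): self-label only if more than half of the
# committee members are confident and all confident members agree.
import numpy as np

def self_label_decision(supports, tau):
    supports = np.asarray(supports)              # shape (L, n_classes)
    confident = supports.max(axis=1) > tau
    if confident.sum() > len(supports) / 2:
        preds = supports[confident].argmax(axis=1)
        if np.all(preds == preds[0]):
            return int(preds[0])                 # label to add to the dataset
    return None                                  # ask the oracle instead
```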
### Bootstrapped training
We train the initial models with bootstrapping of the labeled part of the data \(\mathcal{L}\). This corresponds to sampling the number of repeats of each sample from a Poisson distribution with \(\lambda=1\). This part of our method is inspired by the Online Bagging (Oza & Russell, 2001) method, introduced for data stream classification.
During training with an unlabeled stream, we use bootstrapping for new learning examples added to the dataset. In the case of training with a ground truth label from the oracle, we use \(\lambda=1\). When updating the dataset with a sample labeled based on the model prediction, we calculate \(\lambda\) as:
\[\lambda=\frac{\max_{l,c}p_{l}(\mathbf{y}_{c}|\mathbf{x})}{\tau}-\mathds{1}_{B=0} \tag{2}\]
where \(\tau\) is the same threshold used earlier for selecting confident predictions. When \(B>0\), \(\lambda\) is always greater than one. As a result, samples labeled based on model predictions will be added to the dataset more frequently than learning examples from the initial dataset \(\mathcal{L}\). To avoid the negative influence of incorrect model predictions after the budget has ended, we change the \(\lambda\) calculation once the labeling budget is spent. In such a case \(\lambda<1\), assuming that the value of \(\tau\) is not significantly lower than the obtained support. Consequently, updates to the datasets are still performed, while the negative impact of incorrect predictions is limited. The value of \(\lambda\) for each sample is stored in the \(\mathbf{\lambda}\) vector. Upon an update, we generate separate datasets by bootstrapping. The number of repeats of a single sample in a dataset is limited to 4. The ensemble training procedure for the proposed method is given in Algorithm 1.
```
0:\(\mathcal{L}\) - set of labeled data with \(M\) elements, \(\{f_{\theta}\}_{L}\) - ensemble of \(L\) models, \(\mathbf{\lambda}\) - vector with parameters for Poisson distribution for each sample in \(\mathcal{L}\)
1:for\(l\in\{0,L\}\)do
2:\(\mathbf{r}\thicksim Pois(\mathbf{\lambda})\)
3:\(\mathbf{r}\leftarrow\min(\mathbf{r},4)\)
4:\(D\leftarrow\emptyset\)
5:for\(i\in\{0,M\}\)do
6:\((\mathbf{x},y)\leftarrow\mathcal{L}_{i}\)
7:for\(j\in\{0,r_{i}\}\)do
8:\(D\gets D\cup\{(\mathbf{x},y)\}\)
9:endfor
10:endfor
11: train \(f_{\theta_{l}}\) with \(D\)
12:endfor
```
**Algorithm 1** Bootstrapped training
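A Python sketch of Eq. (2) and Algorithm 1 is given below; the function and variable names are illustrative, and each base classifier is assumed to be refitted on its own bootstrapped copy of the data.

```python
# Sketch of Eq. (2) and Algorithm 1: lambda controls how often a self-labeled
# example is replicated when each committee member resamples its bootstrap
# copy of the dataset (at most 4 repeats per sample).
import numpy as np

def compute_lambda(supports, tau, budget):
    # supports: (L, n_classes) ensemble outputs for the self-labeled sample
    return np.max(supports) / tau - (1.0 if budget == 0 else 0.0)

def bootstrap_dataset(X, y, lambdas, rng):
    # X, y: numpy arrays; lambdas: per-sample Poisson parameters; rng = np.random.default_rng()
    repeats = np.minimum(rng.poisson(lambdas), 4)
    idx = np.repeat(np.arange(len(X)), repeats)
    return X[idx], y[idx]

# each base classifier f_l is then refitted on its own bootstrap_dataset(...)
```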
### Dynamic Imbalance
Naive usage of self-labeling in selective sampling can introduce imbalance in the training set \(\mathcal{L}\). To demonstrate this, we conduct a preliminary experiment with synthetic data. We generate simple datasets by sampling from 2D Gaussian distribution for easier visualization. We study two scenarios that may occur in practice. In the first case, the dataset consists of three balanced classes, but one of the classes is easier to learn than the rest. In the second scenario dataset with two classes is imbalanced.
We sample 300 learning examples, plotted in Fig. 2 on the left-hand side. These datasets are used for initial training of a Multi-layer Perceptron with a single hidden layer of 5 neurons. Next, we generate a stream with 3000 learning examples, sample data from the stream, and obtain model predictions. When model confidence exceeds 0.95, we expand the training set with the learning examples and their predicted labels. For simplicity, we do not use bootstrapping in this experiment. When model confidence is below 0.7, we create a query to obtain a ground-truth label. Changes in the percentage of labels in the training set during training with an unlabeled data stream are presented in Fig. 2 in the middle.
In the first scenario, over time the percentage of samples labeled as the third class grows until it makes up approximately 40% of all data. This result shows that naive utilization of self-labeling can disturb the class distribution, even if the original data is balanced. In the second scenario, the initial imbalance ratio is 1:4; however, after approximately 800 iterations, it is closer to 1:5. This shows that an initial bias in the data distribution can be strengthened by self-labeling. Please note that in this experiment the algorithm has access to the ground truth labels by creating queries for samples with low confidence, and yet the class distribution changes over time.
### Prior filter
To address the issue of dynamic imbalance, we introduce a method that prevents training when the current prior estimation for the predicted class is too high. We use the last \(k\) labels from \(\mathcal{L}\) and compute the percentage of samples that have the same label as the predicted class:
\[\hat{p}=\frac{1}{k}\sum_{i=M-k}^{M}\mathds{1}_{y_{i}=\hat{y}} \tag{3}\]
This value can be interpreted as an estimation of the current class prior. Only the last \(k\) labels are used because, as shown in the preliminary experiments, the class distribution can change over time. We compute the difference between \(\hat{p}\) and the prior of a perfectly balanced dataset:
\[\Delta_{p}=\hat{p}-\frac{1}{C} \tag{4}\]
where \(C\) is the number of classes. When \(\Delta_{p}>0\), we disallow training. We do not apply this prior filter to labels obtained from the oracle. A similar approach was proposed earlier in (Komorniczak _et al._, 2022) in the context of data stream processing; however, it estimated the prior with regression models and switched labels for the majority class. Here we estimate the prior directly from model predictions and skip samples from majority classes, which is similar to undersampling.
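A sketch of the prior filter is given below; the names are illustrative, and `recent_labels` holds the labels of the last samples added to \(\mathcal{L}\).

```python
# Prior filter of Eqs. (3)-(4): estimate the prior of the predicted class
# from the last k labels and skip the self-labeled update when that class
# is already over-represented.
import numpy as np

def prior_filter_allows_update(recent_labels, predicted_class, n_classes, k=50):
    window = np.asarray(recent_labels[-k:])
    p_hat = np.mean(window == predicted_class)   # Eq. (3)
    delta_p = p_hat - 1.0 / n_classes            # Eq. (4)
    return delta_p <= 0                          # train only if not over-represented
```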
We repeat the previous preliminary experiments with the prior filter applied, using the last \(k=50\) samples. Results are plotted in Fig. 2 on the right-hand side. The proposed method can keep the class distribution balanced in the first setting and, over time, improve the initial class distribution in the second setting.
### Self-labeling selective sampling
The complete algorithm for Self-labeling selective sampling (SL2S) along with time complexity analysis is provided in appendix B.
## 4 Experimental Setup
This section provides a detailed description of the methods, datasets, and tools used to conduct experiments.
### Datasets
We utilize datasets from the UCI repository [1], covering a wide range of sizes, numbers of classes, numbers of attributes, and imbalance ratios (IR). Detailed information about the data used in the experiments is presented in Tab. 1. The complete list of features and the procedures for loading the data are provided in appendix C.
### Metrics and evaluation
Due to the high values of IR for some datasets, we decided to employ balanced accuracy [1] as the primary performance metric for our experiments. In our evaluation, we focus on the impact of the budget size and the seed size used for training the initial model, as these two factors can impact the results the most. All metric values reported in this paper were obtained with a separate test set. The code was implemented in Python with the scikit-learn library [1]. The codebase with the method and experiment implementations is available on GitHub 1.
Footnote 1: [https://github.com/w4k2/active-learning-data-streams](https://github.com/w4k2/active-learning-data-streams)
### Baselines
To perform a fair evaluation, we compare the proposed method to algorithms commonly used in the selective sampling literature:
* _random sampling_ - random selection of samples for the query
* _fixed uncertainty_ - selection of samples based on a static confidence threshold
* _variable uncertainty_ - a modification of fixed uncertainty that adjusts the confidence threshold based on the current size of the uncertainty region
* _classification margin_ - a method that computes the difference in confidence between the two classes with the biggest supports
* _vote entropy_ - queries are based on ensemble vote entropy
* _consensus entropy_ - samples are selected based on the highest average prediction entropy
* _max disagreement_ - computes the KL-divergence between the output class distribution and the consensus distribution
* _min margin_ - a method that selects samples based on the minimum classification margin over all models in the ensemble
In the case of methods that were created for the pool-based scenario, we adapt them by introducing the informativeness threshold.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline dataset name & size & \#class & \#attributes & IR \\ \hline adult [1] & 48842 & 2 & 14 & 3.1527 \\ bank marketing [1] & 45211 & 2 & 17 & 7.5475 \\ firewall [1] & 65478 & 3 & 12 & 2.9290 \\ chess [1] & 20902 & 15 & 40 & 22.919 \\ \hline nursery [1] & 12958 & 4 & 8 & 13.1707 \\ mushroom [1] & 8124 & 2 & 22 & 1.0746 \\ wine [1] & 4873 & 5 & 12 & 13.5082 \\ abalone [1] & 4098 & 11 & 8 & 21.5417 \\ \hline \end{tabular}
\end{table}
Table 1: Datasets used for the experiments. IR was computed as the ratio between the number of samples in the largest and the smallest class.
Figure 2: Dynamic imbalance of classes when applying self-labeling directly during selective sampling. We consider two settings: in the first, three classes with a balanced prior distribution, but with a single class that is easier to learn than the others (top); in the second, an imbalanced binary classification problem (bottom). The generated 2-D datasets are plotted on the left-hand side. When applying self-labeling directly (middle), we observe a change in the class distribution in the training set. This problem can be avoided when we apply dynamic balancing (right).
For each committee-based method, we use 9 base classifiers and employ bootstrapping during initial training. All methods were trained with a Multi-layer Perceptron classifier with two hidden layers of 100 neurons each.
### Hyperparameter tuning
In preliminary experiments, we found that the most important hyperparameter is the threshold used for the informativeness measure. For this reason, we focused on tuning this parameter. We use random search [1] to select the best threshold for each algorithm. The MLP classifier was trained with the Adam optimizer (learning rate equal to 0.001). We allow training for a maximum of 5000 iterations. A detailed description of the hyperparameter tuning process, with the range of values for each algorithm, is provided in appendix D.
### Goal of experiments
The overall goal of experiments is to perform a thorough investigation into the usefulness of self-labeling in a selective sampling setting. To provide a more precise description, we formulate the following research questions:
* Is there a benefit of combining active learning strategies with self-labeling?
* What is the performance of the proposed method for datasets with a high number of learning examples?
* What is the impact of the initial training size (the seed size) on the performance?
* How does the accuracy of the model trained with the seed impact the learning process of the proposed algorithm?
* Does the proposed algorithm allow for better utilization of the computational budget?
Each of these research questions will be addressed in the following parts of our work.
## 5 Experiments
In this section, we describe the results of an experimental evaluation in accordance with the research questions stated above.
### Experiments with smaller datasets
We compare the performance of the proposed method and baselines according to the experimental protocol described in previous sections. Here we utilize four datasets, namely nursery, mushroom, wine, and abalone. Results are presented on the left-hand side of Tab. 2.
Here we can see that our method rarely obtains the best score. However, the difference between the best-performing method and SL2S is often close to or below 0.02. The worst performance is obtained for the nursery dataset. This is probably due to the presence of three majority classes with a close number of samples and a single minority class with a substantially lower number of samples. For this reason, a lot of samples could be discarded by the prior filter. Other datasets are either well-balanced or contain a single majority class. For these types of datasets, we obtained better results.
When we compare the performance of other methods, we can notice that uncertainty-based methods perform comparably to the best algorithms only with a high budget. Classification margin is a strong baseline as indicated by literature [1]. When we compare ensemble-based methods, it turns out that min margin and consensus entropy are the best, with both methods obtaining close performance to the best algorithm.
### Experiments with bigger datasets
We also conduct experiments on larger datasets, i.e., adult, bank marketing, firewall, and chess. To save computation, we train only when a batch of 100 labeled samples is collected and reuse the hyperparameters found for the biggest datasets in the previous experiments. We also drop the vote entropy and max disagreement methods from our comparison due to poor performance in previous experiments compared to other ensemble-based methods. Results are presented on the right-hand side of Tab. 2.
Here our method performs well, obtaining either the best balanced accuracy or a value close to the best. There is no clear performance pattern when we compare results across varying budgets. Uncertainty-based methods provide the worst balanced accuracy in this case. Random sampling obtains the best performance for the firewall dataset, probably due to the simplicity of the classification problem in this dataset. Firewall has only three classes and a lower IR compared to the other datasets. Most methods perform well on this dataset, with many ties between different algorithms for first place.
### Impact of seed size
We evaluate the impact of the size of the initial training set on active learning performance. When utilizing labels generated with model predictions, the lower number of initial training samples may cause a higher error rate at the beginning of the experiment and the introduction of more noise into the dataset. For this reason, smaller seed sizes can impact the overall results. We reuse the hyperparameter values from previous experiments. All experiments are performed with a budget equal to 0.3. The results are provided in appendix E.
As expected, the initial training size has a lower impact on the random sampling algorithm. This method does not depend on model predictions; therefore, changing the seed should not impact the overall performance. In the case of uncertainty-based methods, there is no clear pattern of seed size impact. In some cases, training with these algorithms and a lower seed could provide better results. The ensemble-based methods improve balanced accuracy as the number of labeled samples grows. SL2S can, in some cases, obtain better performance with smaller seeds, and often we were able to obtain the best balanced accuracy with our method. This result indicates that SL2S does not depend heavily on the initial model performance and could be applied even if the number of labeled samples at the beginning is small.
### Ablation studies
We perform ablation studies for the proposed method. First, we remove the prior filter and allow training regardless of
the current dataset imbalance. This modification should further verify whether the dynamic imbalance is an issue when we use self-labeling in selective sampling. Secondly, we keep higher lambda values in equation 2 after the end of the budget. Decreasing lambda is the second mechanism introduced in our work that, in principle, should prevent the gradual degradation of model performance when using self-supervision as a source of new labels. Next, we remove the bootstrapped training to evaluate if ensemble diversification provides better performance in our experiments. Lastly, the self-labeling part of our approach was removed, and training was conducted with active learning alone. We use the wine dataset for evaluation. Experiments were performed with a 0.3 labeling budget and various seed sizes. The prediction threshold value was selected based on hyperparameter tuning results from previous experiments. We repeat experiments with three different random seeds and report average results in Tab. 3.
We find that the prior filter has a positive impact only in the case of smaller seed sizes. Conversely, reducing lambda after the budget ends provides gains in balanced accuracy for higher seed sizes. Removing self-labeling increases accuracy. This finding is expected, as in the preliminary experiments we found that naive application of self-labeling could make results worse. In this case, after removing the two mechanisms from our algorithm that prevent the negative impact of dynamic imbalance and classification errors, we can see that
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} \hline dataset & \multicolumn{8}{c}{unresory} & \multicolumn{8}{c}{adult} \\ \hline labeled & \multicolumn{8}{c}{0.318Β±0.030} & \multicolumn{8}{c}{0.741Β±0.010} \\ labeled ensemble & \multicolumn{8}{c}{0.276Β±0.013} \\ \hline budget & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline random & 0.371Β±0.015 & 0.350Β±0.017 & 0.352Β±0.012 & 0.298Β±0.017 & 0.282Β±0.012 & 0.735Β±0.007 & 0.733Β±0.005 & 0.729Β±0.007 & 0.731Β±0.005 & 0.732Β±0.006 \\ f. uncertainty & 0.389Β±0.018 & 0.393Β±0.016 & 0.385Β±0.007 & 0.391Β±0.016 & 0.394Β±0.019 & 0.754Β±0.007 & 0.758Β±0.008 & 0.765Β±0.009 & 0.760Β±0.011 & 0.760Β±0.011 \\ v. uncertainty & 0.378Β±0.012 & 0.359Β±0.012 & 0.372Β±0.014 & 0.307Β±0.014 & 0.286Β±0.018 & 0.756Β±0.013 & 0.751Β±0.011 & 0.755Β±0.012 & 0.758Β±0.012 & 0.746Β±0.011 \\ class. margin & 0.397Β±0.019 & **0.395Β±0.020** & **0.396Β±0.013** & 0.399Β±0.036 & 0.396Β±0.019 & 0.757Β±0.008 & 0.757Β±0.008 & 0.757Β±0.008 & 0.757Β±0.008 & 0.757Β±0.008 \\ vote entropy & 0.393Β±0.014 & 0.393Β±0.014 & 0.393Β±0.014 & 0.393Β±0.014 & 0.393Β±0.014 & 0.393Β±0.014 & - & - & - & - & - \\ consensus entropy & 0.394Β±0.013 & 0.394Β±0.013 & 0.393Β±0.014 & 0.393Β±0.014 & **0.404Β±0.017** & 0.764Β±0.004 & **0.767Β±0.005** & 0.765Β±0.002 & 0.765Β±0.004 & 0.764Β±0.003 \\ max disagreement & 0.402Β±0.019 & 0.393Β±0.014 & 0.393Β±0.014 & **0.404Β±0.016** & 0.343Β±0.021 & & & & & & \\ min margin & **0.405Β±0.019** & 0.375Β±0.012 & 0.388Β±0.021 & 0.385Β±0.018 & 0.400Β±0.010 & **0.768Β±0.004** & **0.767Β±0.005** & **0.768Β±0.004** & **0.768Β±0.004** & **0.768Β±0.004** \\ SLS & 0.384Β±0.018 & 0.350Β±0.014 & 0.338Β±0.013 & 0.292Β±0.020 & 0.294Β±0.016 & 0.762Β±0.003 & 0.763Β±0.004 & 0.763Β±0.004 & 0.763Β±0.003 & 0.762Β±0.003 & 0.762Β±0.004 \\ \hline dataset & \multicolumn{8}{c}{0.63Β±0Β±0.010} & \multicolumn{8}{c}{0.712Β±0.012} \\ labeled ensemble & \multicolumn{8}{c}{0.63Β±0Β±0.010} & \multicolumn{8}{c}{0.714Β±0.009} \\ \hline budget & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline random & **0.632Β±0.011** & **0.634Β±0.009** & 0.633Β±0.010 & **0.636Β±0.010** & 0.633Β±0.012 & 0.700Β±0.023 & 0.694Β±0.017 & 0.703Β±0.021 & 0.698Β±0.014 & 0.699Β±0.018 \\ f. uncertainty & 0.633Β±0.010 & 0.63Β±0.011 & 0.633Β±0.012 & 0.635Β±0.009 & 0.634Β±0.010 & 0.691Β±0.014 & 0.710Β±0.014 & 0.706Β±0.019 & 0.706Β±0.019 & 0.705Β±0.019 \\ v. uncertainty & 0.631Β±0.010 & 0.63Β±0.012 & 0.634Β±0.010 & 0.633Β±0.009 & 0.634Β±0.011 & 0.690Β±0.014 & 0.694Β±0.018 & 0.700Β±0.018 & 0.700Β±0.018 & 0.697Β±0.018 \\ class. 
margin & **0.632Β±0.012** & 0.633Β±0.012 & 0.633Β±0.013 & 0.618Β±0.023 & **0.635Β±0.010** & 0.682Β±0.012 & 0.682Β±0.012 & 0.682Β±0.012 & 0.682Β±0.012 & 0.682Β±0.012 & 0.682Β±0.012 \\ consensus entropy & 0.630Β±0.011 & 0.63Β±0.011 & **0.63Β±0.011** & 0.633Β±0.011 & 0.633Β±0.011 & 0.63Β±0.010 & & & & & \\ max disagreement & 0.631Β±0.010 & 0.63Β±0.012 & 0.634Β±0.010 & 0.633Β±0.011 & 0.63Β±0.010 & & & & & & \\ min margin & **0.632Β±0.011** & 0.63Β±0.010 & 0.63Β±0.010 & 0.630Β±0.011 & & & & & & & \\ min margin & **0.632Β±0.011** & 0.63Β±0.012 & 0.634Β±0.010 & 0.634Β±0.011 & 0.634Β±0.011 & & & & & & & \\ SLS & 0.631Β±0.011 & 0.63Β±0.012 & 0.632Β±0.012 & 0.633Β±0.010 & 0.634Β±0.010 & & & & & & & \\ \hline dataset & \multicolumn{8}{c}{firewall} \\ labeled & \multicolumn{8}{c}{0.52Β±0.027} & \multicolumn{8}{c}{0.997Β±0.001} \\ labeled ensemble & \multicolumn{8}{c}{0.514Β±0.015} & \multicolumn{8}{c}{0.998Β±0.000} \\ \hline budget & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline budget & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline random & 0.408Β±0.021 & 0.430Β±0.021 & 0.493Β±0.023 & 0.452Β±0.018 & 0.474Β±0.017 & 0.960Β±0.002 & **0.997Β±0.002** & **0.998Β±0.001** & **0.997Β±0.001** & **0.998Β±0.001** \\ f. uncertainty & 0.418Β±0.020 & 0.423Β±0.018 & 0.443Β±0.017 & 0.448Β±0.012 & 0.440Β±0.012 & 0.993Β±0.002 & 0.996Β±0.002 & 0.996Β±0.002 & 0.996Β±0.002 & 0.996Β±0.002 \\ v. uncertainty & 0.415Β±0.022 & 0.437Β±0.016 & 0.437Β±0.022
further removing self-labeling improves the results. This is in line with preliminary results and further proves that prior filter and lambda reduction are indeed necessary. Lastly, the removal of bootstrapped training has a bigger impact when training with a smaller seed size. We can intuitively explain this result by the fact that ensemble diversity should be smaller when utilizing bigger datasets, as more samples could cover feature space more densely, and randomly sampling datasets with bootstrapping would produce more similar datasets.
### Incorrect labels from self-labeling
As an extension of the ablation studies, we examine how many wrong labels are introduced when using SL2S with and without the prior filter. For this purpose, we train the MLP model on the wine dataset with a budget of 0.3 and various seed sizes. We plot the balanced accuracy with the corresponding fraction of samples with wrong labels in the training dataset over multiple iterations in Fig. 3. Including the prior filter drastically reduces the number of incorrect labels. This does not necessarily lead to an improvement in balanced accuracy. For a seed size equal to 500, both versions of the algorithm obtain very close final accuracy, while the difference in the fraction of incorrect labels is nearly 0.2. This phenomenon can be explained by two factors. The first is the fact that neural networks trained with gradient descent are known to be robust to noisy labels [11]. Another possible explanation is that, in some cases, incorrect labels could help to "smooth" the decision boundary. This can also explain why no difference in balanced accuracy was observed in part of our experiments. Nonetheless, more research is needed to better understand this phenomenon and its impact on self-labeling performance. Without the prior filter, models trained with smaller seed sizes accumulate erroneous labels faster in the initial phase of training. For the full SL2S, the fraction of wrong labels stays roughly the same across the whole training. We verify this further in appendix F and show that, indeed, the initial model performance has no great impact on the final accuracy of SL2S.
## 6 Lessons Learned
Based on the results provided in Tab. 2 we can claim that SL2S method works better for big datasets. This was expected, as for a larger stream, more samples can be accumulated with self-labeling. In the case of a smaller dataset, the performance is similar to other methods. The budget does not have a huge impact on the experiment results. Also, experiments with seed size confirm that our method could be applied for low data regimes.
As indicated by results with the nursery dataset and ablation results, a prior filter may not be the best method to address the imbalance issue in our datasets. This part of the algorithm was designed with synthetic data. In the case of real datasets, the prior class distribution has higher importance. For this reason, alternative methods should be developed for dealing with imbalance when applying self-labeling to active learning scenarios. Although we did not manage to mitigate the imbalance problem properly, solving this issue is important, and future work in this area should address this problem.
Results from Fig. 4 suggest that after the budget ends, the balanced accuracy roughly stays at the same level, and changes in the test accuracy do not occur frequently. With a higher number of model updates, the performance over time could fall drastically. For this reason, we introduced solutions that limit the use of self-labeling, preventing a fall in accuracy. However, this is sub-optimal, as in this case, we would ideally want accuracy to increase over time, despite the end of the budget. More work is needed to better address the dynamic imbalance issue or to provide a more accurate filter for wrong predictions. With these two problems solved we could give up the mechanisms that inhibit learning. It should allow for obtaining better performance, especially for bigger datasets.
## 7 Conclusions
We have proposed a new active learning method that combines simple ensemble-based sample selection and self-labeling for selective sampling. Experiments with multiple baselines show that our method offers comparable performance to other active learning algorithms for smaller datasets and better performance for bigger datasets. Further experiments also show that our method could work well when the initially labeled dataset is small or when initial model accuracy is poorly trained.
We also show that an important aspect of self-labeling is an imbalance, as bias towards a single class in model predictions could, over time, increase dataset imbalance. Another important factor is erroneous model predictions that introduce noise into the training dataset. Based on the preliminaries and ablations presented in this work, we claim that further work should focus on these two aspects to improve the overall self-labeling performance. We cannot eliminate all errors from model predictions, however, developing better methods for filtering noisy labels or models that are more robust to label noise should allow for better utilization of self-labeling.
Figure 3: Balanced accuracy (left) with a corresponding fraction of incorrect samples in the labeled dataset (right) over multiple iterations. We perform experiments for various seed sizes: 100 (top), 500 (middle), and 1000 (bottom).
## Acknowledgment
This work is supported by the CEUS-UNISONO programme, which has received funding from the National Science Centre, Poland under grant agreement No. 2020/02/Y/ST6/00037.
|
2302.12712 | Amortised Invariance Learning for Contrastive Self-Supervision | Contrastive self-supervised learning methods famously produce high quality
transferable representations by learning invariances to different data
augmentations. Invariances established during pre-training can be interpreted
as strong inductive biases. However these may or may not be helpful, depending
on if they match the invariance requirements of downstream tasks or not. This
has led to several attempts to learn task-specific invariances during
pre-training, however, these methods are highly compute intensive and tedious
to train. We introduce the notion of amortised invariance learning for
contrastive self supervision. In the pre-training stage, we parameterize the
feature extractor by differentiable invariance hyper-parameters that control
the invariances encoded by the representation. Then, for any downstream task,
both linear readout and task-specific invariance requirements can be
efficiently and effectively learned by gradient-descent. We evaluate the notion
of amortised invariances for contrastive learning over two different
modalities: vision and audio, on two widely-used contrastive learning methods
in vision: SimCLR and MoCo-v2 with popular architectures like ResNets and
Vision Transformers, and SimCLR with ResNet-18 for audio. We show that our
amortised features provide a reliable way to learn diverse downstream tasks
with different invariance requirements, while using a single feature and
avoiding task-specific pre-training. This provides an exciting perspective that
opens up new horizons in the field of general purpose representation learning. | Ruchika Chavhan, Henry Gouk, Jan Stuehmer, Calum Heggan, Mehrdad Yaghoobi, Timothy Hospedales | 2023-02-24T16:15:11Z | http://arxiv.org/abs/2302.12712v2 | # Amortised Invariance Learning for Contrastive Self-Supervision
###### Abstract
Contrastive self-supervised learning methods famously produce high quality transferable representations by learning invariances to different data augmentations. Invariances established during pre-training can be interpreted as strong inductive biases. However these may or may not be helpful, depending on if they match the invariance requirements of downstream tasks or not. This has led to several attempts to learn task-specific invariances during pre-training, however, these methods are highly compute intensive and tedious to train. We introduce the notion of amortised invariance learning for contrastive self supervision. In the pre-training stage, we parameterize the feature extractor by differentiable invariance hyper-parameters that control the invariances encoded by the representation. Then, for any downstream task, both linear readout and task-specific invariance requirements can be efficiently and effectively learned by gradient-descent. We evaluate the notion of amortised invariances for contrastive learning over two different modalities: vision and audio, on two widely-used contrastive learning methods in vision: SimCLR and MoCo-v2 with popular architectures like ResNets and Vision Transformers, and SimCLR with ResNet-18 for audio. We show that our amortised features provide a reliable way to learn diverse downstream tasks with different invariance requirements, while using a single feature and avoiding task-specific pre-training. This provides an exciting perspective that opens up new horizons in the field of general purpose representation learning.
## 1 Introduction
Self-supervised learning has emerged as a driving force in representation learning, as it eliminates the dependency on data annotation and enables scaling up to larger datasets that tend to produce better representations (Ericsson et al., 2022). Among the flavours of self-supervision, contrastive learning has been particularly successful in important application disciplines such as computer vision (Chen et al., 2020; Caron et al., 2020; Zbontar et al., 2021), medical AI (Azizi et al., 2021; Krishnan et al., 2022), and audio processing (Al-Tahan & Mohsenzadeh, 2021). The key common element of various contrastive learning methods is training representations that are _invariant_ to particular semantics-preserving input transformations (e.g., image blur, audio frequency masking) that are applied synthetically during training. Such invariances provide a strong inductive bias that can improve downstream learning speed, generalisation, and robustness (Geirhos et al., 2020).
A major vision motivating self-supervision research has been producing a general purpose representation that can be learned once, albeit at substantial cost, and then cost-effectively re-used for different tasks of interest. Rapidly advancing research (Chen et al., 2020; Caron et al., 2020; Zbontar et al., 2021), as summarized by various evaluation studies (Azizi et al., 2021; Ericsson et al., 2021), shows progress towards this goal. If successful, this could displace the 'end-to-end supervised learning for each task' principle that has dominated deep learning and alleviate its data annotation cost.
However, this vision is not straightforward to achieve. In reality, different tasks often require mutually incompatible invariances (inductive biases). For example, object recognition may benefit from rotation and blur invariance; but pose-estimation and blur-estimation tasks obviously prefer rotation
and blur equivariance respectively. Training a feature extractor with any given invariance will likely harm some task of interest, as quantified recently by Ericsson et al. (2022). This has led to work on learning task-specific invariances/augmentations for self-supervision with meta-gradients (Raghu et al., 2021) or BayesOpt (Wagner et al., 2022), which is extremely expensive and cumbersome; and on training feature ensembles using multiple backbones with different invariances (Xiao et al., 2021; Ericsson et al., 2022), which is also expensive and not scalable. In this paper we therefore raise the question: _How can we learn a single general-purpose representation that efficiently supports a set of downstream tasks with conflicting, and a-priori unknown, invariance requirements?_
To address these issues we explore the notion of _amortized invariance learning_ in contrastive self-supervision. We parameterise the contrastive learner's neural architecture by a set of differentiable invariance hyper-parameters, such that the feature extraction process is conditioned on a particular set of invariance requirements. During contrastive pre-training, sampled augmentations correspond to observed invariance hyper-parameters. By learning this architecture on a range of augmentations, we essentially learn a low-dimensional manifold of feature extractors that is parameterised by desired invariances. During downstream task learning, we freeze the feature extractor and learn a new readout head as well as the unknown invariance hyperparameters. Thus the invariance requirements of each downstream task are automatically detected in a way that is efficient and parameter light.
Our framework provides an interesting new approach to general purpose representation learning by supporting a range of invariances within a single feature extractor. We demonstrate this concept empirically for two different modalities, vision and audio, using SimCLR Chen et al. (2020) and MoCo Chen et al. (2020) as representative contrastive learners; and provide two instantiations of the amortized learning framework: a hypernetwork-based Ha et al. (2017) approach for ResNet CNNs, and a prompt learning approach for ViTs Dosovitskiy et al. (2021). We evaluate both classification and regression tasks in both the many-shot and few-shot regimes. Finally, we provide theoretical insights about why our amortised learning framework provides strong generalisation performance.
## 2 Related Work
**Invariance Learning** Invariances have been learned by MAP (Benton et al., 2020), marginal likelihood (Immer et al., 2022), BayesOpt (Wagner et al., 2022), and meta learning (Raghu et al., 2021)--where gradients from the validation set are backpropagated to update the invariances or augmentation choice. All these approaches are highly data and compute intensive due to the substantial effort required to train an invariance at each iteration of invariance learning. Our framework amortises the cost of invariance learning so that it is quick and easy to learn task-specific invariances downstream.
**Invariances in Self-Supervision** Self-supervised methods (Ericsson et al., 2022) often rely on contrastive augmentations (Chen et al., 2020). Their success has been attributed to engendering invariances (Ericsson et al., 2021; Wang and Isola, 2020; Purushwalkam and Gupta, 2020) through these augmentations, which in turn provide good inductive bias for downstream tasks. Self-supervision sometimes aspires to provide a single general purpose feature suited for all tasks in the guise of foundation models (Bommasani et al., 2021). However, studies have shown that different augmentations (invariances) are suited for different downstream tasks, with no single feature being optimal for all tasks (Ericsson et al., 2022) and performance suffering if inappropriate invariances are provided. This leads to the tedious need to produce and combine an ensemble of features (Xiao et al., 2021; Ericsson et al., 2022), to disentangle invariance and transformation prediction (Lee et al., 2021), or to costly task-specific self-supervised pre-training (Raghu et al., 2021; Wagner et al., 2022). Our framework breathes new life into the notion of self-supervised learning of general purpose representations by learning a parametric feature extractor that spans an easily accessible range of invariances, and provides easy support for explicit task-specific invariance estimation for downstream tasks.
**Self-Supervision in Audio and Beyond** The design of typical augmentations in computer vision benefits from a large collective body of wisdom (Chen et al., 2020) about suitable augmentations/invariances for common tasks of interest. Besides the task-dependence (e.g., recognition vs pose-estimation) of invariance already discussed, bringing self-supervision to new domains with less prior knowledge - such as audio - often requires expensive grid search to find a good augmentation suite to use (Al-Tahan and Mohsenzadeh, 2021; Wagner et al., 2022), where each step consists of self-supervised pre-training followed by downstream task evaluation. Our framework also benefits these
situations: we can simply pre-train once with a fairly unconstrained suite of augmentations, and then quickly search for those augmentations beneficial to downstream tasks in this modality.
## 3 Methodology
### Pre-training
**Features and Invariance Descriptors** We begin by denoting a large unlabeled dataset available for pre-training by \(\mathcal{D}^{t}=\{x_{i}^{t}\}_{i=1}^{n_{t}}\), where \(n_{t}\) is the number of samples available in the raw dataset. Contrastive self-supervision typically trains a feature extractor \(h(x)\) that bakes in invariance to a single pre-defined set of augmentations. We introduce the concept of an invariance descriptor \(i\), that denotes whether a parameterised feature extractor \(h(x;i)\) should be invariant or sensitive to \(K\) possible factors of variation. This is a vector \(i\in[0,1]^{K}\) over \(K\) possible transformations, where \(i_{k}=1\) and \(i_{k}=0\) indicate invariance and sensitivity to the \(k\)th factor respectively. We denote the set of binary invariance descriptors by \(\mathcal{I}\), where \(|\mathcal{I}|=2^{K}\). 1
Footnote 1: We exclude the case where all bits correspond to 0, implying that no augmentations are applied.
**Learning an invariance-parameterised feature extractor** Every unique invariance descriptor \(i\) can be paired with a corresponding combination of stochastic augmentations, which are denoted by \(\mathbb{A}_{i}\). To learn our invariance-parameterised encoder \(h_{w}(x;i)\) we extend the standard contrastive self-supervised learning paradigm. At each iteration of contrastive learning, we sample an invariance descriptor \(i\), which can thus be considered observed. We then use the corresponding augmentations to generate two views of the same example for invariance descriptor \(i\), denoted by \(\tilde{x}_{\mathbb{A}_{i}}\) and \(\tilde{x}_{\mathbb{A}_{i}}^{+}\). Similarly, a set of \(N^{-}\) negative samples denoted by \(X_{\mathbb{A}_{i}}=\{\tilde{x}_{\mathbb{A}_{i}}^{k}\}_{k=1}^{N^{-}}\) are also augmented using the same augmentations, i.e. invariance descriptor \(i\).
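To make this concrete, a minimal PyTorch/torchvision sketch of the sampling step is given below; it is only an illustration of the idea, not the authors' released code, and the registry `AUGMENTATIONS` and the helpers `sample_descriptor` and `two_views` are hypothetical names.

```python
import random
import torchvision.transforms as T

# Illustrative registry: one stochastic transform per factor of variation (K = 5 here).
AUGMENTATIONS = [
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(p=1.0),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=1.0),
    T.GaussianBlur(kernel_size=23),
]

def sample_descriptor(k=5):
    """Draw a binary invariance descriptor with at least two active bits."""
    while True:
        i = [random.randint(0, 1) for _ in range(k)]
        if sum(i) >= 2:   # mirrors the exclusion of descriptors selecting fewer than two augmentations
            return i

def two_views(x, descriptor):
    """Apply the augmentations selected by the descriptor to produce the two stochastic views."""
    active = [aug for bit, aug in zip(descriptor, AUGMENTATIONS) if bit == 1]
    pipeline = T.Compose(active + [T.ToTensor()])
    return pipeline(x), pipeline(x)   # x_tilde and x_tilde_plus share the same descriptor i
```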
As in all contrastive learning methods, a projection head \(g_{\phi}(\cdot)\) maps representations to the feature space in which the contrastive loss is applied. Following the convention introduced in MoCo-v2 (Chen et al., 2020), the two views \(\tilde{x}_{\mathbb{A}_{i}}\) and \(\tilde{x}_{\mathbb{A}_{i}}^{+}\) are taken as input for the query \(q_{\mathbb{A}_{i}}\) and positive key \(k_{\mathbb{A}_{i}}^{+}\) representations respectively. A set of encoded samples forms the keys of a dictionary denoted by \(\mathcal{K}_{\mathbb{A}_{i}}=\{k_{\mathbb{A}_{i}}^{+},\tilde{k}_{\mathbb{A}_{i }}^{1},\tilde{k}_{\mathbb{A}_{i}}^{2},\cdots\}\). Eq. 1 shows the forward propagation pipeline of the invariance encoder backbone to generate the query and the keys of \(\mathcal{K}_{\mathbb{A}_{i}}\).
\[q_{\mathbb{A}_{i}}=g_{\phi}(h_{w}(\tilde{x}_{\mathbb{A}_{i}};i))\qquad\quad k_ {\mathbb{A}_{i}}^{+}=g_{\phi}(h_{w}(\tilde{x}_{\mathbb{A}_{i}}^{+};i))\qquad \quad\tilde{k}_{\mathbb{A}_{i}}^{j}=g_{\phi}(h_{w}(\tilde{x}_{\mathbb{A}_{i}}^ {j};i)) \tag{1}\]
Both SimCLR and MoCo-v2 employ the contrastive InfoNCE loss Oord et al. (2018). In SimCLR variants, negative keys come from the same batch, while MoCo-based methods maintain negative keys in a queue. Finally, for a particular augmentation operation \(\mathbb{A}_{i}\), we formulate the InfoNCE loss as:
\[\mathcal{L}_{\text{contrastive}}(q_{\mathbb{A}_{i}},\mathcal{K}_{\mathbb{A} _{i}})=-\log\frac{\exp\left(q_{\mathbb{A}_{i}}\cdot k_{\mathbb{A}_{i}}^{+}/ \tau\right)}{\sum_{j=0}^{|\mathcal{K}_{\mathbb{A}_{i}}|}\exp\left(q_{\mathbb{ A}_{i}}\cdot k_{\mathbb{A}_{i}}^{j}/\tau\right)} \tag{2}\]
where \(\tau\) is a temperature hyper-parameter. In the pre-training stage, the invariance encoder and the projection head are trained using the contrastive loss governed by the contrastive method employed.
\[w^{\star},\phi^{\star}=\arg\min_{w,\phi}\frac{1}{|\mathcal{I}|}\sum_{i_{t}\in \mathcal{I}}\mathcal{L}_{\text{contrastive}}(q_{\mathbb{A}_{i}},\mathcal{K}_{ \mathbb{A}_{i}}) \tag{3}\]
In practice, we randomly sample an invariance descriptor from \(\mathcal{I}\) for each batch and \(w,\phi\) are learned for corresponding \(\mathcal{L}_{\text{contrastive}}(q_{\mathbb{A}_{i}},\mathcal{K}_{\mathbb{A}_{ i}})\). The invariance-parameterised encoder \(h^{\star}(x;i)\) is then transferred to a downstream task.
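The loss of Eq. 2 and one optimisation step of the objective in Eq. 3 could be rendered roughly as follows; this is a sketch under our own simplifying assumptions (the names `info_nce` and `pretrain_step` are illustrative, `sample_descriptor` is the helper sketched above, and keeping one negative queue per invariance anticipates the implementation detail given later in Section 4).

```python
import torch
import torch.nn.functional as F

def info_nce(q, k_pos, negatives, tau=0.2):
    """InfoNCE loss of Eq. 2; q and k_pos are (B, D), negatives is (N, D)."""
    q, k_pos = F.normalize(q, dim=-1), F.normalize(k_pos, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    l_pos = (q * k_pos).sum(dim=-1, keepdim=True) / tau   # (B, 1) positive logits
    l_neg = q @ negatives.t() / tau                        # (B, N) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)                 # -log softmax at the positive key

def pretrain_step(encoder, proj, optimizer, views_fn, batch, queues, k=5, tau=0.2):
    """One step of Eq. 3: sample a descriptor, build views with A_i, apply the i-conditioned encoder."""
    descriptor = sample_descriptor(k)
    i = torch.tensor(descriptor, dtype=torch.float32)
    v1, v2 = views_fn(batch, descriptor)                   # two views augmented with the pipeline A_i
    q = proj(encoder(v1, i))
    k_pos = proj(encoder(v2, i)).detach()
    negatives = queues[tuple(descriptor)]                  # one queue per encoded invariance
    loss = info_nce(q, k_pos, negatives, tau)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```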
### Downstream task learning
We next consider a set of downstream tasks \(\mathcal{T}_{\text{target}}\) that may have different, opposing and a priori unknown invariance requirements. We denote the training data available for a downstream task \(t\in\mathcal{T}_{\text{target}}\) as \(\mathcal{D}^{t}=\{x_{i}^{t},y_{i}^{t}\}_{i=1}^{n_{t}}\). In the downstream training stage, we employ the parametric encoder \(h^{\star}(\cdot;\cdot)\) learned during pre-training to encode data as \(h^{\star}(x;\cdot)\). For each downstream
task \(t\), we follow the linear evaluation protocol by learning a prediction head \(\Phi_{t}\), but extend it by also learning the corresponding task-wise invariance vector \(i_{t}\). Thus the predicted output given an invariance hyper-parameter \(i_{t}\) is \(\tilde{y}^{t}=\langle\Phi_{t},h^{\star}(x;i_{t})\rangle\). For each task \(t\), we find the optimal invariance hyper-parameters \(i_{t}\) and prediction heads \(\Phi_{t}\) by minimizing the task-specific loss on the training set,
\[i_{t}^{\star},\Phi_{t}^{\star}=\arg\min_{i_{t},\Phi_{t}}\frac{1}{n_{t}}\sum_{j= 1}^{n_{t}}\mathcal{L}(\langle\Phi_{t},h^{\star}(x_{j}^{t};i_{t})\rangle,y_{j}^{t}). \tag{4}\]
**Quantised Invariance Learning** We remark that the invariance parameters learned for downstream tasks are continuous vectors \(i\in[0,1]^{K}\) in our model. In the preceding pre-training phase, all observed occurrences of \(i\) are discrete, \(i\in\{0,1\}^{K}\). However, during downstream learning continuous values are learned, which can represent a continuous degree of invariance. Nevertheless, we will show later that there are learning-theoretic benefits to modeling \(i\) as a member of a discrete set. To exploit this, while retaining the ease of continuous optimisation for \(i\) in downstream task learning, we can simply quantize to a desired number of bits, \(\tilde{i}^{\star}=Q(i^{\star};b)\), where \(Q(\cdot;b)\) is the quantization operator that quantises each element of \(i\) into a \(b\)-bit representation.
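A possible rendering of this downstream stage is sketched below; the encoder interface `h_star(x, i)` and the function `adapt_downstream` are illustrative names, the encoder parameters are assumed frozen, and the sigmoid parameterisation of \(i\) is one convenient way of keeping it inside \([0,1]^{K}\) rather than the paper's prescribed choice.

```python
import torch
import torch.nn as nn

def adapt_downstream(h_star, loader, feat_dim, num_classes, k=5, epochs=10, b=1, lr=1e-3):
    """Learn a linear readout and the task-wise invariance vector on a frozen encoder (Eq. 4),
    then quantise the learned invariances to b bits, as in Q(i*, b)."""
    head = nn.Linear(feat_dim, num_classes)
    logit_i = torch.zeros(k, requires_grad=True)            # pre-sigmoid invariance hyper-parameters
    opt = torch.optim.Adam(list(head.parameters()) + [logit_i], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            i = torch.sigmoid(logit_i)                      # keep i inside [0, 1]^K
            feats = h_star(x, i)                            # encoder frozen; gradients only reach i and the head
            loss = loss_fn(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()
    i_star = torch.sigmoid(logit_i).detach()
    levels = 2 ** b - 1
    return head, torch.round(i_star * levels) / levels      # b-bit quantisation of the learned invariances
```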
### Architectures
We next describe architectures \(h(\cdot;i)\) capable of supporting invariance-paramaterised feature encoding for ResNet CNNs and ViT transformers.
**Hyper-ResNets:** To incorporate differentiable invariances, we parameterise the ResNet50 backbone in the form of a hypernetwork Ha et al. (2017), conditioned on an invariance descriptor. Previous work on generating ResNet-50 parameters using hypernetworks Mittal (2018) is tailored for supervised learning on small-scale datasets like CIFAR10. This architecture relies on multiple forward passes through the hypernetwork architecture to generate a single convolutional kernel, which leads to prohibitively slow pre-training with contrastive learning on large datasets like ImageNet. Thus, we develop a different hypernetwork architecture that can generate the weights of a full ResNet50 with a single forward pass of the hypernetwork. This is easier to optimise and faster to train for contrastive learning. Details about the architectures are provided in the supplementary material A.1.1.
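As a rough illustration of the single-forward-pass idea (a simplified toy of our own, not the actual Hyper-ResNet50 architecture), a hypernetwork can emit every kernel of a small convolutional stack from the invariance descriptor in one pass:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvHyperNet(nn.Module):
    """Generates all kernels of a small conv stack from an invariance descriptor in one forward pass."""
    def __init__(self, k, shapes):                          # shapes: list of (out_c, in_c, kh, kw)
        super().__init__()
        self.shapes = shapes
        total = sum(o * c * kh * kw for o, c, kh, kw in shapes)
        self.mlp = nn.Sequential(nn.Linear(k, 256), nn.ReLU(), nn.Linear(256, total))

    def forward(self, x, i):
        flat = self.mlp(i)                                  # a single pass emits every kernel
        offset = 0
        for o, c, kh, kw in self.shapes:
            n = o * c * kh * kw
            w = flat[offset:offset + n].view(o, c, kh, kw)
            offset += n
            x = F.relu(F.conv2d(x, w, padding=kh // 2))
        return x.mean(dim=(2, 3))                           # globally average-pooled features
```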
**Prompt-ViTs:** It is well known that ViTs are difficult to train and extremely hyperparameter sensitive, especially for contrastive learning, as discussed in Chen et al. (2021). While we were able to successfully learn invariance-parameterised ViTs with hypernetworks analogous to those described for ResNet above, these were even harder to train. We therefore developed an alternative approach based on prompt learning that was easier to train. Specifically, our invariance vectors are embedded by a two-layer MLP network denoted by \(l_{\text{prompt}}(\cdot)\) and then appended after the ViT input tokens from the corresponding task. Therefore, features from an image \(x\) are extracted with desired invariance \(i\) as \(h(x;i)=\text{ViT}([\texttt{CLS},E(x),l_{\text{prompt}}(i)])\), where \(E(x)\) denotes the image tokens with added position embedding. Thus invariance preferences are treated the same as image and class tokens. The invariance prompt guides the feature encoding of the ViT as it is passed through all the attention and MLP layers together with the image tokens. Further details are given in the supplementary material A.1.2.
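In code, the prompt construction amounts to embedding \(i\) with a small MLP and concatenating the result with the token sequence; the sketch below is schematic (the class name and layer sizes are our own choices) and omits the ViT blocks themselves.

```python
import torch
import torch.nn as nn

class InvariancePrompt(nn.Module):
    """Embeds an invariance descriptor and appends it to the ViT token sequence."""
    def __init__(self, k, dim):
        super().__init__()
        self.l_prompt = nn.Sequential(nn.Linear(k, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens, i):
        # tokens: (B, 1 + N, dim) = [CLS, patch embeddings]; i: (K,) invariance descriptor
        prompt = self.l_prompt(i).expand(tokens.size(0), 1, -1)   # one prompt token per sample
        return torch.cat([tokens, prompt], dim=1)                 # appended after the input tokens
```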
## 4 Experiments: Computer Vision Tasks
We evaluate our proposed framework on two widely-used contrastive learning methods, SimCLR and MoCo-v2, with ResNet and ViT architectures.
### Augmentation Groups
Most contrastive learning studies use a suite of \(K\) augmentations consisting of standard data augmentation strategies like random resized cropping, horizontal flipping, color jitter, Gaussian blurring, etc. We consider two cases: treating these as \(K\) independent invariances, and grouping several augmentations into a single invariance.
**Grouping Augmentations** For simple analysis and ease of comparison to prior work, we conduct experiments by grouping the augmentations into \(K=2\) groups as suggested by Ericsson et al. (2022). The default set of augmentations has been divided into two groups called _Appearance_ and
_Spatial_ augmentations. Spatial augmentations (crop, flip, scale, shear, rotate, transform) are those that mainly transform the image spatially, while Appearance-based augmentations (greyscale, brightness, contrast, saturation, hue, blur, sharpness) are those that mainly act on the pixels of the image, augmenting its appearance. Thus we amortise learning Appearance, Spatial, and default (combination of Appearance+Spatial) augmentations in a single feature extractor. During training, we assign invariance hyperparameters as 2-way binary vectors \(i=[1,1]\), \(i=[1,0]\) and \(i=[0,1]\) for default, Spatial and Appearance based augmentations respectively.
**Individual Augmentations** In this condition, we perform amortised learning among the five default augmentations to learn invariance to all possible combinations of these augmentations. Every combination of augmentations is specified by a \(K=5\)-way binary vector indicating which augmentations are switched on. Since SimCLR Chen et al. (2020) draws two augmentations out of the entire set, we exclude the invariance descriptors that indicate that fewer than two augmentations have been applied. Thus, _26 unique invariances are encoded into a single backbone_.
### Implementation Details
**Pre-training Datasets:** We perform self-supervised pre-training for ViT-B and ResNet50 on the 1.28M ImageNet training set Deng et al. (2009) and ImageNet-100 (a 100-category subset of ImageNet) following Chen et al. (2021) and Xiao et al. (2021) respectively. Both the models are pre-trained for 300 epochs with a batch size of 1024.
**Learning rates and Optimisers:** We find that the optimal learning rate and weight decay obtained by Chen et al. (2020) work well for Hyper-ResNets and Prompt-ViTs, in both SimCLR and MoCO-v2/v3 experiments. We follow the optimization protocol in Chen et al. (2021) and use the AdamW optimiser along with learning rate warm-up for 40 epochs, followed by a cosine decay schedule.
**MLP heads:** Following Chen et al. (2020, 2021), we use a 3-layer projection head and a 2-layer prediction head for both ViT-B and ResNet50. The hidden and output layers of all projection and prediction MLPs of both architectures are 4096-d and 256-d, respectively.
**Loss:** Following the protocol in Chen et al. (2021) for training ViT-B models under the MoCo-v3 framework, we abandon the memory queue and optimize Prompt-ViT models with the symmetrised contrastive loss (Caron et al., 2020; Grill et al., 2020). However, we observe that the symmetrised contrastive loss and discarding queues are not effective for training Hyper-ResNet50 models. Therefore, for ResNet models we stick to the MoCo-v2 framework, where a memory queue is used. This leads to a fair comparison between the different baselines for both architectures. Additionally, we maintain a separate queue for each type of invariance encoded in the Hyper-ResNet50 model so that augmented keys corresponding to the same invariance are used for the contrastive loss.
**Downstream Evaluation:** In our framework, evaluation on downstream tasks consists of supervised learning of the task-specific invariance hyperparameters and a linear classifier using backpropagation. We use the Adam optimiser, with a batch size of 256, and sweep learning rate and weight decay parameters for each downstream dataset based on its validation set.
**Downstream tasks:** Our suite of downstream tasks consists of object recognition on standard benchmarks CIFAR10/100 (Krizhevsky et al., 2009), Caltech101 (Fei-Fei et al., 2004), Flowers (Nilsback & Zisserman, 2008), Pets (Parkhi et al., 2012), DTD (Cimpoi et al., 2014), CUB200 (Wah et al., 2011), as well as a set of spatially sensitive tasks including facial landmark detection on 300W SAG (2016), and CelebA Liu et al. (2015), and pose estimation on Leeds Sports Pose Johnson & Everingham (2010). More details can be found in A.2.
**Few shot downstream evaluation** We also evaluate the pre-trained networks on various few-shot learning benchmarks: FC100 (Oreshkin et al., 2018), Caltech-UCSD Birds (CUB200), and Plant Disease Mohanty et al. (2016). We also show results for few-shot regression problems on 300w, Leeds Sports Pose and CelebA datasets. More details can be found in A.2.
**Competitors** We compare our amortised representation learning framework with Hyper-ResNet and Prompt-ViT backbones (denoted AI-SimCLR, etc.) with default SimCLR and MoCo alternatives. We also compare with SimCLR and MoCo variants that we re-trained to specialise in the Appearance and Spatial augmentation groups (denoted A- and S-), and with two state-of-the-art ensemble-based alternatives, LooC (Xiao et al., 2021) and AugSelf (Lee et al., 2021), using the same pretraining setting as ours.
#### 4.2.1 Results
**Can we successfully amortise invariance learning?** We investigate this question for the Appearance and Spatial invariance groups explained in Section 4.1. Specifically, we follow Ericsson et al. (2022) in measuring invariance as the cosine similarity, in a normalised feature space, between the features of input images and those of their augmented counterparts. A high cosine similarity indicates high invariance, and vice-versa. Using this measure, we compare the invariances learned by (1) a default MoCo model, and our single amortised model fed with (2) the Appearance (\(i=[0,1]\)) hyperparameter, and (3) the Spatial (\(i=[1,0]\)) hyperparameter. We also compare two MoCo models re-trained from scratch to specialize on the Appearance and Spatial invariance groups, following Ericsson et al. (2022). Full details are given in Tables 6 and 7 in the appendix, and a summary of the first three comparisons in Figure 1 (left). From the figure we can see that: (i) our amortised invariance learner can access invariances comparably strong to those of the default model where desired (the convex hull of both colored regions is similar to the dashed region). More importantly, (ii) while the default model is fixed and cannot change any (in)variances without retraining, our amortised invariance learner can _dial down_ invariances on command. For example, the Appearance model increases sensitivity to flipping. We will show that this is a useful capability for a general purpose representation when downstream tasks contain both pose estimation and classification, which require conflicting spatial invariances.
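The measurement protocol itself can be sketched in a few lines (our own minimal rendering of the procedure of Ericsson et al. (2022); `invariance_score` is an illustrative name and `encoder` and `augment` are assumed callables):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def invariance_score(encoder, images, augment, i):
    """Mean cosine similarity between features of clean and augmented images under descriptor i."""
    f_clean = F.normalize(encoder(images, i), dim=-1)
    f_aug = F.normalize(encoder(augment(images), i), dim=-1)
    return (f_clean * f_aug).sum(dim=-1).mean().item()   # close to 1 means strongly invariant
```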
**Can We Continuously Scale Invariance Strength?** Recall that during training, our model observed three invariance hyperparameter vectors \(\{[0,1],[1,0],[1,1]\}\). However, once trained we can easily interpolate along the 2d-manifold of Appearance and Spatial group invariances. Figure 1(right) illustrates this for amortised ResNet50 and grayscale invariance within the Appearance group. We can see that interpolating between Spatial \([1,0]\) and Appearance \([0,1]\) parameter vectors leads to a corresponding smooth increase in grayscale invariance. We expect that if a downstream task benefits from grayscale invariance it will essentially perform gradient ascent on this surface to reach the Appearance \([0,1]\) corner. We show similar plots for Spatial amortised ResNet50 and amortised variants of ViT in the appendix. (Figure 4)
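A small grid evaluation of this kind, reusing the `invariance_score` sketch above, might look as follows (again purely illustrative; the bit ordering [Spatial, Appearance] follows the assignment of Section 4.1):

```python
import torch

def invariance_surface(encoder, images, augment, steps=5):
    """Evaluate a chosen invariance on a grid interpolating between Spatial [1,0] and Appearance [0,1]."""
    surface = {}
    for s in torch.linspace(0.0, 1.0, steps):
        for a in torch.linspace(0.0, 1.0, steps):
            i = torch.stack([s, a])                        # [Spatial strength, Appearance strength]
            surface[(round(s.item(), 2), round(a.item(), 2))] = invariance_score(encoder, images, augment, i)
    return surface
```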
**Does amortised invariance learning benefit downstream tasks?** We next evaluate a suite of downstream learning tasks. We compare (1) default SimCLR and MoCo models, (2) variants re-trained to specialise in Appearance and Spatial invariances (Ericsson et al., 2022) (See C), (3) For MoCo CNNs, we compare LooC (Xiao et al., 2021) and AugSelf (Lee et al., 2021), which are ensemble based approaches to supporting multiple invariances, (4) Fine-tuning (FT) and fine-tuning with time constrained to match our method (FT-*), and (5) Our amortised framework including linear readout and invariance learning for each downstream task (denoted AI-SimCLR/MoCo etc).
Figure 1: Left: Radar plots comparing strengths of 5 Appearance (left) and 5 Spatial invariances (right) for default ResNet50-MoCov2 and ViT-MoCov3 (green dots) vs our corresponding amortised models. By varying a _runtime_ invariance parameter, a single feature extractor can provide one or other group of invariances on demand. Right: While training was performed on discrete invariance vectors (corners), we can interpolate smoothly between different invariances (vertical axis) by continuously varying the invariance parameter (horizontal axes).
From the results in Table 1, we can see that (i) our hypernetwork approaches for ResNet50 achieve comparable or better performance than the baselines. This is especially the case for the regression datasets, where the results are often dramatically better than the baselines, e.g., \(+21\%\) for the hypernet vs the baseline on MoCo-based CelebA, and \(+12.7\%\) for the hypernet vs AugSelf on MoCo-based 300w.
To understand how these strong results are achieved, we report the Appearance and Spatial invariances estimated by our framework in the case of SimCLR for each downstream task in Figure 2(right). The results show that while the invariance strengths learned vary continuously across tasks, there is a systematic difference: Classification tasks (right seven) prefer stronger invariances overall, and a greater Spatial- than Appearance invariance. Meanwhile regression tasks (left three) prefer more moderate invariances with similar strength or a tendency towards Appearance invariance. Invariance parameters learned for AI-MoCo models are shown in Table 10 in the appendix.
**Can we scale to learning more invariances?** We next repeat the above experiments for the case of SimCLR, using five invariances (Sec 4.1) rather than the two considered before. The results in Table 1 for AI-SimCLR(2) vs AI-SimCLR(5) show that indeed using more distinct invariances is possible, and often improves performance compared to the case of two invariances.
**Does amortised invariance learning benefit few-shot learning tasks?** To answer this question we focused on MoCo-v2 CNN models trained on ImageNet100. For classification tasks, we followed Lee et al. (2021); Xiao et al. (2021) in sampling \(C\)-way \(K\)-shot episodes from the target problem and training linear readouts (and invariances, in the case of our method) for each episode. For regression tasks we repeatedly sampled 5% and 20% subsets to generate low-shot training sets. From the results in Table 2, we can see that our AI-MoCo framework usually performs better than all competitors, with substantial margins in several cases, for example a \(12\%\) improvement over default MoCo on the 5-way/5-shot Flowers classification dataset, and a \(10\%\)\(R^{2}\) improvement over AugSelf on CelebA 20% low-shot regression. Invariance hyper-parameters for few-shot classification and regression tasks are shown in Table 15 in the appendix. Results for few-shot learning on audio are given in the Supplementary.
\begin{table}
\begin{tabular}{c c|c c c c c c c c c c c c} \hline \hline
 & Methods & 300w & LS Pose & CelebA & CIFAR10 & CIFAR100 & Flowers & Caltech & IOT & DTD & Fels & CUB & Avg. & Rank \\ \hline
 & MoCo & 85.5 & 58.7 & 61.0 & 84.6 & 61.6 & 82.4 & 77.3 & 64.5 & 70.1 & 32.2 & 67.8 & 3.8 \\
 & MoCo - FT & 87.7 & 61.6 & 72.1 & 85.1 & **65.1** & 80.4 & 78.0 & **69.5** & **75.8** & 40.1 & 71.5 & 2.2 \\
 & S-MoCo & 74.7 & 46.1 & 49.0 & 68.8 & 41.7 & 51.6 & 53.8 & 62.5 & 56.3 & 31.3 & 53.4 & 5.5 \\
 & A-MoCo & 78.5 & 49.2 & 62.0 & 75.0 & 37.4 & 18.8 & 43.8 & 56.3 & 39.6 & 17.6 & 47.8 & 5.5 \\
 & AugSelf\({}^{*}\) & 77.3 & 63.9 & 77.0 & **88.3** & 63.9 & **85.7** & **78.9** & 66.2 & 73.5 & 37.0 & 70.9 & 2.5 \\
 & LooC\({}^{*}\) & - & - & - & - & - & - & - & - & - & - & 39.6 & - & - \\ \hline
 & AI-MoCo & **90.0** & **65.2** & **82.0** & 81.3 & 64.6 & 81.3 & 78.4 & 68.8 & 41.0 & **41.4** & **72.7** & **1.9** \\
 & SimCLR & 53.3 & 54.5 & 61.0 & 81.8 & 61.4 & 66.6 & 71.9 & 51.6 & 67.9 & 37.9 & 60.7 & 4.0 \\
 & SimCLR & 87.5 & 58.1 & 64.0 & **83.7** & **63.4** & **68.1** & **73.4** & **53.8** & **70.4** & **39.8** & 66.2 & 2.0 \\
 & S-SimCLR & 28.4 & **84.4** & 54.9 & 75.0 & 60.7 & 61.8 & 52.8 & 43.8 & 42.9 & 31.3 & 50.0 & 5.3 \\
 & A-SimCLR & 67.6 & 58.2 & 72.5 & 61.5 & 40.0 & 50.0 & 43.7 & 25.0 & 29.8 & 18.8 & 46.7 & 5.2 \\
 & AI-SimCLR (2) & **87.1** & **65.5** & **75.3** & 83.0 & 62.5 & 67.9 & 70.8 & 52.8 & 68.6 & 37.5 & **67.1** & 3.0 \\
 & AI-SimCLR (5) & **88.0** & **65.0** & **77.2** & **83.9** & **63.1** & **68.3** & **74.2** & **53.7** & **69.5** & **38.6** & **68.1** & **1.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Downstream performance of MoCo (top) and SimCLR (below) based ResNet50 models pre-trained on ImageNet-100. The first three results columns are regression tasks (\(R^{2}\), %); the last six are classification (accuracy, %). \({}^{*}\) numbers taken from Xiao et al. (2021). \({}^{+}\) numbers for classification taken from Lee et al. (2021), rest are our runs. The invariances learned by our amortised models AI-SimCLR(2), AI-SimCLR(5) for each downstream task are indicated in Figure 2 (right and left respectively).
Figure 2: Learned invariance vectors for AI-SimCLR based on ResNet50 models for the suite of downstream tasks when using 2-invariance (right) and 5-invariance (left) condition.
**Does amortised invariance learning benefit downstream tasks when using transformer architectures?** The experiments described so far have used ResNet50 CNNs. To investigate the impact of amortised invariance learning on transformers, we apply our framework to an ImageNet-1k pretrained MoCo-v3 ViT-B, and repeat the many-shot and few-shot learning experiments above with this architecture. From the results in Table 3, and the few-shot results in Table 5, we can see that the outcomes are broadly consistent with those of the CNNs. Our AI-ViT performs comparably to or better than conventional MoCo-v3 ViTs. In many cases it performs substantially better, e.g., \(18\%\) and \(19\%\) improvements in \(R^{2}\) for 5% low-shot regression on 300w and LSP respectively. Invariance prompts learned for AI-ViT models are shown in Table 10 in the appendix.
**How do amortised invariances compare to Fine-Tuning?** Our framework provides a new operating point between linear readout and fine-tuning in terms of expressivity and efficiency for adapting to a downstream task. To explore this point, we compare our framework with fine-tuning variants that update 1, 2, 3 or all 4 ResNet blocks in Figure 3. The figure also shows the Pareto front of accuracy vs parameter/clock-time cost. AI-MoCo with ResNet50 requires updating essentially as few parameters as linear readout, and far fewer than FT; and it provides greater accuracy in less time than FT. In both cases AI-MoCo dominates a section of the Pareto front.
## 5 Theoretical Analysis
We finally provide some theoretical analysis to give insight into the value and empirical efficacy of our approach when applied to novel downstream tasks not seen during pre-training. Specifically, for downstream tasks, our amortised invariance framework admits the generalisation bound below.
**Theorem 5.1**.: _For a 1-Lipschitz loss function \(\mathcal{L}\) taking values in \([0,M]\), if for all \(\phi\) we have that \(\|\phi\|\leq B\) and \(\|f_{\phi}(x)\|\leq X\), then the following holds with probability at least \(1-\delta\)_
\[\mathbb{E}_{x^{t},y^{t}}[\mathcal{L}(\hat{y}^{t},y^{t})]\leq\frac{1}{n_{t}} \sum_{j=1}^{n_{t}}\mathcal{L}(\hat{y}^{t}_{j},y^{t}_{j})+\frac{2\sqrt{2c}XB}{ \sqrt{n_{t}}}+3M\sqrt{\frac{\ln(|I|/\delta)}{2n_{t}}},\]
_where \(I=\{0,1\}^{d}\) is the space of possible invariance hyperparameters and \(c\) is the number of classes._
The proof is in the appendix along with corresponding existing theorems for (i) simple linear readout and (ii) fine-tuning alternatives. Comparing Theorem 5.1 with the alternatives shows that the overfitting behaviour (generalisation error) of our approach when applied to novel tasks scales similarly to conventional linear models (i.e., with \(\frac{2XB}{\sqrt{n}}\)). However due to the parameterised feature extractor
\begin{table}
\begin{tabular}{c|c c c c|c c c c c c c c c} \hline Methods & \multicolumn{2}{c|}{CUB} & \multicolumn{2}{c|}{Flowances} & \multicolumn{2}{c|}{FC100} & \multicolumn{2}{c|}{Plant Disease} & \multicolumn{2}{c|}{300w} & \multicolumn{2}{c|}{LS Pose} & \multicolumn{2}{c|}{CelebA} & Rank \\ & (5.1) & (5.5) & (5.1) & (5.5) & (5.1) & (5.5) & (5.1) & (5.5) & \(s.05\) & \(s.05\) & \(s.02\) & \(s.05\) & \(s.05\) & \(s.02\) & \\ \hline MoCo & 41.0 & 56.9 & 66.6 & 78.4 & 31.7 & 43.9 & 65.7 & 85.0 & 39.0 & 50.1 & 54.2 & 60.3 & 40.2 & 52.3 & 2.2 \\ MoCo-FT & 37.8 & 52.5 & 66.6 & 73.5 & 29.4 & 40.8 & 61.0 & 80.3 & 38.2 & 45.0 & 49.0 & 56.0 & 37.2 & 68.2 & 4.3 \\ S-MoCo & 36.0 & 45.6 & 69.6 & 20.0 & 26.0 & 56.6 & 64.6 & 76.3 & 12.1 & 20.7 & 38.9 & 43.7 & 30.2 & 44.2 & 5.3 \\ A-MoCo & 34.1 & 34.7 & 24.4 & 36.4 & 21.0 & 21.2 & 21.6 & 36.6 & 36.5 & 40.8 & 43.2 & 48.2 & 43.8 & 5.3 \\ AugSelf & 44.2 & 57.4 & 76.0 & 88.6 & 35.0 & **48.8** & 71.8 & 87.8 & 42.0 & 51.8 & 53.8 & 60.1 & 53.2 & 66.3 & 2.1 \\ Lod & - & - & 70.9 & 80.8 & - & - & - & - & - & - & - & - & - & - & - & - \\ \hline AI-MoCo & **45.0** & **58.0** & **58.0** & **76.7** & **37.4** & **83.4** & **72.6** & **89.1** & **49.2** & **87.9** & **58.3** & **62.0** & **36.0** & **76.0** & **1.1** \\ \hline \end{tabular}
\end{table}
Table 2: Few-shot classification and regression accuracy (%, \(R^{2}\)) of our AI-MoCo based on ResNet50 models pretrained on ImageNet-100. Values are reported with 95% confidence intervals averaged over 2000 episodes on FC100, CUB200, and Plant Disease. (N, K) denotes N-way K-shot tasks. For regression tasks (300w, LS Pose, CelebA), we report downstream performance for different splits with train proportion given by \(s\). More details are given in Tab. 11 and Tab. 12.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c} \hline Methods & 300w & LS Pose & CelebA & CIFAR10 & CIFAR100 & Flowers & Caltech 101 & DTD & Pets & CUB & Avg. & Rank \\ \hline MoCo-v3 & 81.6 & 59.1 & 78.0 & 94.8 & 63.4 & **87.7** & 83.5 & 61.1 & 79.4 & 26.5 & 71.5 & 2.7 \\ MoCo-v3 - FT & 85.7 & 64.7 & 82.0 & **95.5** & **68.8** & 86.8 & **84.3** & **62.7** & **82.0** & **87.4** & **1.6** \\ S-MoCo-v3 & 50.9 & 48.1 & 72.0 & 74.3 & 62.1 & 78.9 & 72.1 & 59.3 & 73.0 & 21.4 & 61.2 & 4.6 \\ A-MoCo-v3 & 78.9 & 60.6 & 81.0 & 79.2 & 59.2 & 70.3 & 70.5 & 53.6 & 79.3 & 24.7 & 65.7 & 4.2 \\ \hline AI-MoCo-v3 & **89.0** & **67.0** & **84.0** & 93.8 & 63.7 & 87.5 & 81.3 & 60.4 & 81.5 & **28.2** & 73.7 & 1.9 \\ \hline \end{tabular}
\end{table}
Table 3: Downstream performance of AI-MoCo-v3 based on ViTs pretrained on ImageNet-1k for many-shot classification (accuracy, %) and regression (\(R^{2}\), %).
we have a potential to obtain a much better fit on the training data (reduced first term on RHS) at a comparatively limited cost to complexity (third term on RHS). In contrast, improving train data fit by end-to-end fine-tuning of a deep network to adapt to task-specific invariance requirements induces a third complexity term that depends exponentially on the depth of the network due to the product of norms of the weights in each layer (Bartlett et al., 2017; Golowich et al., 2018; Long and Sedghi, 2020). This analysis requires working with a discrete space of invariances, while most of our experiments have worked with continuous invariance learning for simplicity. As mentioned in Section 3.2, we can trivially discretize our estimated invariances, and we show that invariance discretization does not substantially affect our results in Table 9.
Bringing all this together, we illustrate the significance by instantiating all the terms in Theorem 5.1 to compute the guaranteed worst-case generalisation error for our model and the alternatives in Table 4. The FT baseline has a vacuous bound that only guarantees an error rate \(\gg 1\). The linear model can provide an error rate guarantee, while our AI-MoCo provides a stronger guaranteed error rate thanks to the trade-off discussed above. Altogether this analysis shows that our framework provides an exciting new operating point in the bias-variance trade-off compared to the established paradigms of linear classifiers and end-to-end deep learning. See also Sec. F for further details and discussion.
## 6 Conclusion
We have introduced the concept of amortised invariance learning, and shown that a manifold spanning multiple invariances (up to \(K=7\) dimensional in our experiments) can be pre-learned by a single feature extractor of either CNN or ViT type. Our amortised extractor provides an effective general purpose representation that can be transferred to support diverse downstream tasks that cannot all be supported by a single conventional contrastively trained representation. With our parametric representation, each downstream task can rapidly select an appropriate invariance in an efficient and parameter-light way. This leads to strong improvements across a range of classification and regression tasks in the few- and many-shot regimes. Amortised invariances provide an exciting new direction of study for general purpose features suited for diverse downstream tasks.
\begin{table}
\begin{tabular}{c|c} \hline Method & Guaranteed \\ & Error (\(\downarrow\)) \\ \hline LR-MoCo & 0.78 \\ AI-MoCo & **0.67** \\ FT-MoCo & \(\gg 1\) \\ \hline \end{tabular}
\end{table}
Table 4: Guaranteed generalisation error for CIFAR-10
Figure 3: Comparison of linear readout (LR), AI-MoCo (Ours), and fine-tuned MoCo (FT) in terms of parameter update cost (left) and clock time cost (mid, right) vs performance. Here, we present the pareto front for two datasets: 300W (regression) and CUB 200 (classification). (x) denotes corresponding FT baseline run with time constraint while (+) denotes FT baseline for intermediate iterations. We present pareto fronts for more datasets in the appendix.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline Methods & \multicolumn{2}{c|}{CUB} & \multicolumn{2}{c|}{Fluences} & \multicolumn{2}{c|}{FC 100} & \multicolumn{2}{c|}{Plant Disease} & \multicolumn{2}{c|}{300w} & \multicolumn{2}{c|}{LS Pose} & \multicolumn{2}{c|}{CelebA} & Rank \\ & (5,1) & (5,5) & (5,5) & (5,5) & (1,5) & (5,5) & (5,5) & (1,5) & (5,5) & \(s=0.05\) & \(s=0.2\) & \(s=0.05\) & \(s=0.2\) & \(s=0.05\) & \(s=0.2\) & \(s=0.05\) & \(s=0.2\) \\ \hline MoCo-v3 & **65.8** & 77.0 & 83.8 & 91.6 & 65.2 & **88.8** & **79.2** & **91.4** & 15.0 & 65.3 & 29.1 & 53.3 & 5.3 & 68.7 & 2.3 \\ MoCo-v3 & FT & 65.8 & 66.7 & 81.8 & 88.8 & 61.1 & 72.4 & 72.2 & 87.9 & 11.0 & 55.0 & 24.0 & 45.0 & 50.0 & 67.0 & 1.6 \\ \hline AI-MoCo-v3 & 65.6 & **77.2** & **84.2** & **92.7** & **67.8** & 79.3 & 76.8 & 84.6 & **33.3** & **74.6** & **48.0** & **50.0** & **55.4** & **72.3** & **1.3** \\ \hline \end{tabular}
\end{table}
Table 5: Few-shot classification and regression accuracy (%, \(R^{2}\)) of AI-MoCo-v3 based on ViTs pretrained on ImageNet-1k. Values are reported with 95% confidence intervals averaged over 2000 episodes on FC100, CUB200, and Plant Disease. (N, K) denotes N-way K-shot tasks. For regression tasks (300w, LS Pose, CelebA), we report downstream performance for different splits with train proportion given by \(s\). Full results for regression and classification are in Tab. 13 and Tab. 14.
## 7 Acknowledgements
This project was supported by the Royal Academy of Engineering under the Research Fellowship programme. Ruchika Chavhan was supported by Samsung AI Research, Cambridge.
|
2308.08077 | Sobolev sheaves on the plane | In this paper, we show that for any integer $k \in \mathbb{N}$ there exists a
Sobolev sheaf (in the sense of Lebeau) on any definable site of $\mathbb{R}^2$
that agrees with Sobolev spaces on cuspidal domains. We also provide a complete
computation of the cohomology of these sheaves using the notion of 'Good
direction' introduced by Valette. This paper serves as an introduction to a
more general project on the sheafification of Sobolev spaces in higher
dimensions. | M'hammed Oudrane | 2023-08-16T00:09:05Z | http://arxiv.org/abs/2308.08077v3 | # Sobolev sheaves on the plane
###### Abstract.
In this paper, we show that for any integer \(k\in\mathbb{N}\) there exists a Sobolev sheaf (in the sense of Lebeau) on any definable site of \(\mathbb{R}^{2}\) that agrees with Sobolev spaces on cuspidal domains. We also provide a complete computation of the cohomology of these sheaves using the notion of 'Good direction' introduced by Valette. This paper serves as an introduction to a more general project on the sheafification of Sobolev spaces in higher dimensions
Key words and phrases: Sobolev spaces, sheaf theory, o-minimal geometry
## 1. Introduction
Sheaves of functional spaces on the subanalytic topology (introduced by Kashiwara and Schapira in [6]) are important objects in algebraic analysis, which involves studying solutions of \(\mathcal{D}\)-modules as a generalization of linear partial differential equations. The most famous example is the sheaf of tempered distributions on the subanalytic site of a complex manifold, introduced by Kashiwara [5] to provide an elegant solution to the Riemann-Hilbert problem. In this paper, our focus is on sheaves composed of Sobolev functions. For \(s\in\mathbb{R}\), the presheaf of \(\mathbb{C}\)-vector spaces
\[U\subset\mathbb{R}^{n}\mapsto W^{s,2}(U)=\{F_{|U}\;:\;F\in W^{s,2}(\mathbb{R}^{ n})\},\]
is not always a sheaf (as shown by Lebeau [9]). This is related to the fact that if \(U\subset\mathbb{R}^{n}\) is an open subanalytic set with a non-Lipschitz boundary \(\partial U\), then the space \(W^{s,2}(U)\) doesn't exhibit favorable properties. More precisely, it is well known that in this case, Sobolev functions are not necessarily restrictions of Sobolev functions on \(\mathbb{R}^{n}\), and this gives rise to various issues. The aim of this paper is to find for \(s>0\) an optimal sheafification of Sobolev spaces \(W^{s,2}\) on the definable site (of a fixed o-minimal structure). Optimal in the sense that for \(U\subset\mathbb{R}^{n}\), the space \(W^{s,2}(U)\) will be modified only if it is necessary.
In [9], Lebeau proved that for any \(s<0\), there exists an object \(\mathcal{F}^{s}\) in the derived category of sheaves on the subanalytic topology of \(\mathbb{R}^{n}\), such that for any open bounded subanalytic set \(U\subset\mathbb{R}^{n}\) with Lipschitz boundary, the complex \(\mathcal{F}^{s}(U)\) is concentrated in degree \(0\) and equal to the classical Sobolev space \(W^{s,2}(U)\). The proof relies on the linear subanalytic site introduced by Guillermou and Schapira in [3].
For \(k\in\mathbb{N}\), we construct a sheaf \(\mathcal{F}^{k}\) of distributions on the definable site of \(\mathbb{R}^{2}\) such that if \(U\subset\mathbb{R}^{2}\) is a small (from the metric point of view) open set then
\[\mathcal{F}^{k}(U)=W^{k,2}(U).\]
In a more formal way, our main progress in this paper will be:
**Main result:** Let \(\mathcal{A}\) be an o-minimal structure on the real field \((\mathbb{R},+,\cdot)\). Then, for any \(k\in\mathbb{N}\), there exists a sheaf \(\mathcal{F}^{k}\) on the definable site (associated to \(\mathcal{A}\)) of \(\mathbb{R}^{2}\) such that, for any \(U\subset\mathbb{R}^{2}\) open definable bounded L-regular cell, we have \(\mathcal{F}^{k}(U)=W^{k,2}(U)\). Moreover, for any \(U\subset\mathbb{R}^{2}\) open definable bounded and for any \(j>1\), we have
\[H^{j}(U,\mathcal{F}^{k})=0.\]
Additionally, if \(U\) has no punctured disk singularities, then
\[H^{j}(U,\mathcal{F}^{k})=\left\{\begin{array}{ll}\mathcal{F}(U)&\quad if\ j=0\\ \{0\}&\quad if\ j\geqslant 1.\end{array}\right.\]
This sheaf is unique (thanks to L-regular decomposition (see [15])) and agrees with \(W^{k,2}\) on domains with Lipschitz boundaries. The idea of the construction is based on understanding the local obstructions for \(W^{k,2}\) to be a sheaf. Note that again thanks to L-regular decomposition, for \(s\in]-\frac{1}{2},\frac{1}{2}[\) the presheaf \(U\mapsto W^{s,2}(U)\) is a sheaf (see Lebeau [9]). The obstructions are present for \(s>0\) big enough to have embedding of \(W^{s,2}\) into at least the space of continuous functions. In the two dimensional case, the construction is explicit because the Lipschitz structure of definable open subsets in \(\mathbb{R}^{2}\) has an explicit classification. The cohomology computation part is less obvious and requires more technical work.
The paper is organized as follows:
\(\bullet\)**Section 2:** We recall the basic concepts of o-minimal structures that are necessary for the context of this paper.
\(\bullet\)**Section 3:** We present the definitions of Sobolev spaces \(W^{s,2}\) as introduced in [9], along with the classical Stein extension theorem (Theorem 3.2).
\(\bullet\)**Section 4:** We provide the definitions of definable sites and sheaves on definable sites (after Kashiwara and Schapira [6]), followed by the discussion of the sheafification problem for Sobolev spaces.
\(\bullet\)**Section 5:** We discuss the spaces \(W^{s,2}\) for \(s\in]-\frac{1}{2},\frac{1}{2}[\).
\(\bullet\)**Section 6:** Here, we define the presheaf \(\mathcal{F}^{k}\) (for \(k\in\mathbb{N}\)) of Hilbert spaces on a fixed definable site of \(\mathbb{R}^{2}\) and subsequently prove its sheaf property.
\(\bullet\)**Section 7:** This is a core section focusing on a complete cohomology computation, establishing \(\mathcal{F}^{k}\) as a Sobolev sheaf.
\(\bullet\)**Section 8:** We give a sufficient condition to extend our method to Sobolev spaces \(W^{s,2}\) for \(s\in\mathbb{R}\). Notably, this offers a categorical proof of Lebeau's result from [9], affirming the validity of the Mayer-Vietoris sequence on domains with Lipschitz boundaries.
\(\bullet\)**Section 9:** Finally, we provide remarks and insights concerning challenges in higher dimensions and the case of Sobolev spaces with real regularities.
**Acknowledgment.** The author is very grateful to Adam Parusinski and Armin Rainer for their help and support, and the long hours of discussion they devoted to the
author during the preparation of this work. The author extends warm and profound thanks to Georges Comte and Guillaume Valette for reading this manuscript, and for the valuable comments, remarks, and suggestions. Part of this work was done at the University of Vienna, while the author was funded by the Austrian Science Fund (FWF) Project P 32905-N. I am very grateful for the kind hospitality and the excellent working conditions.
## 2. Definitions and Preliminaries
### Notations:
* \(\mathcal{P}(X)\) is the set of subsets of \(X\).
* \(B(v,r)\) represents the open ball with radius \(r\) and center \(v\), and \(\overline{B}(v,r)\) represents the closed ball with radius \(r\) and center \(v\). Alternatively, notations \(B_{r}(v)\) and \(\overline{B}_{r}(v)\) might be used.
* \(C(v,r)\) represents the sphere with radius \(r\) and center \(v\), i.e., \[C(v,r)=\overline{B}_{r}(v)\setminus B_{r}(v)=\{x\in\mathbb{R}^{n}\;:\;d(x,v)=r\}.\]
* For a definable set \(X\subset\mathbb{R}^{n}\), \(X^{reg}\) is the set of points \(x\in X\) where \(X\) is a \(C^{1}\) manifold nearby \(x\).
* For \(v\in\mathbb{R}^{n-1}\), \(\pi_{v}:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n-1}\) is the linear projection parallel to \(Vect((v,1))\).
* For a set \(A\subset\mathbb{R}^{n}\times\mathbb{R}^{m}\) and \(x_{0}\in\mathbb{R}^{n}\), we denote \(A_{x_{0}}\) as the set: \[A_{x_{0}}=\{y\in\mathbb{R}^{m}\;:\;(x_{0},y)\in A\}.\]
* \(\overline{A}\) refers to the topological closure of \(A\).
* For a set \(U\subset\mathbb{R}^{n}\), \(\partial U\) represents the boundary of \(U\), i.e., \(\partial U=\overline{U}\setminus U\).
* \(\mathbb{N}\) denotes the set of nonnegative integers.
* For a map \(f:A\to B\), \(\Gamma_{f}\) denotes the graph of \(f\).
* For two functions \(f:A\rightarrow[0,+\infty[\) and \(g:A\rightarrow[0,+\infty[\), we will write \(f\lesssim g\), if there is \(C>0\) such that \(f(x)\leqslant Cg(x)\) for all \(x\in A\).
* For two functions \(f:A\rightarrow\mathbb{R}\) and \(g:A\rightarrow\mathbb{R}\) with \(f<g\), \(\Gamma(A,f,g)\) (or simply \(\Gamma(f,g)\)) denotes the set: \[\Gamma(A,f,g)=\{(x,y)\in A\times\mathbb{R}\;:\;f(x)<y<g(x)\}.\]
* If \(u,v\in\mathbb{R}^{2}\setminus\{0\}\), \(\angle(u,v)\) represents the angle between \(u\) and \(v\) with respect to the anticlockwise orientation.
* For \(U\subset\mathbb{R}^{n}\) open, \(\mathcal{D}(U)\) represents the topological vector space of \(C^{\infty}\) functions with compact support, and \(\mathcal{D}^{\prime}(U)\) represents the space of continuous linear forms on \(\mathcal{D}(U)\).
* \(H^{j}(X,\mathcal{F})\) denotes the \(j\)-th cohomology group of the sheaf \(\mathcal{F}\) on the topological space \(X\).
* If \(\mathcal{A}\) is an o-minimal structure on the real field \((\mathbb{R},+,\cdot)\), then \(X_{\mathcal{A}}(\mathbb{R}^{n})\) represents the site on \(\mathbb{R}^{n}\) where open sets are open bounded definable (in \(\mathcal{A}\)) subsets of \(\mathbb{R}^{n}\), and coverings are finite. \(D^{+}(X_{\mathcal{A}}(\mathbb{R}^{n}))\) denotes the derived category of bounded
below complexes of sheaves on the site \(X_{\mathcal{A}}(\mathbb{R}^{n})\). If \(\mathcal{A}\) is the structure of globally subanalytic sets, then \(X_{sa}(\mathbb{R}^{n})\) is used instead of \(X_{\mathcal{A}}(\mathbb{R}^{n})\).
### O-minimal structures
An o-minimal structure on the field \((\mathbb{R},+,\cdot)\) is a sequence \(\mathcal{A}=(\mathcal{A}_{n})_{n\in\mathbb{N}}\) such that for any \(n\), we have:
* \(\mathcal{A}_{n}\) is a Boolean subalgebra of \(\mathcal{P}(\mathbb{R}^{n})\).
* \(\mathcal{A}_{n}\) contains all the real algebraic subsets of \(\mathbb{R}^{n}\).
* \(\pi(\mathcal{A}_{n})\subset\mathcal{A}_{n-1}\), where \(\pi:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n-1}\) is the standard projection.
* For all \((n,m)\in\mathbb{N}^{2}\): \(\mathcal{A}_{n}\times\mathcal{A}_{m}\subset\mathcal{A}_{n+m}\).
* For any \(A\in\mathcal{A}_{1}\), \(A\) is a finite union of points and intervals.
For a fixed o-minimal structure \(\mathcal{A}\):
* Elements of \(\mathcal{A}_{n}\) are called definable sets.
* If \(A\in\mathcal{A}_{n}\) and \(B\in\mathcal{A}_{m}\), then a map \(f:A\longrightarrow B\) is called a definable map if its graph is a definable set.
We refer to [20] for the fundamentals of o-minimal geometry.
**Cell decomposition:**
For a given positive integer \(p\), a definable set \(C\) in \(\mathbb{R}^{n}\) is referred to as a \(C^{p}\)-cell if:
case \(n=1\): \(C\) is either a point or an open interval.
case \(n\geq 2\): \(C\) is one of the following:
* \(C=\Gamma_{\phi}\) (the graph of \(\phi\)), where \(\phi:B\longrightarrow\mathbb{R}\) is a \(C^{p}\) definable function, and \(B\) is a \(C^{p}\)-cell in \(\mathbb{R}^{n-1}\).
* \(C=\Gamma(\phi,\varphi)=\{(x,y)\in B\times\mathbb{R}\;:\;\phi(x)<y<\varphi(x)\}\), where \(\phi\) and \(\varphi\) are two \(C^{p}\) definable functions on a \(C^{p}\)-cell \(B\), satisfying \(\phi<\varphi\) with the possibility of \(\phi=-\infty\) or \(\varphi=+\infty\).
A \(C^{p}\)-cell decomposition of \(\mathbb{R}^{n}\) is defined by induction as follows:
* A \(C^{p}\)-cell decomposition of \(\mathbb{R}\) is a finite partition consisting of points and open intervals.
* A \(C^{p}\)-cell decomposition of \(\mathbb{R}^{n}\) is a finite partition \(\mathcal{P}\) of \(\mathbb{R}^{n}\) by \(C^{p}\)-cells. It is required that \(\pi(\mathcal{P})\) is a \(C^{p}\)-cell decomposition of \(\mathbb{R}^{n-1}\), where \(\pi:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n-1}\) is the standard projection, and \(\pi(\mathcal{P})\) is the family:
\[\pi(\mathcal{P})=\{\pi(A)\;:\;A\in\mathcal{P}\}.\]
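For instance, a \(C^{p}\)-cell decomposition of \(\mathbb{R}^{2}\) compatible with the open unit disk \(B(0,1)\) is obtained by decomposing \(\mathbb{R}\) into the cells \(]-\infty,-1[\), \(\{-1\}\), \(]-1,1[\), \(\{1\}\), \(]1,+\infty[\), and taking, above the cell \(]-1,1[\), the two graphs of \(x\mapsto\pm\sqrt{1-x^{2}}\) together with the three bands they delimit, while above the remaining cells of \(\mathbb{R}\) one takes the full bands \(\Gamma(-\infty,+\infty)\). The disk is then exactly the cell \(\{(x,y)\;:\;-1<x<1,\ -\sqrt{1-x^{2}}<y<\sqrt{1-x^{2}}\}\).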
**Theorem 2.1**.: _Let \(p\in\mathbb{N}\) and \(\{X_{1},...,X_{l}\}\) be a finite family of definable subsets of \(\mathbb{R}^{n}\). Then there is a \(C^{p}\)-cell decomposition of \(\mathbb{R}^{n}\) compatible with this family, i.e. each \(X_{i}\) is a union of some cells._
Proof.: See [1] or [20].
Now we can define the dimension of a definable set. Take \(X\) a definable subset of \(\mathbb{R}^{n}\) and \(\mathcal{C}\) a cell decomposition of \(\mathbb{R}^{n}\) compatible with \(X\), then we define the dimension
\[\dim_{\mathcal{C}}(X)=\max\{\dim(C)\;:\;C\subset X\ \text{ and }\ C\in\mathcal{C}\}.\]
This number does not depend on \(\mathcal{C}\), we denote it by \(dim(X)\).
Throughout the text, we assume \(\mathcal{A}\) is an o-minimal structure on \((\mathbb{R},+,.)\).
### L-regular decomposition
L-regular cells (Lipschitz cells) were introduced by A. Parusinski to establish the existence of Lipschitz stratification for subanalytic sets ([15], see also [7]).
**Definition 2.2**.: Let \(X\subset\mathbb{R}^{n}\) be a definable subset. We say that \(X\) is L-regular if:
\(\bullet\): \(X\) is a point if \(\dim(X)=0\).
\(\bullet\): \(X\) is an open interval if \(\dim(X)=1\) and \(n=1\).
\(\bullet\): If \(\dim(X)=n\) (with \(n>1\)), then there exists \(X^{\prime}\subset\mathbb{R}^{n-1}\) that is L-regular, along with two \(C^{1}\) definable functions with bounded derivatives \(\phi_{1},\phi_{2}:X^{\prime}\longrightarrow\mathbb{R}\) where \(\phi_{1}<\phi_{2}\), satisfying
\[X=\{(x^{\prime},x_{n})\in X^{\prime}\times\mathbb{R}\ :\ \phi_{1}(x^{\prime})<x_ {n}<\phi_{2}(x^{\prime})\}.\]
\(\bullet\): If \(\dim(X)=k<n\), then \(X\) is the graph of a \(C^{1}\) definable map \(\phi:X^{\prime}\longrightarrow\mathbb{R}^{n-k}\) with bounded derivatives on \(Int(X^{\prime})\), where \(X^{\prime}\subset\mathbb{R}^{k}\) is L-regular and of dimension \(k\).
We will also say that \(A\) is L-regular if it becomes so after a linear change of coordinates.
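For instance, the cuspidal region \(\Gamma(]0,1[,0,x^{2})=\{(x,y)\;:\;0<x<1,\ 0<y<x^{2}\}\) is an L-regular cell: \(]0,1[\) is L-regular and the definable functions \(\phi_{1}\equiv 0\) and \(\phi_{2}(x)=x^{2}\) are \(C^{1}\) with derivatives bounded by \(2\) on \(]0,1[\), even though the boundary of this region is not Lipschitz at the origin. This illustrates that L-regular cells include the cuspidal domains on which the sheaves constructed below are required to agree with the classical Sobolev spaces.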
**Theorem 2.3**.: _Let \(X_{1},\ldots,X_{l}\) be definable subsets of \(\mathbb{R}^{n}\). Then, there exists a finite definable partition \((L_{k})_{k}\) of \(\bigcup_{i}X_{i}\) that is compatible with each \(X_{i}\), and each element \(L_{k}\) is \(L\)-regular._
Proof.: See [15] or [7].
Figure 1. Example of building L-regular cells by induction.
## 3. Hilbert Sobolev spaces revisited.
Let \(n\in\mathbb{N}\). We denote:
* \(\mathcal{S}(\mathbb{R}^{n})\) as the space of Schwartz functions (\(C^{\infty}\)-functions which, together with all their derivatives, decay at infinity faster than any inverse power of \(|x|\)).
* \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\) as the topological dual of \(\mathcal{S}(\mathbb{R}^{n})\).
And we have natural continuous injections
\[\mathcal{S}(\mathbb{R}^{n})\subset L^{2}(\mathbb{R}^{n})\subset\mathcal{S}^{ \prime}(\mathbb{R}^{n}).\]
We recall the Fourier Transform
\[u\in\mathcal{S}(\mathbb{R}^{n})\mapsto\widehat{u}\in\mathcal{S}(\mathbb{R}^{n }),\]
where
\[\widehat{u}(y)=\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{\mathbb{R}^{n}}e^{-iy\cdot x }u(x)dx. \tag{3.1}\]
By duality, the Fourier transform extends in a canonical way to \(\mathcal{S}^{\prime}(\mathbb{R}^{n})\). Finally, for \(s\in\mathbb{R}\), we recall the Sobolev space
\[W^{s,2}(\mathbb{R}^{n})=\{u\in\mathcal{S}^{\prime}(\mathbb{R}^{n})\;:\;\|u\|_{ W^{s,2}(\mathbb{R}^{n})}=\sqrt{\int_{\mathbb{R}^{n}}(1+|y|^{2})^{s}\left| \widehat{u}(y)\right|^{2}dy}<+\infty\},\]
with the natural dense inclusions (for \(s\geqslant 0\))
\[\mathcal{D}(\mathbb{R}^{n})\subset\mathcal{S}(\mathbb{R}^{n})\subset L^{2}( \mathbb{R}^{n})\subset W^{s,2}(\mathbb{R}^{n})\subset\mathcal{S}^{\prime}( \mathbb{R}^{n}).\]
An equivalent way to define \(W^{s,2}(\mathbb{R}^{n})\) is as follows:
* For \(k\in\mathbb{N}\) \[W^{k,2}(\mathbb{R}^{n})=\{f\in L^{2}(\mathbb{R}^{n})\;:\;\forall|\alpha| \leqslant k,\;\partial^{\alpha}f\in L^{2}(\mathbb{R}^{n})\},\] where \(\partial^{\alpha}f\) denotes the distributional derivative of \(f\) for \(\alpha\in\mathbb{N}^{n}\).
* For \(s\in]k,k+1[\) for some \(k\in\mathbb{N}\), then \(W^{s,2}\) is the interpolation space \[W^{s,2}(\mathbb{R}^{n})=[W^{k,2}(\mathbb{R}^{n}),W^{k+1,2}(\mathbb{R}^{n})]_{s -k}.\]
* For \(s<0\), \(W^{s,2}(\mathbb{R}^{n})\) is the topological dual \[W^{s,2}(\mathbb{R}^{n})=(W^{-s,2}(\mathbb{R}^{n}))^{\prime}.\]
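As a quick illustration of these definitions, and of the threshold \(s=\frac{1}{2}\) which will play a role in Section 5, take \(u=1_{]0,1[}\) on \(\mathbb{R}\). Then
\[\widehat{u}(y)=\frac{1}{\sqrt{2\pi}}\int_{0}^{1}e^{-iyx}dx=\frac{1}{\sqrt{2\pi}}\,\frac{1-e^{-iy}}{iy},\qquad|\widehat{u}(y)|^{2}=\frac{2-2\cos y}{2\pi\,y^{2}},\]
and since \(2-2\cos y\) is bounded with mean value \(2\) over periods, \(\int_{\mathbb{R}}(1+|y|^{2})^{s}|\widehat{u}(y)|^{2}dy\) is finite exactly when \(2s-2<-1\); that is, \(1_{]0,1[}\in W^{s,2}(\mathbb{R})\) if and only if \(s<\frac{1}{2}\).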
For a closed set \(F\subset\mathbb{R}^{n}\), we define \(W^{s,2}_{F}(\mathbb{R}^{n})\) to be the closed subspace of \(W^{s,2}(\mathbb{R}^{n})\) consisting of the distributions supported in \(F\), with the induced norm; below we use this for \(F=\mathbb{R}^{n}\setminus U\) with \(U\subset\mathbb{R}^{n}\) open.
Take \(s\geqslant 0\) and \(r=s-[s]\). It is classical that (we refer to [9]) \(f\in W^{s,2}(\mathbb{R}^{n})\) if and only if \(\partial^{\alpha}f\in L^{2}(\mathbb{R}^{n})\) for all \(|\alpha|\leqslant[s]\) and (if \(r>0\))
\[\frac{\partial^{\alpha}f(x)-\partial^{\alpha}f(y)}{|x-y|^{\frac{n}{2}+r}}\in L ^{2}(\mathbb{R}^{n}\times\mathbb{R}^{n})\]
for all \(|\alpha|=[s]\). The norm of \(W^{s,2}(\mathbb{R}^{n})\) is given by
\[\|f\|_{W^{s,2}(\mathbb{R}^{n})}=\sum_{|\alpha|\leqslant[s]}\|\partial^{\alpha }f\|_{L^{2}(\mathbb{R}^{n})}+1_{r>0}\sum_{|\alpha|=[s]}\|\frac{\partial^{ \alpha}f(x)-\partial^{\alpha}f(y)}{|x-y|^{\frac{n}{2}+r}}\|_{L^{2}(\mathbb{R} ^{n}\times\mathbb{R}^{n})}. \tag{3.2}\]
For \(s\in\mathbb{R}\) and \(U\subset\mathbb{R}^{n}\) open, we define the space (following Lebeau [9])
\[W^{s,2}(U)=\{f\in\mathcal{D}^{\prime}(U)\;:\;\exists F\in W^{s,2}(\mathbb{R}^{n} )\;\text{such that}\;F_{|U}=f\}. \tag{3.3}\]
With the norm
\[\|f\|_{W^{s,2}(U)}=\inf\{\|F\|_{W^{s,2}(\mathbb{R}^{n})}\;:\;F_{|U}=f\}.\]
We have the quotient Hilbert structure on \(W^{s,2}(U)\) induced by the natural isomorphism between \(W^{s,2}(U)\) and
\[W^{s,2}(\mathbb{R}^{n})\left/W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n })\right.\]
Since \(W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n})\) is a closed subspace of the Hilbert space \(W^{s,2}(\mathbb{R}^{n})\), it is complemented by its orthogonal
\[W^{s,2}(\mathbb{R}^{n})=W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n}) \oplus(W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n}))^{\perp}.\]
This induces an extension operator \(\mathcal{T}:W^{s,2}(U)\longrightarrow W^{s,2}(\mathbb{R}^{n})\) given by
\[\mathcal{T}(f)=\operatorname{Proj}_{(W^{s,2}_{\mathbb{R}^{n}\setminus U}( \mathbb{R}^{n}))^{\perp}}(F)\]
for any choice of \(F\in W^{s,2}(\mathbb{R}^{n})\) such that \(F\mid_{U}=f\), where
\[\operatorname{Proj}_{(W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n}))^{ \perp}}:W^{s,2}(\mathbb{R}^{n})\rightarrow(W^{s,2}_{\mathbb{R}^{n}\setminus U }(\mathbb{R}^{n}))^{\perp}\]
is the orthogonal projection.
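Let us note that \(\mathcal{T}\) is well defined and realizes the infimum in the norm above: if \(F,F^{\prime}\in W^{s,2}(\mathbb{R}^{n})\) both restrict to \(f\) on \(U\), then \(F-F^{\prime}\) is supported in \(\mathbb{R}^{n}\setminus U\), so both extensions have the same projection and \(\mathcal{T}(f)\) does not depend on the chosen extension; moreover, writing \(F=\mathcal{T}(f)+G\) with \(G\in W^{s,2}_{\mathbb{R}^{n}\setminus U}(\mathbb{R}^{n})\), orthogonality gives
\[\|F\|^{2}_{W^{s,2}(\mathbb{R}^{n})}=\|\mathcal{T}(f)\|^{2}_{W^{s,2}(\mathbb{R}^{n})}+\|G\|^{2}_{W^{s,2}(\mathbb{R}^{n})},\]
hence \(\|f\|_{W^{s,2}(U)}=\|\mathcal{T}(f)\|_{W^{s,2}(\mathbb{R}^{n})}\), and \(\mathcal{T}\) is a continuous linear right inverse of the restriction map.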
**The usual definition of Sobolev spaces:** In our definition, we follow [9]. Note that the usual Sobolev spaces \(W^{s,2}_{\star}\) (see Lions and Magenes [11]) are defined as follows:
* If \(k\in\mathbb{N}\), then \[W^{k,2}_{\star}(U)=\{f\in L^{2}(U)\;:\;\partial^{\alpha}f\in L^{2}(U)\;\text{ for all}\;|\alpha|\leq k\}.\]
* If \(s\in]k,k+1[\), then \[W^{s,2}_{\star}(U)=[W^{k,2}_{\star}(U),W^{k+1,2}_{\star}(U)]_{s-k}.\] And we have \[W^{s,2}_{\star}(U)=\{f\in L^{2}(U)\;:\;\partial^{\alpha}f\in W^{s-k,2}_{\star} (U)\;\text{for all}\;|\alpha|\leq k\}.\]
* For \(s<0\), \(W^{s,2}_{\star}(U)\) is defined to be the topological dual space of \(W^{-s,2}_{\star}(U)\).
**Definition 3.1**.: A bounded open set \(U\subset\mathbb{R}^{n}\) is said to be Lipschitz (or with Lipschitz boundary) if and only if for any \(q\in\overline{U}\setminus U\), there exists an orthogonal transformation \(\phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) with \(\phi(q)=0\), a Lipschitz function \(f:\mathbb{R}^{n-1}\rightarrow\mathbb{R}\), and \(r>0\) such that
\[\phi(U\cap B(q,r))=\{(y^{\prime},y_{n})\in B(0,r)\;:\;y_{n}>f(y^{\prime})\}.\]
Thanks to the Stein extension Theorem (along with the functoriality of interpolations, as discussed in Section 8), for a Lipschitz domain \(U\subset\mathbb{R}^{n}\) and \(s\geqslant 0\), we have
\[W^{s,2}(U)=W^{s,2}_{\star}(U). \tag{3.4}\]
In fact, the Stein extension Theorem provides even more (we refer to Stein [16]):
**Theorem 3.2**.: _Take \(U\subset\mathbb{R}^{n}\) open bounded with Lipschitz boundary. Then there is a linear continuous extension operator \(Ext:L^{2}(U)\longrightarrow L^{2}(\mathbb{R}^{n})\) such that for \(k\in\mathbb{N}\) the restriction of \(Ext\) to \(W^{k,2}_{\star}(U)\) induces a linear continuous operator_
\[Ext_{W^{k,2}_{\star}(U)}:W^{k,2}_{\star}(U)\longrightarrow W^{k,2}(\mathbb{R}^{n}).\]
**Proposition 3.3**.: _Let \(U\subset\mathbb{R}^{n}\) be an open bounded with Lipschitz boundary and \(s\geqslant 0\). Let \(k=[s]\) and \(r=s-[s]\). Then \(f\in W^{s,2}(U)\) if and only if:_
1. _For all_ \(|\alpha|\leqslant k\)_, we have_ \(\partial^{\alpha}f\in L^{2}(U)\)_._
2. _If_ \(r>0\)_, then for all_ \(|\alpha|=k\) (3.5) \[\int\int_{U\times U}\frac{|\partial^{\alpha}f(x)-\partial^{\alpha}f(y)|^{2}}{|x-y|^{n+2r}}dxdy<+\infty.\]
Proof.: This result follows as a classical consequence of (3.2) and (3.4) (as shown by Lemma 3.5 in [9]).
## 4. The definable site and the main problem.
Let \(X_{\mathcal{A}}(\mathbb{R}^{n})\) be the category of open bounded definable sets in \(\mathbb{R}^{n}\) (the morphisms are the inclusions, and the empty set is included as an object). We endow \(X_{\mathcal{A}}(\mathbb{R}^{n})\) with the Grothendieck topology (note that this definition works for more general categories):
\(S\subset X_{\mathcal{A}}(\mathbb{R}^{n})\) is a covering of \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\) if and only if \(S\) is finite and \(U=\bigcup_{O\in S}O\).
We call this the definable site associated to \(\mathcal{A}\).
**Definition 4.1**.: A sheaf of \(\mathbb{C}\)-vector spaces on the site \(X_{\mathcal{A}}(\mathbb{R}^{n})\) is a contravariant functor
\[\mathcal{F}:X_{\mathcal{A}}(\mathbb{R}^{n})\rightarrow\mathbb{C}\text{-vector spaces},\]
such that for any \(U,V\in X_{\mathcal{A}}(\mathbb{R}^{n})\), the sequence
\[0\rightarrow\mathcal{F}(U\cup V)\rightarrow\mathcal{F}(U)\oplus\mathcal{F}( V)\rightarrow\mathcal{F}(U\cap V)\]
is exact.
This is equivalent (see Proposition 6.4.1 in [6]) to saying that if \(S=\{O_{1},...,O_{l}\}\subset X_{\mathcal{A}}(\mathbb{R}^{n})\) is a cover of \(O\in X_{\mathcal{A}}(\mathbb{R}^{n})\), and \(f_{i}\in\mathcal{F}(O_{i})\) such that
\[f_{i}\mid_{O_{i}\cap O_{j}}=f_{j}\mid_{O_{i}\cap O_{j}}\text{ for all }i\neq j\text{ with }O_{i}\cap O_{j}\neq\emptyset, \tag{4.1}\]
then there is a unique \(f\in\mathcal{F}(O)\) such that \(f\mid_{O_{i}}=f_{i}\) for \(i=1,...,l\).
If, in addition, we have that for any \(U,V\in X_{\mathcal{A}}(\mathbb{R}^{n})\) the sequence
\[0\rightarrow\mathcal{F}(U\cup V)\rightarrow\mathcal{F}(U)\oplus\mathcal{F}( V)\rightarrow\mathcal{F}(U\cap V)\to 0\]
is exact, then \(\mathcal{F}\) is an _acyclic_ sheaf (see Proposition 2.14 in [3]).
For a more comprehensive exploration of this topic, we refer to Kashiwara and Schapira [6].
The following example was introduced by Kashiwara [5] to prove the Riemann-Hilbert correspondence:
**Example 4.2**.: We denote by \(X_{sa}(\mathbb{R}^{n})\) the site associated to the o-minimal structure of globally subanalytic sets. We define the trace of distributions on open bounded subanalytic sets
\[\mathcal{T}:X_{sa}(\mathbb{R}^{n})\rightarrow\mathbb{R}\text{-vector spaces},\]
such that for \(U\subset\mathbb{R}^{n}\) we have
\[\mathcal{T}(U)=\{f\in\mathcal{D}^{\prime}(U)\;:\;\exists F\in\mathcal{D}^{ \prime}(\mathbb{R}^{n})\text{ such that }F_{|U}=f\}.\]
One can show that \(f\in\mathcal{T}(U)\) if and only if there are \(C>0\), \(m\in\mathbb{N}\), and \(r\in\mathbb{N}\) such that for any \(\phi\in C_{c}^{\infty}(U)\) we have
\[|<f,\phi>|\leqslant C\sum_{|\alpha|\leqslant m}\sup_{x\in U}\left(\frac{| \partial^{\alpha}\phi(x)|}{d(x,\partial U)^{r}}\right).\]
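For example, on \(U=]0,1[\subset\mathbb{R}\), the locally integrable function \(f(x)=\frac{1}{x}\) belongs to \(\mathcal{T}(U)\): since \(d(x,\partial U)\leqslant x\) on \(U\), for any \(\phi\in C_{c}^{\infty}(U)\) we have
\[|<f,\phi>|\leqslant\int_{0}^{1}\frac{|\phi(x)|}{x}dx\leqslant\sup_{x\in U}\left(\frac{|\phi(x)|}{d(x,\partial U)}\right),\]
so the estimate holds with \(C=1\), \(m=0\), \(r=1\) (and indeed \(\frac{1}{x}\) extends to \(\mathbb{R}\) as the principal value distribution). By contrast, \(e^{1/x}\) is the classical example of a smooth function on \(U\) satisfying no such estimate and admitting no extension to a distribution on \(\mathbb{R}\).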
Then \(\mathcal{T}\) is an acyclic sheaf on the subanalytic site \(X_{sa}(\mathbb{R}^{n})\), which means that for any open bounded subanalytic sets \(U_{1}\) and \(U_{2}\) in \(\mathbb{R}^{n}\), the sequence
\[0\rightarrow\mathcal{T}(U_{1}\cup U_{2})\rightarrow\mathcal{T}(U_{1})\oplus \mathcal{T}(U_{2})\rightarrow\mathcal{T}(U_{1}\cap U_{2})\to 0\]
is exact. Indeed, take \(U_{1},U_{2}\subset\mathbb{R}^{n}\) open bounded subanalytic sets, and consider \(f\in\mathcal{D}^{\prime}(U_{1}\cup U_{2})\) such that \(f_{|U_{1}}\in\mathcal{T}(U_{1})\) and \(f_{|U_{2}}\in\mathcal{T}(U_{2})\). This means there exist \(C_{1}>0\), \(C_{2}>0\), \(m_{1}\in\mathbb{N}\), \(m_{2}\in\mathbb{N}\), \(r_{1}\in\mathbb{N}\), and \(r_{2}\in\mathbb{N}\) such that for any \(\phi\in C_{c}^{\infty}(U_{i})\) we have
\[\left|<f_{|U_{i}},\phi>\right|\leqslant C_{i}\sum_{|\alpha|\leqslant m_{i}} \sup_{x\in U_{i}}\left(\frac{|\partial^{\alpha}\phi(x)|}{d(x,\partial U_{i})^ {r_{i}}}\right).\]
By the Łojasiewicz inequality, there are \(C>0\) and \(m\in\mathbb{N}\) such that
\[d(x,\partial U_{1})+d(x,\partial U_{2})\geqslant C(d(x,\partial(U_{1}\cup U_{2})))^{m}\text{ for all }x\in U_{1}\cup U_{2}.\]
Take \((\varphi_{1},\varphi_{2})\) as a partition of unity associated to \((U_{1},U_{2})\). Thus, for \(\phi\in C_{c}^{\infty}(U_{1}\cup U_{2})\) we have
\[|<f,\phi>| = |<f,\varphi_{1}\phi+\varphi_{2}\phi>|\] \[\leqslant \left|<f_{|U_{1}},\varphi_{1}\phi>\right|+\left|<f_{|U_{2}},\varphi_{2}\phi>\right|\] \[\leqslant C_{1}\sum_{|\alpha|\leqslant m_{1}}\sup_{x\in U_{1}}\left(\frac{|\partial^{\alpha}(\varphi_{1}\phi)(x)|}{d(x,\partial U_{1})^{r_{1}}}\right)+C_{2}\sum_{|\alpha|\leqslant m_{2}}\sup_{x\in U_{2}}\left(\frac{|\partial^{\alpha}(\varphi_{2}\phi)(x)|}{d(x,\partial U_{2})^{r_{2}}}\right)\] \[\leqslant C^{\prime}\sum_{|\alpha|\leqslant\max(m_{1},m_{2})}\sup_{x\in U_{1}\cup U_{2}}\left(\frac{|\partial^{\alpha}\phi(x)|}{d(x,\partial(U_{1}\cup U_{2}))^{m\max(r_{1},r_{2})}}\right),\]
where \(C^{\prime}\) depends only on \(C\), \(C_{1}\), \(C_{2}\), and the derivatives of \(\varphi_{1}\) and \(\varphi_{2}\).
Hence, \(f\in\mathcal{T}(U_{1}\cup U_{2})\).
**Problem:** Given \(s>0\), is there a sheaf \(\mathcal{F}^{s}\) on the definable site \(X_{\mathcal{A}}(\mathbb{R}^{n})\) such that for any \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\) with Lipschitz boundary, we have
\[\mathcal{F}^{s}(U)=W^{s,2}(U)\text{ and }H^{j}(U,\mathcal{F}^{s})=0\text{ for }j>0?\]
Recall that for any contravariant functor (a presheaf) \(\mathcal{F}:X_{\mathcal{A}}(\mathbb{R}^{n})\longrightarrow\mathbb{C}\)-vector spaces, and \(x\in\mathbb{R}^{n}\), we denote by \(\mathcal{F}_{x}\) the set of germs of sections of \(\mathcal{F}\) at \(x\)
\[\mathcal{F}_{x}=\quad\varinjlim_{x\in U}\quad\mathcal{F}(U)=\sqcup_{x\in U} \mathcal{F}(U)\left/\right._{\sim},\]
where \(f_{1}\sim f_{2}\) if and only if there is a neighborhood \(V\subset U_{1}\cap U_{2}\) of \(x\) such that \(f_{1}\mid_{V}=f_{2}\mid_{V}\). There is a canonical sheaf \(\mathcal{F}_{+}\) associated to \(\mathcal{F}\) defined by
\[U\in X_{\mathcal{A}}(\mathbb{R}^{n})\mapsto\mathcal{F}_{+}(U)\subset F(U, \sqcup_{x\in U}\mathcal{F}_{x}),\]
where \(f\in\mathcal{F}_{+}(U)\) if for any \(x\in U\), \(f(x)\in\mathcal{F}_{x}\) and there is a neighborhood \(V\subset U\) of \(x\) and \(\phi\in\mathcal{F}(V)\) such that for every \(y\in V\), \(f(y)\) is a representative of \(\phi\) in \(\mathcal{F}_{y}\).
For \(s>0\), consider \(W^{s,2}_{+}\), the canonical sheaf associated to \(W^{s,2}\) on the site \(X_{\mathcal{A}}(\mathbb{R}^{n})\). However, if \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\) has Lipschitz boundary, it can be shown that there is no way to identify \(W^{s,2}_{+}(U)\) with \(W^{s,2}(U)\), which makes the canonical sheafification method unsuitable for our purpose. Our goal is to create a sheaf out of Sobolev spaces while retaining their advantageous properties, since Sobolev spaces work effectively on domains with Lipschitz boundary. For \(s<0\), a sheafification in the derived category \(D^{+}(X_{sa}(\mathbb{R}^{n}))\) of sheaves on the subanalytic site was provided by Lebeau [9]:
**Theorem 4.3**.: _For \(s<0\), there exists an object \(\mathcal{F}^{s}\in D^{+}(X_{sa}(\mathbb{R}^{n}))\) such that if \(U\subset\mathbb{R}^{n}\) is a bounded open subanalytic set with Lipschitz boundary, the complex \(\mathcal{F}^{s}(U)\) is concentrated in degree \(0\) and is equal to \(W^{s,2}(U)\)._
## 5. The spaces \(W^{s,2}\) for \(s\in]-\frac{1}{2},\frac{1}{2}[\).
Using the results of Parusinski [14], it was noticed in [9] that for \(s\in]-\frac{1}{2},\frac{1}{2}[\), the presheaf \(U\mapsto W^{s,2}(U)\) is an acyclic sheaf on the subanalytic site. For the convenience of the reader, we provide detailed explanations of why this is true in the o-minimal case. Let us first recall a classical result on fractional Sobolev spaces (see Theorem 11.2 in [11]). Take \(s\in]0,\frac{1}{2}[\) and \(U\subset\mathbb{R}^{n}\) an open bounded set with Lipschitz boundary. Then there is a \(C>0\) such that for any \(f\in W^{s,2}(U)\), we have
\[\left\|\frac{f(x)}{d(x,\mathbb{R}^{n}\setminus U)^{s}}\right\|_{L^{2}(U)} \leqslant C\|f\|_{W^{s,2}(U)}. \tag{5.1}\]
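Note that the restriction \(s<\frac{1}{2}\) is essential here: already for \(U=]0,1[\) and \(f\equiv 1\), which belongs to \(W^{s,2}(U)\) for every \(s\geqslant 0\), one has
\[\int_{0}^{1}\frac{dx}{d(x,\mathbb{R}\setminus U)^{2s}}=2\int_{0}^{1/2}\frac{dx}{x^{2s}}<+\infty\quad\text{if and only if}\quad s<\frac{1}{2},\]
so the estimate (5.1) fails for \(s\geqslant\frac{1}{2}\).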
**Fact**: Fix \(s\in]-\frac{1}{2},\frac{1}{2}[\) and let \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\) have Lipschitz boundary. Then the linear operator
\[\begin{array}{c}1_{U}:W^{s,2}(\mathbb{R}^{n})\longrightarrow W^{s,2}( \mathbb{R}^{n})\\ f\mapsto 1_{U}f\end{array}\]
is well defined.
Proof.: The case of \(s=0\) is obvious. For \(0<s<\frac{1}{2}\), consider \(f\in W^{s,2}(\mathbb{R}^{n})\). It is clear that \(1_{U}f\in L^{2}(\mathbb{R}^{n})\), so by (3.2) we need to prove that
\[L=\int\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\frac{\left|1_{U}f(x)-1_{U}f(y) \right|^{2}}{\left|x-y\right|^{n+2s}}dxdy<+\infty. \tag{5.2}\]
But
\[L=\int\int_{U\times U}\frac{\left|f(x)-f(y)\right|^{2}}{\left|x-y\right|^{n+2s}}dxdy +2\int_{U}\left|f(x)\right|^{2}\left(\int_{U^{c}}\frac{1}{\left|x-y\right|^{n+2 s}}dy\right)dx.\]
Since \(f\in W^{s,2}(\mathbb{R}^{n})\), by (5.1) it is enough to prove that
\[d(x,\mathbb{R}^{n}\setminus U)^{-2s}\lesssim\int_{\mathbb{R}^{n}\setminus U}\frac{1}{\left|x-y\right|^{n+2s}}dy\lesssim d(x,\mathbb{R}^{n}\setminus U)^{-2s}, \tag{5.3}\]
where \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\) has Lipschitz boundary. Using a partition of unity and local bi-Lipschitz charts flattening the boundary (the contribution of points far from \(\partial U\) being immediate), we may assume that
\[U=\{(y^{\prime},y_{n})\in\mathbb{R}^{n}\ :\ y_{n}>0\}. \tag{5.4}\]
A simple computation shows that
\[d(x,\mathbb{R}^{n}\setminus U)^{-2s}=\frac{1}{\left|x_{n}\right|^{2s}}\lesssim\int_{\mathbb{R}^{n}\setminus U}\frac{1}{\left|x-y\right|^{n+2s}}dy\lesssim\frac{1}{\left|x_{n}\right|^{2s}}=d(x,\mathbb{R}^{n}\setminus U)^{-2s}.\]
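Explicitly, for \(x\in U\) (so \(x_{n}>0\)), the change of variables \(u=x-y\) followed by the scaling \(u=x_{n}v\) gives
\[\int_{\{y_{n}<0\}}\frac{dy}{|x-y|^{n+2s}}=\int_{\{u_{n}>x_{n}\}}\frac{du}{|u|^{n+2s}}=x_{n}^{-2s}\int_{\{v_{n}>1\}}\frac{dv}{|v|^{n+2s}}=c_{n,s}\,x_{n}^{-2s},\]
where \(c_{n,s}<+\infty\) precisely because \(s>0\); this is the claimed two-sided estimate.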
For \(s\in]-\frac{1}{2},0[\), consider \(T\in W^{s,2}(\mathbb{R}^{n})\). We have
\[\begin{array}{c}1_{U}T:W^{-s,2}(\mathbb{R}^{n})\longrightarrow\mathbb{C}\\ f\mapsto<1_{U}T,f>=<T,1_{U}f>.\end{array}\]
By the case \(s\in]0,\frac{1}{2}[\) (applied with \(-s\) in place of \(s\)), \(1_{U}T\) is a well-defined continuous linear form on \(W^{-s,2}(\mathbb{R}^{n})\), hence lies in \(W^{s,2}(\mathbb{R}^{n})\).
Denote \(\mathcal{A}(\mathbb{R}^{n})\) as the algebra generated by the characteristic functions of open bounded definable sets in \(\mathbb{R}^{n}\), that is
\[\mathcal{A}(\mathbb{R}^{n})=\left\{\sum_{i\in I}m_{i}1_{U_{i}}\ :\ I\ \text{finite},\ m_{i}\in\mathbb{Z},\ \text{and}\ U_{i}\in X_{\mathcal{A}}(\mathbb{R}^{n})\right\}.\]
Then we have Parusinski's result in [14]:
**Theorem 5.1**.: _The algebra \(\mathcal{A}(\mathbb{R}^{n})\) is generated by the characteristic functions of Lipschitz definable domains._
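As a simple illustration of Theorem 5.1, consider the punctured disk \(W=B(0,1)\setminus\{0\}\), which is not Lipschitz at the origin, together with the Lipschitz definable domains
\[U_{1}=\{(x,y)\in W:\;y>x\;\text{or}\;y<-x\},\qquad U_{2}=\{(x,y)\in W:\;y>-x\;\text{or}\;y<x\},\]
\[O_{1}=\{(x,y)\in W:\;y>|x|\},\qquad O_{2}=\{(x,y)\in W:\;y<-|x|\}.\]
A pointwise check gives
\[1_{W}=1_{U_{1}}+1_{U_{2}}-1_{O_{1}}-1_{O_{2}},\]
so \(1_{W}\) is indeed a \(\mathbb{Z}\)-combination of characteristic functions of Lipschitz definable domains; the same four sets reappear in Sections 6 and 7.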
Now we explain why for \(s\in]-\frac{1}{2},\frac{1}{2}[\), the presheaf \(W^{s,2}\) is an acyclic sheaf on the definable site \(X_{\mathcal{A}}(\mathbb{R}^{n})\), that is for any \(U,V\in X_{\mathcal{A}}(\mathbb{R}^{n})\) the sequence
\[0\to W^{s,2}(U\cup V)\to W^{s,2}(U)\oplus W^{s,2}(V)\to W^{s,2}(U\cap V)\to 0\]
is exact.
Proof.: The map \(W^{s,2}(U)\oplus W^{s,2}(V)\to W^{s,2}(U\cap V)\) is surjective: by the definition of \(W^{s,2}\), any \(f\in W^{s,2}(U\cap V)\) extends to some \(F\in W^{s,2}(\mathbb{R}^{n})\), and \((F_{|U},0)\) is a preimage. For exactness in the middle, take \((f,g)\in W^{s,2}(U)\oplus W^{s,2}(V)\) such that \(f_{\left|U\cap V\right.}=g_{\left|U\cap V\right.}\). Take \((\widehat{f},\widehat{g})\in(W^{s,2}(\mathbb{R}^{n}))^{2}\) such that
\[\widehat{f}_{\left|U\right.}=f\quad\text{and}\quad\widehat{g}_{\left|V\right. }=g.\]
By the previous fact and Theorem 5.1, we have \(h=1_{U}\widehat{f}+1_{V}\widehat{g}-1_{U\cap V}\widehat{f}\in W^{s,2}(\mathbb{R}^{n})\). Then \(h_{\left|U\cup V\right.}\in W^{s,2}(U\cup V)\), \((h_{\left|U\cup V\right.})_{\left|U\right.}=f\), and \((h_{\left|U\cup V\right.})_{\left|V\right.}=g\), which proves exactness in the middle; injectivity of the first map is clear, so the sequence is exact.
## 6. Construction of the sheaf \(\mathcal{F}^{k}\) on \(\mathbb{R}^{2}\) for \(k\in\mathbb{N}\).
Before we begin, we fix the anticlockwise orientation of the plane \(\mathbb{R}^{2}\) generated by the vectors \(\overrightarrow{e_{1}}=(1,0)\) and \(\overrightarrow{e_{2}}=(0,1)\). Given two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\), and \(r>0\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), we denote by \(R(r,\gamma_{1},\gamma_{2})\) the open definable subset (see Figure 2):
\[R(r,\gamma_{1},\gamma_{2})=\{P\in\mathbb{R}^{2}\;:\;P\in B(p_{0},r)\;\text{and }\;P\;\text{is between }\gamma_{1}\;\text{and}\;\gamma_{2}\}.\]
Formally,
\[P\in R(r,\gamma_{1},\gamma_{2})\;\text{if and only if }\angle(\gamma_{1}\cap C(p_{0},\|P-p_{0}\|),\overrightarrow{e_{1}})<\angle(\overrightarrow{p_{0}P},\overrightarrow{e_{1}})<\angle(\gamma_{2}\cap C(p_{0},\|P-p_{0}\|),\overrightarrow{e_{1}}).\]
Here,
\[C(p_{0},\|P-p_{0}\|)=\{x\in\mathbb{R}^{2}\;:\;\|x-p_{0}\|=\|P-p_{0}\|\}.\]
If we parameterize \(\gamma_{1}\) and \(\gamma_{2}\) by the distance to \(p_{0}\) (assume that \(p_{0}=0\), which is always possible up to a translation):
\[\gamma_{1}(t)=te^{i\theta_{1}(t)}\;\text{and}\;\gamma_{2}(t)=te^{i\theta_{2}(t)}\;\text{with}\;t\in[0,r[\;\text{and}\;0<\theta_{2}(t)-\theta_{1}(t)<2\pi.\]
Then,
\[R(r,\gamma_{1},\gamma_{2})=\{te^{i\theta}\;:\;t\in]0,r[\;\text{and}\;\theta_{1 }(t)<\theta<\theta_{2}(t)\}.\]
**Remark 6.1**.: We can always choose \(r\) to be small enough such that \(R(r,\gamma_{1},\gamma_{2})\) is connected and the circle \(C(p_{0},r^{\prime})\) (for \(r^{\prime}<r\)) is transverse to \(\gamma_{1}\) and \(\gamma_{2}\) at the intersection points (which consist of only two points).
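For example, with \(p_{0}=0\), \(\theta_{1}\equiv 0\) and \(\theta_{2}\equiv\frac{\pi}{2}\), the set \(R(r,\gamma_{1},\gamma_{2})\) is the open quarter disk \(\{te^{i\theta}\;:\;0<t<r,\;0<\theta<\frac{\pi}{2}\}\); taking instead \(\theta_{1}\equiv 0\) and \(\theta_{2}(t)=t\) produces a region pinched along the positive \(x\)-axis, which is the model of the cuspidal case \((C_{3})\) below.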
### The local nature of open definable sets in \(\mathbb{R}^{2}\)
Let \(U\) be a bounded connected open definable subset of \(\mathbb{R}^{2}\). By choosing a cell decomposition of \(\mathbb{R}^{2}\) compatible with \(U\) and \(\partial U\), we can prove that for any \(p_{0}\in\partial U\) there is \(r>0\) such that we have one of the following cases:
\((C_{1})\): **Punctured disk.**\(B_{r}(p_{0})\cap U=B_{r}(p_{0})\setminus\{p_{0}\}\).
Figure 3. The \((C_{1})\) case.
\((C_{2})\): **Sector.** There are two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))\neq 0,2\pi\), and
\[B_{r}(p_{0})\cap U=R(r,\gamma_{1},\gamma_{2}).\]
\((C_{3})\): **Cusp.** There are two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=0\), and
\[B_{r}(p_{0})\cap U=R(r,\gamma_{1},\gamma_{2}).\]
\((C_{4})\): **Cusp complement.** There are two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=2\pi\), and
\[B_{r}(p_{0})\cap U=R(r,\gamma_{1},\gamma_{2}).\]
\((C_{5})\): **Arc complement.** There exists a definable \(C^{1}\)-curve \(\gamma:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma(0)=p_{0}\) and
\[B_{r}(p_{0})\cap U=B_{r}(p_{0})\setminus Im(\gamma).\]
Figure 4. The \((C_{2})\) case.
Figure 5. The \((C_{3})\) case.
\((C_{6})\): \(B_{r}(p_{0})\cap U\) is a disjoint union of copies of open sets of the types \((C_{2})\), \((C_{3})\), and \((C_{4})\).
### Local definition of the sheaf \(\mathcal{F}^{k}\)
**Lemma 6.2**.: _Let \(U\), \(V\) be two Lipschitz definable bounded open subsets of \(\mathbb{R}^{n}\) such that \(U\cup V\) and \(U\cap V\) are Lipschitz. For any \(s\in\mathbb{R}_{+}\), the sequence of Hilbert spaces_
\[0\to W^{s,2}(U\cup V)\to W^{s,2}(U)\oplus W^{s,2}(V)\to W^{s,2}(U\cap V)\to 0\]
_is exact._
Proof.: See [9] for the proof (or see Section 8 for a categorical proof).
**Remark 6.3**.: For \(s\in\mathbb{N}\), the requirement for \(U\cap V\) to be Lipschitz in the statement of Lemma 6.2 is not necessary.
Proof.: Take \(s=k\in\mathbb{N}\). By (3.4), for \(\Omega=U\cup V\subset\mathbb{R}^{n}\), we have
\[W^{k,2}(\Omega)=\{f\in L^{2}(\Omega)\;:\;\forall\alpha\in\mathbb{N}^{n}\;:| \alpha|\leqslant k\Longrightarrow\partial^{\alpha}f\in L^{2}(\Omega)\;\},\]
where \(\partial^{\alpha}f\) is the distributional derivative of \(f\). The Hilbert structure of \(W^{k,2}(\Omega)\) is given by
\[\|f\|_{W^{k,2}(\Omega)}^{2}=\sum_{|\alpha|\leqslant k}\|\partial^{\alpha}f\| _{L^{2}(\Omega)}^{2}.\]
Now, consider \((f,g)\in W^{k,2}(U)\oplus W^{k,2}(V)\) such that \(f\mid_{U\cap V}=g\mid_{U\cap V}\). There exists \(H\in L^{2}(U\cup V)\) such that \(H\mid_{U}=f\in W^{k,2}(U)\) and \(H\mid_{V}=g\in W^{k,2}(V)\). We aim to show that for any \(\alpha\in\mathbb{N}^{n}\) with \(|\alpha|\leqslant k\), there exists \(h_{\alpha}\in L^{2}(U\cup V)\) such that \(\partial^{\alpha}H=h_{\alpha}\) (in the distributional sense).
Let \((\varphi_{U},\varphi_{V})\) be a partition of unity associated to \((U,V)\). For any \(\phi\in C_{c}^{\infty}(U\cup V)\), we have
Figure 8. The \((C_{6})\) case.
Figure 7. The \((C_{5})\) case.
\[<\partial^{\alpha}H,\phi> =<\partial^{\alpha}H,\varphi_{U}\phi>+<\partial^{\alpha}H,\varphi_{V}\phi>\] \[=(-1)^{|\alpha|}\int_{U}H\,\partial^{\alpha}(\varphi_{U}\phi)+(-1)^{|\alpha|}\int_{V}H\,\partial^{\alpha}(\varphi_{V}\phi)\] \[=(-1)^{|\alpha|}\int_{U}f\,\partial^{\alpha}(\varphi_{U}\phi)+(-1)^{|\alpha|}\int_{V}g\,\partial^{\alpha}(\varphi_{V}\phi)\] \[=\int_{U}\partial^{\alpha}f\,(\varphi_{U}\phi)+\int_{V}\partial^{\alpha}g\,(\varphi_{V}\phi)\] \[=\int_{U\cup V}(\varphi_{U}\partial^{\alpha}f+\varphi_{V}\partial^{\alpha}g)\,\phi\] \[=\int_{U\cup V}h_{\alpha}\,\phi.\]
Here, \(h_{\alpha}:=\varphi_{U}\partial^{\alpha}f+\varphi_{V}\partial^{\alpha}g\in L ^{2}(U\cup V)\), which completes the proof.
From now on, we consider \(k\in\mathbb{N}\). Let \(U\) be a connected open definable bounded subset of \(\mathbb{R}^{2}\). We define the \(\mathbb{C}\)-vector space \(\widehat{\mathcal{F}}^{k}(U)\) in the following special cases:
1. If \(U=B_{r}(p_{0})\setminus\{p_{0}\}\), we can assume \(p_{0}=(0,0)\) and \(r=1\). In this case, we can decompose \(U=U_{1}\cup U_{2}\), where \[U_{1}=\{(x,y)\in U:\;y>x\;\text{or}\;y<-x\}\;\text{and}\;U_{2}=\{(x,y)\in U:\;y >-x\;\text{or}\;y<x\}.\] We have the sequence \[\begin{CD}0@>{}>{}>W^{k,2}(U)@>{d_{0}}>{}>W^{k,2}(U_{1})\oplus W^{k,2}(U_{2}) @>{d_{1}}>{}>W^{k,2}(U_{1}\cap U_{2})\end{CD}\] It follows from Lemma 6.2 that \[Ker(d_{1})=\{f\in L^{2}(U)\;:\;f\mid_{L}\in W^{k,2}(L)\;\text{for any}\;L\;\text{Lipschitz in}\;U\}=W^{k,2}_{\star}(U).\] But we have a fact (we refer to Exercise 11.9 in [10]) about Sobolev spaces: **Fact:** Take \(\Omega\subset\mathbb{R}^{n}\) open and \(W\subset\Omega\) such that \(\mathcal{H}^{n-1}(W)=0\), where \(\mathcal{H}^{n-1}\) is the \((n-1)\)-Hausdorff measure on \(\mathbb{R}^{n}\). Then we have \[W^{k,2}_{\star}(\Omega\setminus W)=W^{k,2}_{\star}(\Omega).\] That gives \[W^{k,2}_{\star}(U)=W^{k,2}_{\star}(B_{r}(p_{0}))=W^{k,2}(B_{r}(p_{0})).\] So, this means that the sequence \[\begin{CD}0@>{}>{}>W^{k,2}(U)@>{d_{0}}>{}>W^{k,2}(U_{1})\oplus W^{k,2}(U_{2}) @>{d_{1}}>{}>W^{k,2}(U_{1}\cap U_{2})\end{CD}\] is exact. Therefore, we can define \(\widehat{\mathcal{F}}^{k}(U)\) by \[\widehat{\mathcal{F}}^{k}(U):=W^{k,2}(U).\]
2. If \(U\) is connected with Lipschitz boundary, then we define \(\widehat{\mathcal{F}}^{k}(U)=W^{k,2}(U)\).
3. If \(U\) is a cusp, meaning that there are \(r>0\) and two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=0\), and \[U=R(r,\gamma_{1},\gamma_{2}).\]
Then we define: \(\widehat{\mathcal{F}}^{k}(U)=W^{k,2}(U)\).
\((C_{4})\): If \(U\) is a complement of a cusp, meaning that there are \(r>0\) and two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=2\pi\), and
\[U=R(r,\gamma_{1},\gamma_{2}).\]
Take \(\gamma_{3},\gamma_{4}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{3}(0)=\gamma_{4}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{3}^{\prime}(0))\notin\{0,2\pi\}\), and \(\angle(\gamma_{4}^{\prime}(0),\gamma_{2}^{\prime}(0))\notin\{0,2\pi\}\).
In this case, the sequence
\[0\to W^{k,2}(U)\to W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\oplus W^{k,2}(R(r, \gamma_{3},\gamma_{2}))\to W^{k,2}(R(r,\gamma_{3},\gamma_{4}))\]
is not exact in general.
**Example 6.4**.: Assume that \(k>2\), then we have the continuous embedding \(W^{k,2}(\mathbb{R}^{2})\hookrightarrow C^{1}(\mathbb{R}^{2})\). Take \(U,V\in X_{\mathcal{A}}(\mathbb{R}^{2})\) defined by
\[U=(]-1,1[\times]-1,0[)\cup(]-1,0[\times]-1,1[),\]
and
\[V=(]-1,0[\times]-1,1[)\cup\{(x,y)\ :\ 0\leqslant x<1\ \text{and}\ x^{k+1}<y<1\}.\]
Define \(F\in L^{2}(U\cup V)\) by \(F\mid_{U}=0\), \(F(x,y)=x^{k+1}\) for \(x\in[0,1[\) and \(x^{k+1}<y<1\). It is clear that \(F\mid_{U}\in W^{k,2}(U)\) and \(F\mid_{V}\in W^{k,2}(V)\) but \(F\notin W^{k,2}(U\cup V)\): if \(F\in W^{k,2}(U\cup V)\), there would be a \(C^{1}\) extension \(\widehat{F}\) of \(F\) to \(\mathbb{R}^{2}\), which is impossible because
\[\lim_{x\to 0}\frac{\widehat{F}(x,x^{k+1})-\widehat{F}(x,0)}{x^{k+1}-0}=1.\]
_Question 1_.: What happens in this case if we replace \(k\) by \(s\in[\frac{1}{2},2]\)? Is the sequence
\[0\to W^{s,2}(U)\to W^{s,2}(R(r,\gamma_{1},\gamma_{4}))\oplus W^{s,2}(R(r, \gamma_{3},\gamma_{2}))\to W^{s,2}(R(r,\gamma_{3},\gamma_{4}))\]
exact?
Now we define \(\widehat{\mathcal{F}}^{k}(U)\) to be the kernel of the map
\[J:W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\oplus W^{k,2}(R(r,\gamma_{3},\gamma_{2} ))\to W^{k,2}(R(r,\gamma_{3},\gamma_{4})).\]
We use the notation
\[\widehat{\mathcal{F}}^{k}(U)=Ker(J)=K(\gamma_{3},\gamma_{4}).\]
We need to prove that \(K(\gamma_{3},\gamma_{4})\) doesn't depend on \(\gamma_{3}\) and \(\gamma_{4}\), but only on \(U\). Take \(\alpha,\beta:[0,a[\longrightarrow\mathbb{R}^{2}\) two definable curves that satisfy the same conditions as \(\gamma_{3}\) and \(\gamma_{4}\). Let's prove that
\[K(\gamma_{3},\gamma_{4})=K(\alpha,\beta).\]
We can identify \(K(\gamma_{3},\gamma_{4})\) and \(K(\alpha,\beta)\) with the spaces
\[\begin{array}{ll}K(\gamma_{3},\gamma_{4})=\{f\in\mathcal{D}^{\prime}(U)\ :&f_{|R(r,\gamma_{1},\gamma_{4})}\in W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\ \text{and}\ f_{|R(r,\gamma_{3},\gamma_{2})}\in\\ &W^{k,2}(R(r,\gamma_{3},\gamma_{2}))\}\end{array}\]
\[\begin{array}{ll}K(\alpha,\beta)=\{f\in\mathcal{D}^{\prime}(U)\ :&f_{|R(r,\gamma_{1},\beta)}\in W^{k,2}(R(r,\gamma_{1},\beta))\ \text{and}\ f_{|R(r,\alpha,\gamma_{2})}\in\\ &W^{k,2}(R(r,\alpha,\gamma_{2}))\}.\end{array}\]
We can distinguish four possible cases:
Case 1: \(Im(\alpha)\subset R(r,\gamma_{3},\gamma_{4})\) and \(Im(\beta)\subset R(r,\gamma_{3},\gamma_{4})\).
Case 2: \(Im(\alpha)\subset R(r,\gamma_{3},\gamma_{4})\) and \(Im(\beta)\subset R(r,\gamma_{4},\gamma_{2})\).
Case 3: \(Im(\alpha)\subset R(r,\gamma_{1},\gamma_{3})\) and \(Im(\beta)\subset R(r,\gamma_{3},\gamma_{4})\).
Case 4: \(Im(\alpha)\subset R(r,\gamma_{1},\gamma_{3})\) and \(Im(\beta)\subset R(r,\gamma_{1},\gamma_{3})\).
The first case is obvious, because in this case we have \(R(r,\gamma_{1},\beta)\subset R(r,\gamma_{1},\gamma_{4})\) and \(R(r,\alpha,\gamma_{2})\subset R(r,\gamma_{3},\gamma_{2})\). The cases 3 and 4 can be proven using the same computation as Case 2.
**Proof in Case 2:** We will prove that \(K(\gamma_{3},\gamma_{4})\subset K(\alpha,\beta)\) (the other inclusion follows from the other cases). Take \(f\in K(\gamma_{3},\gamma_{4})\). In this case, since \(R(r,\alpha,\gamma_{2})\subset R(r,\gamma_{3},\gamma_{2})\), we have \(f_{|R(r,\alpha,\gamma_{2})}\in W^{k,2}(R(r,\alpha,\gamma_{2}))\). Now let's prove that
\[f_{|R(r,\gamma_{1},\beta)}\in W^{k,2}(R(r,\gamma_{1},\beta)).\]
Take \(c:[0,a[\longrightarrow\mathbb{R}^{2}\) a definable curve such that \(c(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),c^{\prime}(0))>0\), \(\angle(c^{\prime}(0),\gamma_{2}^{\prime}(0))>0\), \(\angle(\beta^{\prime}(0),c^{\prime}(0))>0\), and \(Im(c)\subset R(r,\beta,\gamma_{2})\). We can see that \(f_{|R(r,\gamma_{1},\gamma_{4})}\in W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\) and \(f_{|R(r,\gamma_{3},c)}\in W^{k,2}(R(r,\gamma_{3},c))\) (note that \(R(r,\gamma_{3},c)\subset R(r,\gamma_{3},\gamma_{2})\)). Now, by Lemma 6.2, the sequence
\[0\to W^{k,2}(R(r,\gamma_{1},c))\to W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\oplus W^{k,2}(R(r,\gamma_{3},c))\to W^{k,2}(R(r,\gamma_{3},\gamma_{4}))\]
is exact. Hence, \(f_{|R(r,\gamma_{1},c)}\in W^{k,2}(R(r,\gamma_{1},c))\), which implies \(f_{|R(r,\gamma_{1},\beta)}\in W^{k,2}(R(r,\gamma_{1},\beta))\).
\((C_{5})\) If there exists a definable \(C^{1}\)-curve \(\gamma:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma(0)=p_{0}\) and
\[U=B_{r}(p_{0})\setminus Im(\gamma),\]
take \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) two definable \(C^{1}\)-curves such that \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))\neq 2\pi\).
By Sobolev embeddings and a continuity argument, we can find an example such that
\[0\to W^{k,2}(U)\to W^{k,2}(R(r,\gamma,\gamma_{2}))\oplus W^{k,2}(R(r,\gamma_{ 1},\gamma))\to W^{k,2}(R(r,\gamma_{1},\gamma_{2}))\]
is not exact.
**Example 6.5**.: Assume that \(k>1\). So we have an embedding \(W^{k,2}(\mathbb{R}^{2})\hookrightarrow C^{0}(\mathbb{R}^{2})\). Take \(U,V\in X_{\mathcal{A}}(\mathbb{R}^{2})\) defined by
\[U=(]-1,1[\times]-1,0[)\cup(]-1,0[\times]-1,1[),\]
and
\[V=(]-1,0[\times]-1,1[)\cup(]-1,1[\times]0,1[).\]
Define \(F\in L^{2}(U\cup V)\) by \(F\mid_{U}=0\) and \(F(x,y)=e^{-\frac{1}{x^{2}}}\) for \(0<x<1\) and \(0<y<1\). Then \(F\mid_{U}\in W^{k,2}(U)\) and \(F\mid_{V}\in W^{k,2}(V)\), but \(F\notin W^{k,2}(U\cup V)\), as it cannot be extended to a continuous function on \(\mathbb{R}^{2}\).
_Question 2_.: What happens in this case if we replace \(k\) with \(s\in[\frac{1}{2},1]\)? Is the sequence
\(0\to W^{s,2}(U)\to W^{s,2}(R(r,\gamma,\gamma_{2}))\oplus W^{s,2}(R(r,\gamma_{1}, \gamma))\to W^{s,2}(R(r,\gamma_{1},\gamma_{2}))\)
exact?
This motivates us to define \(\widehat{\mathcal{F}}^{k}(U)\) to be the kernel of the map
\[J:W^{k,2}(R(r,\gamma,\gamma_{2}))\oplus W^{k,2}(R(r,\gamma_{1},\gamma))\to W^{ k,2}(R(r,\gamma_{1},\gamma_{2})).\]
That is,
\[\widehat{\mathcal{F}}^{k}(U)=Ker(J)=K(\gamma_{1},\gamma_{2}).\]
Applying the same techniques as in the previous case, we can show that \(K(\gamma_{1},\gamma_{2})\) does not depend on \(\gamma_{1}\) and \(\gamma_{2}\).
**Remark 6.6**.: Note that this is a special case of the previous case.
\((C_{6})\) If \(U\) is as described in case \((C_{6})\), we define \(\widehat{\mathcal{F}}^{k}(U)\) to be the direct sum of the sections of \(\widehat{\mathcal{F}}^{k}\) on the connected components of \(U\cap B_{r}(p_{0})\).
### The global definition of \(\mathcal{F}^{k}\) on the site \(X_{\mathcal{A}}(\mathbb{R}^{2})\).
Take \(k\in\mathbb{N}\). For every \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\), we define \(\mathcal{F}^{k}(U)\) by
\[\mathcal{F}^{k}(U):=\{f\in W^{k,2}_{loc}(U)\ :\ \text{for each}\ x\in\partial U,\ \exists r>0\ \text{such that}\ B(x,r)\cap U\ \text{is of one of the types}\ (C_{1}),\ldots,(C_{6})\ \text{and}\ f_{|B(x,r)\cap U}\in\widehat{\mathcal{F}}^{k}(B(x,r)\cap U)\}.\]
By the definition of \(\mathcal{F}^{k}\) and assuming that \(f\) is supported in \((U\cup V)\cap B(x,r)\), it is enough to prove that \(f\mid_{(U\cup V)\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))= \widehat{\mathcal{F}}^{k}(R(r,\gamma_{1},\gamma_{4}))\) knowing that \(f\mid_{U\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}(U\cap B(x,r))\) and \(f\mid_{V\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}(V\cap B(x,r))\). We will discuss several cases for this:
\(\bullet\)**case(1)**\(\angle(\gamma_{1}^{\prime}(0),\gamma_{4}^{\prime}(0))=0\)**:** In this case, everything is a cusp near \(x\). So we can find \(U^{\prime}\) and \(V^{\prime}\) Lipschitz such that \(U^{\prime}\cup V^{\prime}\) is Lipschitz, \(U\cap B(x,r)\subset U^{\prime}\), \(V\cap B(x,r)\subset V^{\prime}\), and \(U^{\prime}\cap V^{\prime}=(U\cap V)\cap B(x,r)\). In this case, we have
\[\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))=W^{k,2}((U\cup V)\cap B(x,r))\]
\[\widehat{\mathcal{F}}^{k}(U\cap B(x,r))=W^{k,2}(U\cap B(x,r))\]
\[\widehat{\mathcal{F}}^{k}(V\cap B(x,r))=W^{k,2}(V\cap B(x,r))\]
Take \(f_{U^{\prime}}\in W^{k,2}(U^{\prime})\) an extension of \(f\mid_{U\cap B(x,r)}\) and \(f_{V^{\prime}}\in W^{k,2}(V^{\prime})\) an extension of \(f\mid_{V\cap B(x,r)}\), and define \(F\in\mathcal{D}^{\prime}(U^{\prime}\cup V^{\prime})\) by gluing \(f_{U^{\prime}}\) and \(f_{V^{\prime}}\). By Lemma 6.2 we have that \(F\in W^{k,2}(U^{\prime}\cup V^{\prime})\) and since \(F\mid_{(U\cup V)\cap B(x,r)}=f\mid_{(U\cup V)\cap B(x,r)}\), \(f\in W^{k,2}((U\cup V)\cap B(x,r))=\widehat{\mathcal{F}}^{k}((U\cup V)\cap B (x,r))\).
\(\bullet\)**case(2)**\(\angle(\gamma_{1}^{\prime}(0),\gamma_{4}^{\prime}(0))\neq 0,2\pi\)**:** In this case, either \(U\) is Lipschitz or \(V\) is Lipschitz. If both are Lipschitz, then the proof follows from Lemma 6.2. Let's assume that \(U\) is not Lipschitz. In this case, we can find \(U^{\prime}\) Lipschitz such that \(U^{\prime}\cup V\) is Lipschitz, \(U\cap B(x,r)\subset U^{\prime}\), and \(U^{\prime}\cap V=(U\cap V)\cap B(x,r)\). As in the previous case, we have
\[\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))=W^{k,2}((U\cup V)\cap B(x,r))\]
\[\widehat{\mathcal{F}}^{k}(U\cap B(x,r))=W^{k,2}(U\cap B(x,r))\]
\[\widehat{\mathcal{F}}^{k}(V\cap B(x,r))=W^{k,2}(V\cap B(x,r))\]
Take \(f_{U^{\prime}}\in W^{k,2}(U^{\prime})\) an extension of \(f\mid_{U\cap B(x,r)}\), and define \(F\in\mathcal{D}^{\prime}(U^{\prime}\cup V)\) by gluing \(f_{U^{\prime}}\) and \(f\mid_{V}\). By Lemma 6.2 we have that \(F\in W^{k,2}(U^{\prime}\cup V)\) and since \(F\mid_{(U\cup V)\cap B(x,r)}=f\mid_{(U\cup V)\cap B(x,r)}\), \(f\in W^{k,2}((U\cup V)\cap B(x,r))=\widehat{\mathcal{F}}^{k}((U\cup V)\cap B( x,r))\).
\(\bullet\)**case(3)**\(\angle(\gamma_{1}^{\prime}(0),\gamma_{4}^{\prime}(0))=2\pi\)**:**
\(\bullet\)**Subcase 3.1:** If \(U\cap B(x,r)\) and \(V\cap B(x,r)\) are Lipschitz, then by definition we have
\[\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))=K(\gamma_{2},\gamma_{3})\]
And this gives that \(f\mid_{(U\cup V)\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))\).
\(\bullet\)**Subcase 3.2:** If \(\angle(\gamma_{1}^{\prime}(0),\gamma_{3}^{\prime}(0))=2\pi\) and \(\angle(\gamma_{2}^{\prime}(0),\gamma_{4}^{\prime}(0))=2\pi\), then we can find \(\alpha\) and \(\beta\) in \((U\cap V)\cap B(x,r)\) with starting point \(x\) such that
\[\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))=K(\alpha,\beta)\]
And since \(f\mid_{R(r,\gamma_{1},\beta)}\in W^{k,2}(R(r,\gamma_{1},\beta))\) and \(f\mid_{R(r,\alpha,\gamma_{4})}\in W^{k,2}(R(r,\alpha,\gamma_{4}))\), we have \(f\mid_{(U\cup V)\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))\).
\(\bullet\)**Subcase 3.3:** If \(\angle(\gamma_{1}^{\prime}(0),\gamma_{3}^{\prime}(0))=0\) and \(\angle(\gamma_{2}^{\prime}(0),\gamma_{4}^{\prime}(0))=2\pi\), then we can find \(\alpha,\ \beta:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(Im(\beta),\ Im(\alpha)\subset V\cap B(x,r)\), and
\[\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))=K(\alpha,\beta)\]
We have \(f\mid_{R(r,\alpha,\gamma_{4})}\in W^{k,2}(R(r,\alpha,\gamma_{4}))\), and by applying **case(2)** to \(R(r,\gamma_{1},\gamma_{3})\) and \(R(r,\gamma_{2},\beta)\), we also deduce that \(f\mid_{R(r,\gamma_{1},\beta)}\in W^{k,2}(R(r,\gamma_{1},\beta))\); hence \(f\mid_{(U\cup V)\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))\).
\(\bullet\)**Subcase 3.4:** If \(\angle(\gamma_{1}^{\prime}(0),\gamma_{3}^{\prime}(0))=2\pi\) and \(\angle(\gamma_{2}^{\prime}(0),\gamma_{4}^{\prime}(0))=0\), then it is symmetric to Subcase 3.3.
**Remark 6.7**.: Note that the case where \(\gamma_{1}(t)=\gamma_{4}(t)\) is included in **case(3)**.
**Step2:** We don't assume here the local connectivity of \(U\), \(V\), and \(U\cup V\).
In this case, there is a finite number of definable curves (with starting point \(x\))
\(\gamma_{1},\lambda_{1}...,\gamma_{m},\lambda_{m}:[0,a[\longrightarrow\mathbb{R }^{2},\)\(\alpha_{1},\beta_{1}...,\alpha_{l},\beta_{l}:[0,a[\longrightarrow\mathbb{R }^{2}\) such that
\[B(x,r)\cap U=\sqcup_{i}R(r,\gamma_{i},\lambda_{i})\text{ and }B(x,r)\cap V=\sqcup_{i}R(r, \alpha_{i},\beta_{i}).\]
Take \(f\in\mathcal{D}^{\prime}((U\cup V)\cap B(x,r))\) such that \(f\mid_{U\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U)\cap B(x,r))\) and \(f\mid_{V\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((V)\cap B(x,r))\), clearly this implies that
\(f\mid_{R(r,\gamma_{i},\lambda_{i})}\in\widehat{\mathcal{F}}^{k}(R(r,\gamma_{ i},\lambda_{i}))\) and \(f\mid_{R(r,\alpha_{j},\beta_{j})}\in\widehat{\mathcal{F}}^{k}(R(r,\alpha_{j}, \beta_{j}))\) for all \(i\) and \(j\).
We want to prove that \(f\mid_{(U\cup V)\cap B(x,r)}\in\widehat{\mathcal{F}}^{k}((U\cup V)\cap B(x,r))\). By the local definition of \(\widehat{\mathcal{F}}^{k}\), it is enough to prove that \(f\mid_{C}\in\widehat{\mathcal{F}}^{k}(C)\) for every connected component \(C\) of \((U\cup V)\cap B(x,r)\). So let \(C^{\prime}\) be such a connected component; we can reorder the curves \(\gamma_{1},\lambda_{1},...,\gamma_{m},\lambda_{m},\alpha_{1},\beta_{1},...,\alpha_{l},\beta_{l}\) to find definable curves \(c_{1},...,c_{n}\) such that
\(C^{\prime}=\cup_{i}R(r,c_{i},c_{i+1})\), \(f\mid_{R(r,c_{i},c_{i+1})}\in\widehat{\mathcal{F}}^{k}(R(r,c_{i},c_{i+1}))\), and
\(R(r,c_{i},c_{i+1})\cap R(r,c_{i+2},c_{i+3})\neq\emptyset\) for any \(i\in\{1,...,n-3\}\).
Using induction and **Step 1**, we deduce that \(f\mid_{C^{\prime}}\in\widehat{\mathcal{F}}^{k}(C^{\prime})\).
\(\bullet\)**Case(B):** Suppose now that the assumptions of **Case(A)** do not hold. Since we assumed that the germs \((\partial U,x)\) and \((\partial V,x)\) are not comparable, the only nontrivial case is when \(U\), \(V\in\{C_{2},...,C_{6}\}\) and \(U\cup V\) is like \(C_{1}\). Let \(L\) be a Lipschitz open subset in \(U\cup V\). If \(x\notin\overline{L}\), then \(f\mid_{L}\in W^{k,2}(L)\) because for any \(p\in\overline{L}\) there is a neighborhood \(O_{p}\) of \(p\) in \(U\) or \(V\) such that \(f\mid_{O_{p}}\in W^{k,2}(O_{p})\). Now, if \(x\in\overline{L}\), then near \(x\) the set \(L\) is like \(C_{2}\) and is covered by two open sets \(U_{L}\in\{C_{2},...,C_{6}\}\) and \(V_{L}\in\{C_{2},...,C_{6}\}\) such that \(f\mid_{U_{L}}\in\widehat{\mathcal{F}}^{k}(U_{L})\) and \(f\mid_{V_{L}}\in\widehat{\mathcal{F}}^{k}(V_{L})\); by the discussion of **Case(A)**, it follows that \(f\mid_{L}\in\widehat{\mathcal{F}}^{k}(L)=W^{k,2}(L)\).
**Remark 6.8**.: Take \(k\in\mathbb{N}\). By analyzing each case, we can show that
1. Let \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\) be of one of the types \(C_{1}\) to \(C_{6}\). Then, we have \[\mathcal{F}^{k}(U)=\widehat{\mathcal{F}}^{k}(U).\]
2. If \(W\in X_{\mathcal{A}}(\mathbb{R}^{2})\) has only cuspidal singularities (every point of \(\partial W\) is either a Lipschitz boundary point or a cusp of type \(C_{3}\)), then \[\mathcal{F}^{k}(W)=W^{k,2}(W).\]
Consequently, if \(U\) and \(V\) belong to \(X_{\mathcal{A}}(\mathbb{R}^{2})\) such that \(U\), \(V\), \(U\cap V\), and \(U\cup V\) possess only cuspidal singularities, the sequence \[0\to W^{k,2}(U\cup V)\to W^{k,2}(U)\oplus W^{k,2}(V)\to W^{k,2}(U\cap V)\to 0\]
is exact.
\(\bullet\) For any \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\), the space \(\mathcal{F}^{k}(U)\) naturally carries a Hilbert structure. Consider \(\mathcal{L}=(L_{1},L_{2},...,L_{m})\) as an L-regular decomposition of \(U\). Since each open L-regular set in \(\mathbb{R}^{2}\) only contains cuspidal singularities, the following mapping
\[\mathcal{N}_{\mathcal{L}}:\mathcal{F}^{k}(U)\longrightarrow\mathbb{R},\quad f\mapsto\mathcal{N}_{\mathcal{L}}(f)=\Big(\sum_{\dim(L_{i})=2}\|f_{|L_{i}}\|^{2}_{W^{k,2}(L_{i})}\Big)^{\frac{1}{2}},\]
defines a Hilbert structure on \(\mathcal{F}^{k}(U)\) that is independent of \(\mathcal{L}\). Furthermore, if \(U\) exclusively has cuspidal singularities, this norm coincides with the Sobolev norm \(\|\cdot\|_{W^{k,2}(U)}\).
Proof.: Let's address each part of the proof step by step:
\(\bullet\)**(1)** We proceed by considering different cases. The cases \(C_{1}\) and \(C_{2}\) follow straightforwardly from the fact that \(U\) has Lipschitz boundary near any \(x\in\partial U\) (except for the center of the punctured disk). The case \(C_{6}\) is a consequence of the additive property of \(\mathcal{F}^{k}\) and the other cases. Therefore, we focus on proving \(C_{3}\) and \(C_{4}\) (where \(C_{5}\) is analogous to \(C_{4}\)).
\(\bullet\)\(C_{3}\)**:** In this case, \(U=R(r,\alpha,\beta)\) represents a cusp between angles \(\alpha\) and \(\beta\). If \(f\in\mathcal{F}^{k}(U)\), then for any \(x\in\overline{U}\), there exists \(r_{x}>0\) such that \(f\mid_{U_{r_{x}}(x)}\in\widehat{\mathcal{F}}^{k}(U_{r_{x}}(x))=W^{k,2}(U_{r_{ x}}(x))\). This holds because locally, on the boundary of \(U\), the types are limited to \(C_{2}\) and \(C_{3}\). Thus, by using a partition of unity argument, we find that \(f\in W^{k,2}(U)=\widehat{\mathcal{F}}^{k}(U)\). Similarly, if \(f\in\widehat{\mathcal{F}}^{k}(U)=W^{k,2}(U)\), it is evident that \(f\in\mathcal{F}^{k}(U)\) since \(W^{k,2}\) is always a subspace of \(\mathcal{F}^{k}\).
\(\bullet\)\(C_{4}\)**:** In this case, we have two definable \(C^{1}\)-curves \(\gamma_{1},\gamma_{2}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{1}(0)=\gamma_{2}(0)=p_{0}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{2}^{\prime}(0))=2\pi\), and
\[U=R(r,\gamma_{1},\gamma_{2}).\]
Let \(\gamma_{3},\gamma_{4}:[0,a[\longrightarrow\mathbb{R}^{2}\) such that \(\gamma_{3}(0)=\gamma_{4}(0)=p_{0}\in\mathbb{R}^{2}\), \(\angle(\gamma_{1}^{\prime}(0),\gamma_{3}^{\prime}(0))>0\), and \(\angle(\gamma_{4}^{\prime}(0),\gamma_{2}^{\prime}(0))>0\). Consequently,
\[\widehat{\mathcal{F}}^{k}(U)=\{f\in\mathcal{D}^{\prime}(U)\;:\;f_{|R(r,\gamma _{1},\gamma_{4})}\in W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\text{ and }f_{|R(r,\gamma_{3},\gamma_{2})}\in\]
\[W^{k,2}(R(r,\gamma_{3},\gamma_{2}))\}.\]
For \(f\in\widehat{\mathcal{F}}^{k}(U)\) and \(x\in\partial U\), we can choose a sufficiently large \(r\) so that \(U_{r}(x)=U\) and \(f\mid_{U_{r}(x)}\in\widehat{\mathcal{F}}^{k}(U_{r}(x))\), implying \(f\in\mathcal{F}^{k}(U)\). Conversely, consider \(f\in\mathcal{F}^{k}(U)\). For the point \(p_{0}\), we can find \(r^{\prime}>0\) such that \(U_{r^{\prime}}(p_{0})=R(r^{\prime},\gamma_{1},\gamma_{2})\) and \(f\mid_{U_{r^{\prime}}(p_{0})}\in\widehat{\mathcal{F}}^{k}(U_{r^{\prime}}(p_{0}))\) due to the definition. This leads to
\[\begin{array}{c}\widehat{\mathcal{F}}^{k}(U_{r^{\prime}}(p_{0}))=\{f\in \mathcal{D}^{\prime}(U)\;:\;f_{|R(r^{\prime},\gamma_{1},\gamma_{4})}\in W^{k, 2}(R(r^{\prime},\gamma_{1},\gamma_{4}))\text{ and }f_{|R(r^{\prime},\gamma_{3},\gamma_{2})}\in \\ W^{k,2}(R(r^{\prime},\gamma_{3},\gamma_{2}))\}\quad(\star).\end{array}\]
Considering that \(U\) is Lipschitz near each point \(x\in\partial U\setminus\{p_{0}\}\), it follows that \(f\) is Sobolev near each of these points. Combining this with \((\star)\) shows that \(f_{|R(r,\gamma_{1},\gamma_{4})}\in W^{k,2}(R(r,\gamma_{1},\gamma_{4}))\) and \(f_{|R(r,\gamma_{3},\gamma_{2})}\in W^{k,2}(R(r,\gamma_{3},\gamma_{2}))\), implying \(f\in\widehat{\mathcal{F}}^{k}(U)\).
\(\bullet\)**(2)** When \(W\in X_{\mathcal{A}}(\mathbb{R}^{2})\) only possesses cuspidal singularities, consider any point \(x\in\overline{W}\). There exists \(r_{x}>0\) such that \(W_{r_{x}}(x)\) is either Lipschitz or a standard cusp. Therefore, \(\widehat{\mathcal{F}}^{k}(W_{r_{x}}(x))=W^{k,2}(W_{r_{x}}(x))\). By compactness of \(\overline{W}\), we can extract a finite subcover \((W_{r_{x_{i}}}(x_{i}))_{i}\) of the covering \((W_{r_{x}}(x))_{x\in\overline{W}}\) and take a partition of unity \((\phi_{i})_{i}\) subordinate to it, so that
\[f=\sum_{i}\phi_{i}\,f\mid_{W_{r_{x_{i}}}(x_{i})}\in W^{k,2}(W).\]
This establishes (2), from which the exactness statement on cuspidal domains follows.
\(\bullet\) **(3)** The result in this part is obvious from the \(L\)-regular decomposition and part (2) established in this remark.
Thus, we have demonstrated each part of the remark.
**Notation:** For \(k\in\mathbb{N}\) and \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\) with only cuspidal singularities, we denote by \(E_{U}\) a linear extension operator
\[E_{U}:W^{k,2}(U)\longrightarrow W^{k,2}(\mathbb{R}^{2})\] \[f\mapsto E_{U}(f)\text{ with }(E_{U}(f))_{|U}=f.\]
## 7. Cohomology of the sheaf \(\mathcal{F}^{k}\).
For the cohomology computation, we need to introduce the concept of "good directions".
**Good directions:** Consider a definable subset \(A\subset\mathbb{R}^{n}\) and a unit vector \(\lambda\in\mathbb{S}^{n-1}\). We say that \(\lambda\) is a _good direction_ for \(A\) if there exists \(\varepsilon>0\) such that for all \(x\in A^{reg}\), we have
\[d(\lambda,T_{x}A^{reg})>\varepsilon.\]
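For instance, for the line \(A=\mathbb{R}\times\{0\}\subset\mathbb{R}^{2}\) (so that \(A^{reg}=A\) and \(T_{x}A^{reg}=\mathbb{R}\times\{0\}\) for every \(x\)), the vector \(\lambda=(0,1)\) is a good direction, since \(d(\lambda,T_{x}A^{reg})=1\) for all \(x\); more generally, any unit vector not lying in \(\mathbb{R}\times\{0\}\) is a good direction for \(A\). On the other hand, for the unit circle every \(\lambda\in\mathbb{S}^{1}\) is tangent to the circle at some point, so no good direction exists.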
Given \(\lambda\in\mathbb{S}^{n-1}\), let \(\pi_{\lambda}:\mathbb{R}^{n}\longrightarrow N_{\lambda}=<\lambda>^{\perp}\) be the orthogonal projection, and let \(x_{\lambda}\) denote the coordinate of \(x\) along \(<\lambda>\).
Consider definable sets \(A\subset\mathbb{R}^{n}\) and \(A^{\prime}\subset N_{\lambda}\), along with a definable function \(f:A^{\prime}\longrightarrow\mathbb{R}\). We say that \(A\) is the graph of the function \(f\) with respect to \(\lambda\) if
\[A=\{x\in\mathbb{R}^{n}\;:\;\pi_{\lambda}(x)\in A^{\prime}\text{ and }x_{\lambda}=f(\pi_{ \lambda}(x))\}.\]
It's important to note that \(\lambda\in\mathbb{S}^{n-1}\) is a good direction for \(A\) if and only if \(A\) is a union of graphs of Lipschitz definable functions over certain subsets of \(N_{\lambda}\). It's worth mentioning that the sphere \(\mathbb{S}^{n}\) doesn't possess any good direction. To address this, we need to partition it into finite subsets, each of which has a distinct good direction. However, there exists a beautiful theorem by G. Valette [18] which asserts that after applying a bi-Lipschitz deformation to the ambient space, a good direction can always be found:
**Theorem 7.1**.: _For any definable \(X\subset\mathbb{R}^{n}\) with \(\dim(X)<n\), there exists a definable bi-Lipschitz function \(h:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}\) such that \(h(X)\) exhibits a good direction \(\lambda\in\mathbb{S}^{n-1}\)._
**Definition 7.2**.: Take \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\) and \(\mathcal{U}=(U_{i})_{i\in I}\) a cover of \(U\) in the definable site \(X_{\mathcal{A}}(\mathbb{R}^{2})\). An _adapted_ cover of \(\mathcal{U}\) is a definable cover \(\mathcal{V}=\{V_{j}\}_{j\in J}\) of \(\mathbb{R}^{2}\) such that the following properties are satisfied:
1. \(\mathcal{V}\) is compatible with \(\mathcal{U}\), that is each element in \(\mathcal{U}\) is a finite union of elements in \(\mathcal{V}\).
2. Every finite intersection of elements in \(\mathcal{V}\) is either empty or a connected domain with only cuspidal singularities, and intersection of more than three elements is always empty.
3. There exist \(m\in\mathbb{N}\), \(r>0\), and \((k_{l},p_{l})\in\mathbb{N}^{2}\) for each \(l\in\{0,...,m\}\) such that \(\mathcal{V}=\{V_{j}\}_{j\in J}\) can be rearranged as follows \[\mathcal{V} = \{O_{l,p}\;:\;l\in\{0,1,...,m+1\}\;\text{and}\;p\in\{0,...,p_{l}\}\}\] \[\cup \{\widehat{O}_{l,p}\;:\;l\in\{0,1,...,m+1\}\;\text{and}\;p\in\{0,...,p_{l}-1\}\}\] \[\cup \{V_{l,k}\;:\;l\in\{0,1,...,m\}\;\text{and}\;k\in\{0,...,k_{l}+1\}\}\] \[\cup \{B(a_{l,k},r)\;:\;a_{k,l}\in\mathbb{R}^{2},\;l\in\{0,1,...,m\}\; \text{and}\;k\in\{0,...,k_{l}\}\}\]
4. For each \(l\in\{1,...,m\}\) and \(p\in\{0,...,p_{l}\}\) there is a unique \((L(l,p),R(l,p))\in\mathbb{N}^{2}\) such that the only possible non-Lipschitz singularities of \(O_{l,p}\) and \(\widehat{O}_{l,p}\) (only in the case of \(p<p_{l}\)) are \(a_{l-1,L(l,p)}\) and \(a_{l,R(l,p)}\). \(\bullet\) For each \(p\in\{0,...,p_{0}\}\) there is a unique \(R(0,p)\in\mathbb{N}\) such that the only possible non-Lipschitz singularity of \(O_{0,p}\) and \(\widehat{O}_{0,p}\) (only in the case of \(p<p_{0}\)) is \(a_{0,R(0,p)}\). \(\bullet\) For each \(p\in\{0,...,p_{m+1}\}\) there is a unique \(L(m+1,p)\in\mathbb{N}\) such that the only possible non-Lipschitz singularity of \(O_{m+1,p}\) and \(\widehat{O}_{m+1,p}\) (only in the case of \(p<p_{m+1}\)) is \(a_{m,L(m+1,p)}\).
5. The only non empty intersections of two open sets in \(\mathcal{V}\) are the open sets \(O_{l,p}\cap\widehat{O}_{l,p}\), \(\widehat{O}_{l,p}\cap O_{l,p+1}\), \(O_{l,p}\cap V_{l,R(l,p)}\), \(O_{l,p}\cap V_{l-1,L(l,p)}\), \(B(a_{l,k},r)\cap V_{l,k}\), \(B(a_{l,k},r)\cap V_{l,k+1}\), \(B(a_{l-1,L(l,p)},r)\cap\widehat{O}_{l,p}\), \(B(a_{l,R(l,p)},r)\cap\widehat{O}_{l,p}\), \(B(a_{l-1,L(l,p)},r)\cap O_{l,p}\), and \(B(a_{l,R(l,p)},r)\cap O_{l,p}\).
6. The only non empty intersections of three open sets in \(\mathcal{V}\) are the open sets \(O_{l,p}\cap V_{l,R(l,p)}\cap B(a_{l,R(l,p)},r)\), \(O_{l,p}\cap V_{l-1,L(l,p)}\cap B(a_{l-1,L(l,p)},r)\), \(\widehat{O}_{l,p}\cap V_{l,R(l,p)}\cap B(a_{l,R(l,p)},r)\),
Figure 9. Example of a bi-Lipschitz transformation to get a good direction for a closed hypersurface in \(\mathbb{R}^{n}\)
\(\widehat{O}_{l,p}\cap V_{l-1,L(l,p)}\cap B(a_{l-1,L(l,p)},r)\), \(O_{l,p}\cap\widehat{O}_{l,p}\cap B(a_{l,R(l,p)},r)\), \(O_{l,p}\cap\widehat{O}_{l,p}\cap B(a_{l-1,L(l,p)},r)\), \(O_{l,p+1}\cap\widehat{O}_{l,p}\cap B(a_{l,R(l,p+1)},r)\), and \(O_{l,p+1}\cap\widehat{O}_{l,p}\cap B(a_{l-1,L(l,p+1)},r)\).
This definition is motivated by the construction in Figure 10 and explained in detail in the proof of Proposition 7.3. These covers will be essential in the computation of the cohomology of the sheaves \(\mathcal{F}^{k}\) (see Theorem 7.5).
**Cech cohomology:** Recall that for a given sheaf \(\mathcal{F}\) on a topological space \(M\) and a covering \(\mathcal{U}=(U_{i})_{i\in I}\) with \(I\) an ordered set, we have the Cech complex \(\mathcal{C}^{\star}_{\mathcal{U}}(M,\mathcal{F})\) defined by
\[\mathcal{C}^{0}_{\mathcal{U}}(M,\mathcal{F})\overset{d_{0}}{\longrightarrow} \mathcal{C}^{1}_{\mathcal{U}}(M,\mathcal{F})\overset{d_{1}}{\longrightarrow} \mathcal{C}^{2}_{\mathcal{U}}(M,\mathcal{F})\longrightarrow\cdots\]
such that
\[\mathcal{C}^{m}_{\mathcal{U}}(M,\mathcal{F})=\bigoplus_{J=(i_{0}<i_{1}<...<i_{m})}\mathcal{F}(U_{J}),\quad\text{where}\ U_{J}=U_{i_{0}}\cap\cdots\cap U_{i_{m}},\]
and
\[(d_{m}\alpha)_{U_{J}}:=(d_{m}\alpha)_{J=\{i_{0}<...<i_{m}\}}=\sum_{j=0,...,m}( -1)^{j}(\alpha_{J\setminus i_{j}})_{|U_{J}}.\]
Clearly, if \(\mathcal{V}\) is a refinement of \(\mathcal{U}\), then there is a canonical morphism \(\mathcal{C}^{\star}_{\mathcal{U}}(M,\mathcal{F})\longrightarrow\mathcal{C}^{ \star}_{\mathcal{V}}(M,\mathcal{F})\). Thus, the Cech cohomology of degree \(j\) of \(M\) with respect to \(\mathcal{F}\) is defined to be the colimit
\[H^{j}(M,\mathcal{F})=\varinjlim_{\mathcal{U}}H^{j}(\mathcal{C}^{\star}_{\mathcal{U}}(M,\mathcal{F})),\]
where the colimit is taken over covers ordered by refinement.
It is well known that this cohomology coincides with the cohomology of the sheaf \(\mathcal{F}\) on paracompact spaces, and hence on definable sets. We prove in the following proposition that any cover in the site \(X_{\mathcal{A}}(\mathbb{R}^{2})\) admits an _adapted_ cover, and so we can use adapted covers to compute the cohomology of \(\mathcal{F}^{k}\).
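For instance, for a cover \(\mathcal{U}=\{U,V\}\) of \(O=U\cup V\) by two open sets, the Cech complex of the cover reduces to
\[0\longrightarrow\mathcal{F}(U)\oplus\mathcal{F}(V)\stackrel{d_{0}}{\longrightarrow}\mathcal{F}(U\cap V)\longrightarrow 0,\qquad d_{0}(\alpha_{1},\alpha_{2})=(\alpha_{2}-\alpha_{1})_{|U\cap V},\]
so \(H^{0}(\mathcal{C}^{\star}_{\mathcal{U}}(O,\mathcal{F}))=Ker(d_{0})=\mathcal{F}(U\cup V)\) when \(\mathcal{F}\) is a sheaf, \(H^{1}(\mathcal{C}^{\star}_{\mathcal{U}}(O,\mathcal{F}))=Coker(d_{0})\), and the higher groups vanish; this two-element situation is exactly the Mayer-Vietoris argument used in Remark 7.4 below.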
**Proposition 7.3**.: _Take \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\) and \(\mathcal{U}=(U_{i})_{i\in I}\subset X_{\mathcal{A}}(\mathbb{R}^{2})\) a cover of \(U\). Then there is an adapted cover \(\mathcal{V}\) of \(\mathcal{U}\)._
Proof.: Take \(\mathcal{U}=(U_{i})_{i\in I}\) a definable cover of \(U\), with \(I=\{1,...,m\}\). It clearly suffices to construct such a cover after composing with a definable bi-Lipschitz homeomorphism \(h:\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}\). Therefore, by Theorem 7.1, we can assume that \(\bigcup_{i}\partial(U_{i})\) is included in a finite union of graphs of definable Lipschitz functions \(\xi_{j}:\mathbb{R}\longrightarrow\mathbb{R}\). We are going to construct an adapted cover \(\mathcal{V}\) (see Figure 10).
Take
\[n=Max\{\sharp(\pi^{-1}(x)\cap(\bigcup_{j}\Gamma_{\xi_{j}}))\ :\ x\in\mathbb{R}\}<+\infty,\]
where \(\pi:\mathbb{R}^{2}\longrightarrow\mathbb{R}\) is the projection onto the first coordinate.
Consider \(\mathcal{C}=\{]-\infty,a_{0}[,\{a_{0}\},]a_{0},a_{1}[,...,]a_{m-1},a_{m}[,\{a_{m}\},]a_{m},+\infty[\}\) a cell decomposition of \(\mathbb{R}\) compatible with the collection of sets
\[A_{k}=\{x\in\mathbb{R}\ :\ \sharp(\pi^{-1}(x)\cap(\bigcup_{j}\Gamma_{\xi_{j}}))=k\} \ \text{for}\ k\in\{1,...,n\}.\]
For \(l\in\{0,...,m\}\), we have
\[\pi^{-1}(a_{l})\cap(\bigcup_{j}\Gamma_{\xi_{j}})=\{a_{l,0},a_{l,1},...,a_{l,k_{l}}\}.\]
We denote \(a_{-1}:=-\infty\) and \(a_{m+1}:=+\infty\). For \(l\in\{-1,0,...,m\}\), there exist Lipschitz definable functions
\[\phi_{l,0}<...<\phi_{l,p_{l}}:]a_{l},a_{l+1}[\rightarrow\mathbb{R}\]
such that \(\pi^{-1}\left(\pi(\bigcup_{i}\partial(U_{i}))\cap]a_{l},a_{l+1}[)\cap(\bigcup _{j}\Gamma_{\xi_{j}})=\bigcup_{p}\Gamma_{\phi_{l,p}}\). For each \(l\in\{0,1,...,m\}\) and \(p\in\{0,...,p_{l}\}\), there exist definable Lipschitz functions \(\phi_{l,p}^{-}<\phi_{l,p}^{+}:]a_{l},a_{l+1}[\mapsto\mathbb{R}\), such that we have
\[\phi_{l,0}^{-}<\phi_{l,0}<\phi_{l,0}^{+}<\phi_{l,1}^{-}<\phi_{l,1}<\phi_{l,1}^{+}<\cdots<\phi_{l,p_{l}}^{-}<\phi_{l,p_{l}}<\phi_{l,p_{l}}^{+},\]
\[\lim_{t\to a_{l}}\phi_{l,p}^{-}=\lim_{t\to a_{l}}\phi_{l,p}^{+}=\lim_{t \to a_{l}}\phi_{l,p},\]
and
\[\lim_{t\to a_{l+1}}\phi_{l,p}^{-}=\lim_{t\to a_{l+1}}\phi_{l,p}^{+}=\lim_{t \to a_{l+1}}\phi_{l,p}.\]
Denote by \(a_{l,-1}:=-\infty\) and \(a_{l,k_{l}+1}:=+\infty\). For each \(l\in\{0,...,m\}\) and \(k\in\{-1,...,k_{l}\}\), there exist Lipschitz functions (with respect to the direction \(\{(0,1)\}\))
\[\varphi_{l,k}^{-}<a_{l}<\varphi_{l,k}^{+}:]a_{l,k},a_{l,k+1}[\to\mathbb{R}\]
such that the graphs of these functions do not intersect the graphs of the functions \(\phi_{l,p}^{s}\) (for any \(l\), \(p\), and \(s\in\{0,-,+\}\) with \(\phi_{l,p}^{0}:=\phi_{l,p}\)), and
\[\lim_{t\to a_{l,k}}\varphi_{l,k}^{s}=a_{l}=\lim_{t\to a_{l,k+1}}\varphi_{l,k}^ {s}.\]
For each \((l,k)\) such that \(a_{l,k}\in U\), there exists \(r_{l,k}>0\) such that \(B(a_{l,k},r_{l,k})\subset U_{i}\) for all \(U_{i}\) that contain \(a_{l,k}\). Choose \(r<\min_{l,k}(r_{l,k})\) such that \(\partial B(a_{l,k},r)\) is transverse to all the graphs of the functions \(\phi_{l,p}^{s}\) and \(\varphi_{l,k}^{s}\) (here also \(\varphi_{l,k}^{0}:=a_{l,k}\)), with
\[\overline{B}(a_{l^{\prime},k^{\prime}},r)\cap\overline{B}(a_{l,k},r)=\emptyset \text{ if }(l,k)\neq(l^{\prime},k^{\prime}).\]
Consider the collection of open definable sets
\[\mathcal{V}=\{\Gamma(\varphi_{l,k}^{-},\varphi_{l,k}^{+}),\Gamma(\phi_{l,p}^{ -},\phi_{l,p}^{+}),\Gamma(\phi_{l,p},\phi_{l,p+1}),B(a_{l,k},r)\ \}_{l,k,p}.\]
Clearly, the collection \(\mathcal{V}\) is an adapted cover of \(\mathcal{U}\).
Figure 10 illustrates an example of an adapted cover \(\mathcal{V}\) following the notation used in the proof of Proposition 7.3.
**Remark 7.4**(1): The sheaf \({\mathcal{F}}^{k}:X_{\mathcal{A}}({\mathbb{R}}^{2})\longrightarrow{\mathbb{C}}-\)vector spaces is not acyclic for \(k>1\). In this case, we have an inclusion \(W^{k,2}({\mathbb{R}}^{2})\subset C^{0}({\mathbb{R}}^{2})\). Consider the punctured disk \(W=B(0,1)\setminus\{0\}=U\cup V\), with
\[U=\{(x,y)\in W:\;y>x\ \text{ or }\ y<-x\}\quad\text{and}\quad V=\{(x,y)\in W:\;y>-x\ \text{ or }\ y<x\}.\]
And \(U\cap V=O_{1}\sqcup O_{2}\) with \(\overline{O_{1}}\cap\overline{O_{2}}=\{0\}\), such that
\[O_{1}=\{(x,y)\in W:\;y>|x|\}\quad\text{and}\quad O_{2}=\{(x,y)\in W:\;y<-|x|\}.\]
If \(H^{1}(W,{\mathcal{F}}^{k})=0\), then by the long Mayer-Vietoris exact sequence, the sequence
\[0\to{\mathcal{F}}^{k}(W)\to W^{k,2}(U)\oplus W^{k,2}(V)\to W^{k,2}(O_{1})\oplus W^{k,2}(O_{2})\to 0\]
is exact. However, this is not possible because for \((f\equiv 1,g\equiv 0)\in W^{k,2}(O_{1})\oplus W^{k,2}(O_{2})\) there are no continuous functions \((u,v)\in W^{k,2}(U)\oplus W^{k,2}(V)\) such that \((u-v)\mid_{O_{1}}=1\) and \((u-v)\mid_{O_{2}}=0\). Hence \(H^{1}(W,{\mathcal{F}}^{k})\neq 0\).
(2) In Theorem 7.5 we will compute the cohomology of \({\mathcal{F}}^{k}\). The proof of Theorem 7.5 is based on the following observations: from the construction of _adapted_ covers, we can deduce that \(H^{j}(\cdot,{\mathcal{F}}^{k})=0\) for \(j\geqslant 2\). For the first cohomology group of \({\mathcal{F}}^{k}\), the only obstruction for \(H^{1}(U,{\mathcal{F}}^{k})\) to vanish is the existence of a punctured disk in \(U\). If we take \(U\) with no punctured disk singularity, then the local gluing of cocycles from \({\mathcal{C}}^{1}_{\mathcal{U}}(U,{\mathcal{F}}^{k})\) to cochains in \({\mathcal{C}}^{0}_{\mathcal{U}}(U,{\mathcal{F}}^{k})\) is summarized in the following simple example: take \(x_{0}\in\partial U\) and \(\gamma_{0},...,\gamma_{4}=\gamma_{0}:[0,a[\rightarrow\overline{U}\) (see Figure 11) with \(\gamma_{0}(0)=...=\gamma_{4}(0)=x_{0}\) and \(\gamma_{i}^{-}<\gamma_{i}^{+}:[0,a[\rightarrow{\mathbb{R}}^{2}\) (see Figure 12). Locally, we then consider two situations (in fact they are the only situations that will show up locally in the proof of Theorem 7.5):
Figure 10. The cover \({\mathcal{V}}\).
Figure 11. The curves \(\gamma_{i}\) around \(x_{0}\).
\(\bullet\) **Situation 1:** Assume that \(x_{0}\in U\). In this situation, for some \(0<r<r^{\prime}\), we assume that (see Figure 13)
\[U=\bigcup_{i}R(r^{\prime},\gamma_{i},\gamma_{i+1})\bigcup_{i}R(r^{\prime},\gamma _{i}^{-},\gamma_{i}^{+})\bigcup B(x_{0},r).\]
For each \(i\), we take functions \(f_{i,+}\in W^{k,2}(R(r^{\prime},\gamma_{i},\gamma_{i}^{+}))\), \(f_{-,i}\in W^{k,2}(R(r^{\prime},\gamma_{i}^{-},\gamma_{i}))\), \(g_{i}\in W^{k,2}(R(r,\gamma_{i},\gamma_{i+1}))\), and \(h_{i}\in W^{k,2}(B(r,\gamma_{i}^{-},\gamma_{i}^{+}))\) such that
\[\begin{array}{c}(f_{i,+})_{|R(r,\gamma_{i},\gamma_{i}^{+})}=(g_{i})_{|R(r,\gamma_{i},\gamma_{i}^{+})},\\ (f_{-,i})_{|R(r,\gamma_{i}^{-},\gamma_{i})}=(g_{i-1})_{|R(r,\gamma_{i}^{-},\gamma_{i})},\\ (h_{i})_{|R(r,\gamma_{i},\gamma_{i}^{+})}=(g_{i})_{|R(r,\gamma_{i},\gamma_{i}^{+})},\end{array}\]
and
\[(h_{i})_{|R(r,\gamma_{i}^{-},\gamma_{i})}=(g_{i-1})_{|R(r,\gamma_{i}^{-},\gamma_{i})}.\]
We want to glue these functions to functions in \(W^{k,2}(R(r^{\prime},\gamma_{i},\gamma_{i+1}))\), \(W^{k,2}(R(r^{\prime},\gamma_{i}^{-},\gamma_{i}^{+}))\), and \(W^{k,2}(B(x_{0},r))\). Take \((\phi^{\prime},\phi)\) a partition of unity associated to the covering \((C=B(x_{0},r^{\prime})\setminus B(x_{0},r),B(x_{0},\frac{r+r^{\prime}}{2}))\) (see Figure 13). Define \(u\in W^{k,2}(B(x_{0},r))\) by taking just the values of the \(g_{i}\)'s and \(h_{i}\)'s. On each \(R(r^{\prime},\gamma_{i}^{-},\gamma_{i}^{+})\), we choose the zero function. We take smooth compactly supported functions \(F_{i}:\mathbb{R}^{2}\rightarrow[0,1]\) such that \(F_{i}=1\) on a neighborhood of \(R(r^{\prime},\gamma_{i}^{-},\gamma_{i}^{+})\cap C\) and \(F_{i}=0\) on the other sets of type \(R(r^{\prime},\gamma_{j}^{-},\gamma_{j}^{+})\cap C\). So, in each \(W^{k,2}(R(r^{\prime},\gamma_{i},\gamma_{i+1}))\) we define \(v_{i}\) by
\[v_{i}:=\Big{(}\phi^{\prime}(F_{i}E_{R(r^{\prime},\gamma_{i},\gamma_{i}^{+})}( f_{i,+})+F_{i+1}E_{R(r^{\prime},\gamma_{i+1}^{-},\gamma_{i+1})}(f_{i+1,-}))+\phi(E_{B( x_{0},r)}(u))\Big{)}_{|R(r^{\prime},\gamma_{i},\gamma_{i+1})}\,.\]
Then, clearly, the functions \(u\), \(0\) and \(v_{i}\) glue the functions \(f_{i,+}\), \(f_{-,i}\), \(g_{i}\) and \(h_{i}\).
Figure 13. The covering of \(U\) in Situation 1.
\(\bullet\)**Situation 2:** Assume that \(x_{0}\notin U\). In this situation (with the same notation as in the first case), we assume that (see Figure 14)
\[U=R(r^{\prime},\gamma_{0},\gamma_{1})\bigcup R(r^{\prime},\gamma_{1}^{-},\gamma_ {1}^{+})\bigcup R(r^{\prime},\gamma_{1},\gamma_{2})\bigcup R(r^{\prime},\gamma_ {2}^{-},\gamma_{2}^{+})\bigcup R(r^{\prime},\gamma_{2},\gamma_{3}),\]
with given functions \(f_{i,+}\in W^{k,2}(R(r^{\prime},\gamma_{i},\gamma_{i}^{+}))\) and \(f_{-,i}\in W^{k,2}(R(r^{\prime},\gamma_{i}^{-},\gamma_{i}))\). To glue these functions, it is enough to take the functions
\[v_{0}:=0\in W^{k,2}(R(r^{\prime},\gamma_{0},\gamma_{1})),\]
\[u_{1}:=\Big{(}E_{R(r^{\prime},\gamma_{1}^{-},\gamma_{1})}(f_{-,1})\Big{)}_{|R( r^{\prime},\gamma_{1}^{-},\gamma_{1}^{+})}\in W^{k,2}(R(r^{\prime},\gamma_{1}^{-},\gamma_{1}^{+})),\]
\[v_{1}:=\Big{(}E_{R(r^{\prime},\gamma_{1}^{-},\gamma_{1}^{+})}(f_{1,+})+E_{R(r^ {\prime},\gamma_{1}^{-},\gamma_{1}^{+})}(u_{1})\Big{)}_{|R(r^{\prime},\gamma_ {1},\gamma_{2})}\in W^{k,2}(R(r^{\prime},\gamma_{1},\gamma_{2})),\]
\[u_{2}:=\Big{(}E_{R(r^{\prime},\gamma_{2}^{-},\gamma_{2})}(f_{-,2})+E_{R(r^{ \prime},\gamma_{1},\gamma_{2})}(v_{1})\Big{)}_{|R(r^{\prime},\gamma_{2}^{-}, \gamma_{2}^{+})}\in W^{k,2}(R(r^{\prime},\gamma_{2}^{-},\gamma_{2}^{+})),\]
\[v_{2}:=\Big{(}E_{R(r^{\prime},\gamma_{2},\gamma_{2}^{+})}(f_{2,+})+E_{R(r^{ \prime},\gamma_{2}^{-},\gamma_{2}^{+})}(u_{2})\Big{)}_{|R(r^{\prime},\gamma_ {2},\gamma_{3})}\in W^{k,2}(R(r^{\prime},\gamma_{2},\gamma_{3})).\]
To lighten the notation, we will write \(\mathcal{F}\) instead of \(\mathcal{F}^{k}\) and \(W^{2}\) instead of \(W^{k,2}\).
**Theorem 7.5**.: _Take \(U\in X_{\mathcal{A}}(\mathbb{R}^{2})\). Then for any \(j>1\) we have_
\[H^{j}(U,\mathcal{F})=0.\]
_And if \(U\) has no singularities of type \(C_{1}\), then for any \(j\in\mathbb{N}\)_
\[H^{j}(U,\mathcal{F})=\left\{\begin{array}{ll}\mathcal{F}(U)&\text{if }j=0\\ \{0\}&\text{if }j\geqslant 1.\end{array}\right.\]
Proof.: By the definition of the Cech cohomology, it is enough to compute the Cech cohomology on an adapted cover. So take \(\mathcal{V}\) an adapted cover of \(\{U\}\) as given by Proposition 7.3 and take \(\mathcal{W}\) the cover of \(U\) defined by
\[\mathcal{W}=\{O\in\mathcal{V}\;:\;O\subset U\}.\]
Then we have the Cech complex
\[\mathcal{C}^{0}_{\mathcal{W}}(U,\mathcal{F})\xrightarrow{d_{0}}\mathcal{C}^{ 1}_{\mathcal{W}}(U,\mathcal{F})\xrightarrow{d_{1}}\mathcal{C}^{2}_{\mathcal{W} }(U,\mathcal{F})\to 0.\]
Figure 14. The covering of \(U\) in Situation 2.
For \(j>2\), we have \(\mathcal{C}^{j}_{\mathcal{W}}(U,\mathcal{F})=0\), because the intersection of four elements in \(\mathcal{W}\) is always empty. Take \(\omega\in\mathcal{C}^{2}_{\mathcal{W}}(U,\mathcal{F})\). We can write \(\omega\) as follows
\[\omega=\sum_{W\in\mathcal{W}_{2}}\omega(W),\]
where for \(O\in\mathcal{W}_{2}\) we define
\[(\omega(W))_{O}=\left\{\begin{array}{ll}(\omega)_{W}&\mbox{if }O=W,\\ 0&\mbox{if }O\neq W.\end{array}\right.\]
To show that \(\omega=0\) in \(H^{2}(U,\mathcal{F})\), it is enough to find for each \(W\in\mathcal{W}_{2}\) an element \(\alpha(W)\in\mathcal{C}^{1}_{\mathcal{W}}(U,\mathcal{F})\) such that \(d(\alpha(W))=\omega(W)\). For each \(a_{l,k}\in\mathbb{R}^{2}\), we take a smooth function \(F_{l,k}\in C^{\infty}_{c}(\mathbb{R}^{2})\) such that \(F_{l,k}=1\) on \(B(a_{l,k},r)\) and \(F_{l,k}=0\) on each other ball \(B(a_{l^{\prime},k^{\prime}},r)\). Take \(W\in\mathcal{W}_{2}\). Then \(W=B(a_{l,k},r)\cap Y\), where \(Y\) is one of the cases in (5) of Proposition 7.3. For any \(O\in\mathcal{W}_{2}\) we define
\[(\alpha(W))_{O}=\left\{\begin{array}{ll}\left(\sum_{l,k}F_{l,k}E_{W}((\omega)_{W})\right)_{|Y}&\mbox{if }O=Y,\\ 0&\mbox{if }O\neq Y.\end{array}\right.\]
So clearly we have \(d(\alpha(W))=\omega(W)\), and so \(H^{2}(U,\mathcal{F})=0\).
Now assume that \(U\) has no punctured disk singularity, and let's show that \(H^{1}(U,\mathcal{F})=0\). Take \(\alpha\in\mathcal{C}^{1}_{\mathcal{W}}(U,\mathcal{F})\) such that \(d(\alpha)=0\), so we need to find \(u\in\mathcal{C}^{0}_{\mathcal{W}}(U,\mathcal{F})\) such that \(d(u)=\alpha\). For \(O\in\mathcal{W}\), we define \(u\in\mathcal{C}^{0}_{\mathcal{W}}(U,\mathcal{F})\) by induction on \(l\) and \(p\) (see (3) of Proposition 7.3):
\(\bullet\)\(O=O_{0,0}\)**:**: In this case we define \(u_{O}=0\in W^{2}(O)\).
\(\bullet\)\(O=\widehat{O}_{0,p}\)**:**: Assuming that we have constructed \(u_{O_{0,p}}\), we define \(u_{O}\in W^{2}(O)\) by
\[u_{O}=\left(E_{O_{0,p}}(u_{O_{0,p}})+E_{O_{0,p}\cap\widehat{O}_{0,p}}(\alpha_ {O_{0,p}\cap\widehat{O}_{0,p}})\right)_{|O}.\]
\(\bullet\)\(O=O_{0,p+1}\)**:**: Assuming that we have constructed \(u_{\widehat{O}_{0,p}}\), we define \(u_{O}\in W^{2}(O)\) by
\[u_{O}=\left(E_{\widehat{O}_{0,p}}(u_{\widehat{O}_{0,p}})+E_{\widehat{O}_{0,p} \cap O_{0,p+1}}(\alpha_{\widehat{O}_{0,p}\cap O_{0,p+1}})\right)_{|O}.\]
This completes the induction on \(p\) for \(l=0\). Now assume that, for a fixed \(l\), we have constructed \(u_{O_{l,p}}\) and \(u_{\widehat{O}_{l,p}}\) for each \(p\). If \(O=V_{l,k}\in\mathcal{W}\), then by (4) of Proposition 7.3 there is a unique \(p\) such that
\[O_{l,p}\cap V_{l,k}\neq\emptyset.\]
In this case we define \(u_{O}\) by
\[u_{O}=\left(E_{O_{l,p}}(u_{O_{l,p}})+E_{O_{l,p}\cap V_{l,k}}(\alpha_{O_{l,p} \cap V_{l,k}})\right)_{|O}.\]
To finish, we need to construct \(u\) on each \(O=O_{l+1,p}\) and \(O=\widehat{O}_{l+1,p}\) for each \(p\). We discuss the following cases:
\(\bullet\)\(O=O_{l+1,0}\)**:**: Assume that there is a unique \(k\) such that \(O\cap V_{l,k}\neq\emptyset\) (if not we define \(u_{O}\) to be \(0\)), so we define \(u_{O}\) by
\[u_{O}=\left(E_{V_{l,k}}(u_{V_{l,k}})+E_{O\cap V_{l,k}}(\alpha_{O\cap V_{l,k}}) \right)_{|O}.\]
\(\bullet\)\(O=\widehat{O}_{l+1,p}\)**:**: Assume that we have constructed \(u_{O_{l+1,p}}\). We define \(u_{O}\) by
\[u_{O}:=\left(E_{O_{l+1,p}}(u_{O_{l+1,p}})+E_{\widehat{O}_{l+1,p}\cap O_{l+1,p}}( \alpha_{\widehat{O}_{l+1,p}\cap O_{l+1,p}})\right)_{|O}.\]
\(\bullet\)\(O=O_{l+1,p+1}\)**:**: We break it into two cases:
* **Case(1):**: For any \(k\) we have \(V_{l+1,k}\cap O=\emptyset\). We define \(u_{O}\) by \[u_{O}:=\left(E_{\widehat{O}_{l+1,p}}(u_{\widehat{O}_{l+1,p}})+E_{\widehat{O}_{l+1,p}\cap O_{l+1,p+1}}(\alpha_{\widehat{O}_{l+1,p}\cap O_{l+1,p+1}})\right)_{|O}.\]
* **Case(2):**: There exists \(k\) such that \[V_{l+1,k}\cap O\neq\emptyset.\] In this case, \(B(a_{l+1,k},r)\in\mathcal{W}\) (because otherwise \(a_{l+1,k}\) would be a punctured disk singularity for \(U\)), and we choose \(u_{B(a_{l+1,k},r)}\) to take the values of \(\alpha\). Take \(r^{\prime}>r\) such that \(B(a_{l+1,k},r^{\prime})\cap B(a_{l+1,k+1},r^{\prime})=\emptyset\) and \((f,g)\) a partition of unity associated to the cover \((B(a_{l+1,k},r^{\prime}),\mathbb{R}^{2}\setminus B(a_{l+1,k},r))\). We also take \(h\) and \(h^{\prime}\in C^{\infty}(\mathbb{R}^{2})\) such that \[h_{|V_{l+1,k}\cap(\mathbb{R}^{2}\setminus B(a_{l+1,k},r))}=0,\,h_{|\widehat{O}_{l+1,p}\cap(\mathbb{R}^{2}\setminus B(a_{l+1,k},r))}=1,\] \[h^{\prime}_{|V_{l+1,k}\cap(\mathbb{R}^{2}\setminus B(a_{l+1,k},r))}=1,\,\text{and }h^{\prime}_{|\widehat{O}_{l+1,p}\cap(\mathbb{R}^{2}\setminus B(a_{l+1,k},r))}=0.\] So in this case we define \(u_{O}\) by \[u_{O}:=h\left(fE_{B(a_{l+1,k},r)}(u_{B(a_{l+1,k},r)})+gE_{\widehat{O}_{l+1,p}}(u_{\widehat{O}_{l+1,p}})\right)_{|O}+\] \[h^{\prime}\left(fE_{B(a_{l+1,k},r)}(u_{B(a_{l+1,k},r)})+gE_{V_{l+1,k}}(u_{V_{l+1,k}})\right)_{|O}.\] And in this case, for any \(O\) such that \(a_{l+1,k}\in\overline{O}\), we need to modify the definition of \(u_{O}\) by (where \(u^{\prime}_{O}\) denotes the old definition given in the previous stages of the induction)
\[u_{O}:=\left(fE_{B(a_{l+1,k},r)}(u_{B(a_{l+1,k},r)})+gE_{O}(u^{\prime}_{O}) \right)_{|O}.\] Finally, from the construction of \(u\), we have \(d(u)=\alpha\).
## 8. \((W^{1,2},W^{0,2})\)-double extension is a sufficient condition for the sheafification of \(W^{s,2}\)
In this section, we provide a categorical proof of Lemma 6.2, and we discuss the case where \(U\cap V\) is not Lipschitz. The only assumption we require here is that \(U\), \(V\), and \(U\cup V\) are Lipschitz. We use the fact that the sequences
\[0\to W^{0,2}(U\cup V)\to W^{0,2}(U)\oplus W^{0,2}(V)\to W^{0,2}(U\cap V)\to 0\]
and
\[0\to W^{1,2}(U\cup V)\to W^{1,2}(U)\oplus W^{1,2}(V)\to W^{1,2}(U\cap V)\to 0\]
are exact.
We assume that we have the following double extension:
**Assumption:** There exists a linear continuous extension operator
\[\mathcal{T}:W^{0,2}(U\cap V)\longrightarrow W^{0,2}(\mathbb{R}^{n}),\]
such that \(\mathcal{T}\) induces a linear continuous extension from \(W^{1,2}(U\cap V)\) to \(W^{1,2}(\mathbb{R}^{n})\).
**Remark 8.1**.: Note that this assumption holds if \(U\cap V\) is Lipschitz, due to the Stein extension Theorem.
Note that here \(W^{0,2}=L^{2}\), and we only need Sobolev spaces of regularity \(s\in(0,1)\). We will pass to the corresponding sequence for \(s\in(0,1)\) by interpolating between the last two sequences, which leads us to expect that it is exact as well. To achieve this, we will use the notion of an **exact category** (see [2]). An exact category is not abelian, but it has enough structure to allow us to do homological algebra.
Let \(\mathcal{C}\) be an additive category. A pair of composable morphisms
\[X\stackrel{f}{\longrightarrow}Y\stackrel{g}{\longrightarrow}Z\]
is said to be a **KC-pair** (Kernel-Cokernel pair) if \(f\) is the kernel of \(g\) and \(g\) is the cokernel of \(f\). Fix \(\mathcal{E}\) as a class of KC-pairs. An **admissible monomorphism** (with respect to \(\mathcal{E}\)) is a morphism \(f\) such that there is a morphism \(g\) with \((f,g)\in\mathcal{E}\). **Admissible epimorphisms** are defined dually.
**Definition 8.2**.: An **exact structure** is a pair \((\mathcal{C},\mathcal{E})\) where \(\mathcal{C}\) is an additive category and \(\mathcal{E}\) is a class of KC-pairs, closed under isomorphisms, and satisfying the following proprieties:
\((E_{0})\)**:**: For any \(X\in Obj(\mathcal{C})\), \(Id_{X}\) is an admissible monomorphism.
\((E_{0})^{c}\)**:**: The dual statement of \((E_{0})\).
\((E_{1})\)**:**: The composition of admissible monomorphisms is an admissible monomorphism.
\((E_{1})^{c}\)**:**: The dual statement of \((E_{1})\).
\((E_{2})\)**:**: If \(f:X\to Y\) is an admissible monomorphism and \(t:X\to T\) is a morphism, then the pushout of \(f\) along \(t\) exists, and the induced morphism \(s_{T}:T\to Y\sqcup_{X}T\) is an admissible monomorphism.
\((E_{2})^{c}\)**:**: The dual statement of \((E_{2})\).
If \((\mathcal{C},\mathcal{E})\) is an exact structure, a morphism \(f:X\longrightarrow Y\) is said to be \(\mathcal{E}\)**-strict** if it can be decomposed as \(f=m\circ e\),
where \(e:X\longrightarrow Z\) is an admissible epimorphism (with respect to \(\mathcal{E}\)), and \(m:Z\longrightarrow Y\) is an admissible monomorphism (with respect to \(\mathcal{E}\)).
Now fix \(\mathcal{C}\) an additive category. It is well known (see [2]) that the following class of KC-pairs
\[\mathcal{E}_{0}=\{(f,g)\ :\ X\stackrel{f}{\dashrightarrow}Y\stackrel{g}{\dashrightarrow}Z\ \text{ is split}\}\]
is an exact structure on \(\mathcal{C}\) (it is the smallest one on \(\mathcal{C}\)).
**Definition 8.3**.: Let \((\mathcal{C},\mathcal{E})\) be an exact structure, \(\mathcal{A}\) an **abelian** category, and \(F:\mathcal{C}\longrightarrow\mathcal{A}\) an **additive functor**. \(F\) is said to be **injective** if for any pair \(X\stackrel{f}{\dashrightarrow}Y\stackrel{g}{\dashrightarrow}Z\) in \(\mathcal{E}\), the sequence
\[0\longrightarrow F(X)\xrightarrow{F(f)}F(Y)\xrightarrow{F(g)}F(Z)\]
is exact in \(\mathcal{A}\).
The following result is well known in the theory of exact categories:
**Proposition 8.4**.: \(F\) _is injective if and only if it preserves the Kernel of every \(\mathcal{E}\)-strict morphism._
Proof.: See [2].
We will construct the category \(\mathcal{C}\) to fit our setting, and the category \(\mathcal{A}\) will simply be the category of \(\mathbb{C}\)-vector spaces. Let us recall the concept of interpolation:
**Definition 8.5**.: A good pair of Banach spaces (or **GB-pair**) is a pair \((X,Y)\) of Banach spaces such that \(X\subset Y\) with continuous inclusion, that is, there is \(C>0\) such that for any \(x\in X\) we have
\[\|x\|_{Y}\leqslant C\|x\|_{X}.\]
We recall the interpolation \(K\)-method. So fix \((X,Y)\) a GB-pair and \(t>0\), and define the \(K\)-norm on \(Y\) by
\[u\mapsto K(t,u)=\inf\{\|x\|_{X}+t\|y\|_{Y}\ :\ u=x+y,\ x\in X,\ y\in Y\}.\]
For \(s\in]0,1[\), we define the interpolation space \([X,Y]_{s}\) by
\[[X,Y]_{s}=\{u\in Y\ :\ \int_{0}^{+\infty}\left(t^{-s}K(t,u)\right)^{2}\frac{dt}{ t}<+\infty\}.\]
It is a Banach space with the norm
\[\|u\|_{[X,Y]_{s}}=\left(\int_{0}^{+\infty}\left(t^{-s}K(t,u)\right)^{2}\frac{ dt}{t}\right)^{\frac{1}{2}}.\]
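To make the definition concrete, here is a small numerical sketch (ours, not from the paper) of the \(K\)-method on a discretized analogue of the GB-pair \((W^{1,2},L^{2})\): vectors in \(\mathbb{R}^{n}\) with the Euclidean norm stand in for \(L^{2}\), and the norm \(\|v\|_{X}=(\|v\|_{2}^{2}+\|Dv\|_{2}^{2})^{1/2}\), with \(D\) a finite-difference matrix, stands in for \(W^{1,2}\); the grid of values of \(t\), the test vector, and the quadrature are arbitrary illustrative choices.

```python
# Numerical toy model (illustration only) of the K-method of interpolation.
import numpy as np
from scipy.optimize import minimize

n = 20
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward differences

def norm_Y(v):                                  # discrete L^2 norm
    return np.linalg.norm(v)

def norm_X(v):                                  # discrete W^{1,2} norm
    return np.sqrt(np.linalg.norm(v) ** 2 + np.linalg.norm(D @ v) ** 2)

def K(t, u):
    # K(t, u) = inf over decompositions u = x + y of ||x||_X + t ||y||_Y
    res = minimize(lambda x: norm_X(x) + t * norm_Y(u - x), x0=u / 2.0)
    return res.fun

def interpolation_norm(u, s, t_grid=np.logspace(-3, 3, 60)):
    # approximates ( int_0^infty (t^{-s} K(t,u))^2 dt/t )^{1/2}
    # by the trapezoid rule in the variable log t
    vals = np.array([t ** (-s) * K(t, u) for t in t_grid]) ** 2
    logs = np.log(t_grid)
    return np.sqrt(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(logs)))

u = np.sin(np.linspace(0.0, 3.0, n))
for s in (0.25, 0.5, 0.75):
    print(s, interpolation_norm(u, s))
```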
Recall the following theorem of interpolation spaces:
**Theorem 8.6**.: _Let \((X,Y)\) and \((X^{\prime},Y^{\prime})\) be two GB-pairs and_
\[L:Y\longrightarrow Y^{\prime}\]
_a continuous linear map such that \(L\) induces a continuous linear map from \(X\) to \(X^{\prime}\). Then, for any \(s\in]0,1[\), \(L\) induces a linear continuous map from \([X,Y]_{s}\) to \([X^{\prime},Y^{\prime}]_{s}\)._
Proof.: See [11].
Let \(\mathcal{A}\) be the category of \(\mathbb{C}\)-vector spaces and \(\mathcal{C}\) the category whose objects are GB-pairs. For \(((X,Y),(X^{\prime},Y^{\prime}))\in(Obj(\mathcal{C}))^{2}\), we define the morphisms as:
\[Hom_{\mathcal{C}}((X,Y),(X^{\prime},Y^{\prime}))=\{L\in\mathcal{L}(Y,Y^{\prime })\ :\ L\ |_{X}\in\mathcal{L}(X,X^{\prime})\ \}.\]
Clearly, \(\mathcal{C}\) is an additive category. We consider the exact structure \(\mathcal{E}_{0}\) on \(\mathcal{C}\) of splitting KC-pairs. For any \(s\in]0,1[\), we define the functor \(F_{s}:\mathcal{C}\longrightarrow\mathcal{A}\) as follows
\[F_{s}((X,Y))=[X,Y]_{s}\text{ and for }f\in Hom_{\mathcal{C}}((X,Y),(X^{ \prime},Y^{\prime}))\] \[F_{s}(f)=f\ |_{[X,Y]_{s}}.\]
By Theorem 8.6, \(F_{s}\) is a well-defined additive functor.
**Lemma 8.7**.: _For \((X,Y),(X^{\prime},Y^{\prime})\in Ob(\mathcal{C})\) and for \(s\in[0,1]\), there is a natural isomorphism_
\[[X\oplus X^{\prime},Y\oplus Y^{\prime}]_{s}\simeq[X,Y]_{s}\oplus[X^{\prime},Y^ {\prime}]_{s}.\]
Proof.: Take the projections
\[P:Y\oplus Y^{\prime}\longrightarrow Y\text{, and}\]
\[P^{\prime}:Y\oplus Y^{\prime}\longrightarrow Y^{\prime}.\]
Since \(P\ |_{X\oplus X^{\prime}}\in\mathcal{L}(X\oplus X^{\prime},X)\) and \(P^{\prime}\ |_{X\oplus X^{\prime}}\in\mathcal{L}(X\oplus X^{\prime},X^{\prime})\), by Theorem 8.6 this induces a continuous linear map
\[(P,P^{\prime}):[X\oplus X^{\prime},Y\oplus Y^{\prime}]_{s} \longrightarrow[X,Y]_{s}\oplus[X^{\prime},Y^{\prime}]_{s},\] \[(u)\mapsto(P(u),P^{\prime}(u)).\]
In the same way, applying Theorem 8.6 to the injections
\[I:Y\longrightarrow Y\oplus Y^{\prime}\text{ and }I^{\prime}:Y^{\prime} \longrightarrow Y\oplus Y^{\prime},\]
we get a continuous linear map
\[(I,I^{\prime}):[X,Y]_{s}\oplus[X^{\prime},Y^{\prime}]_{s} \longrightarrow[X\oplus X^{\prime},Y\oplus Y^{\prime}]_{s},\] \[(z,z^{\prime})\mapsto z\oplus z^{\prime}.\]
It is clear that \((I,I^{\prime})\circ(P,P^{\prime})=Id\) and \((P,P^{\prime})\circ(I,I^{\prime})=Id\).
**Lemma 8.8**.: _The functor \(F_{s}:\mathcal{C}\longrightarrow\mathcal{A}\) is injective with respect to the exact structure \((\mathcal{C},\mathcal{E}_{0})\)._
Proof.: By Proposition 8.4, it is enough to prove that \(F_{s}\) preserves the Kernel of every \(\mathcal{E}_{0}\)-strict morphism. Take \(f:(X,Y)\longrightarrow(X^{\prime},Y^{\prime})\) an \(\mathcal{E}_{0}\)-strict morphism. Then there exist an admissible epimorphism \(e:(X,Y)\longrightarrow(Z,W)\) and an admissible monomorphism \(m:(Z,W)\longrightarrow(X^{\prime},Y^{\prime})\) such that \(f=m\circ e\).
By **Remark 3.28** in [2], if \(k_{f}:K_{f}\longrightarrow(X,Y)\) is the Kernel of \(f\), then \((k_{f},e)\in\mathcal{E}_{0}\). Easy computation shows that the kernel of \(f\) is the morphism
\[\begin{array}{c}k_{f}:K_{f}=(X\cap Ker(f),Ker(f))\longrightarrow(X,Y).\\ u\longrightarrow k_{f}(u)=u.\end{array}\]
Here \(Ker(f)\) is given the norm of \(Y\), and \(X\cap Ker(f)\) is given the norm
\[\|u\|_{X\cap Ker(f)}=\max\{\|u\|_{X},\|u\|_{Ker(f)}\}.\]
By Lemma 3.8 in [2], there exists a morphism \(P:(X,Y)\longrightarrow K_{f}\) such that \(P\circ k_{f}=Id_{K_{f}}\), and this means that \((X\cap Ker(f),Ker(f))\) is a complemented sub-couple of \((X,Y)\). Hence, by Theorem 1 in Section 1.17.1 of [17], we have
\[[X\cap Ker(f),Ker(f)]_{s}=Ker(f)\cap[X,Y]_{s}=Ker(F_{s}(f)).\]
Now we have the KC-pair in the category \(\mathcal{C}\)
\[\begin{array}{c}(W^{1,2}(U\cup V),L^{2}(U\cup V))\rightarrow(W^{1,2}(U)\oplus W ^{1,2}(V),L^{2}(U)\oplus L^{2}(V))\rightarrow\\ (W^{1,2}(U\cap V),L^{2}(U\cap V)).\end{array}\]
By the assumption of the existence of a \((W^{1,2},W^{0,2})\)-double extension, this sequence splits, so it is in the structure \(\mathcal{E}_{0}\). Hence, by Lemma 8.8, applying the functor \(F_{1-s}\) (for any \(s\in]0,1[\)) yields an exact sequence. Therefore, by (3.4) we get the exact sequence
\[0\to W^{s,2}(U\cup V)\rightarrow[W^{1,2}(U)\oplus W^{1,2}(V),L^{2}(U)\oplus L^{2}(V)]_{1-s}\rightarrow[W^{1,2}(U\cap V),L^{2}(U\cap V)]_{1-s}.\]
By Lemma 8.7 and (3.4) we can write it the following way
\[0\to W^{s,2}(U\cup V)\to W^{s,2}(U)\oplus W^{s,2}(V)\rightarrow[W^{1,2}(U \cap V),L^{2}(U\cap V)]_{1-s}.\]
Hence, we have the exactness of the sequence
\[0\to W^{s,2}(U\cup V)\to W^{s,2}(U)\oplus W^{s,2}(V)\to W^{s,2}(U \cap V)\to 0\]
**Remark 8.9**.: The answer to the exactness of the sequence
\[0\to W^{s,2}(U\cup V)\to W^{s,2}(U)\oplus W^{s,2}(V)\to W^{s,2}(U \cap V)\to 0\]
is important. A positive answer would imply the possibility of sheafifying Sobolev spaces in the usual sense. Conversely, a negative answer would indicate that there exists no degree-independent extension operator from \(W^{i,2}(\Omega)\) to \(W^{i,2}(\mathbb{R}^{n})\) (for \(i\in\{[s],[s]+1\}\)) when \(\Omega\) is a cuspidal domain.
## 9. Further discussion
\(\bullet\): It may be helpful to construct a broader exact structure \(\mathcal{E}\) on the category \(\mathcal{C}\) of GB-pairs, such that the KC-pair
\[\begin{array}{c}(W^{1,2}(U\cup V),L^{2}(U\cup V))\to(W^{1,2}(U)\oplus W^{1,2}( V),L^{2}(U)\oplus L^{2}(V))\to\\ (W^{1,2}(U\cap V),L^{2}(U\cap V))\cdots(\star)\end{array}\]
is in \(\mathcal{E}\) (when \(U\), \(V\), and \(U\cup V\) are Lipschitz domains). For example, one can show that the maximal class of all KC-pairs forms an exact structure on \(\mathcal{C}\) (this is not automatic for a general additive category; see [2]). However, a challenge arises when enlarging the class \(\mathcal{E}\), as this also broadens the class of \(\mathcal{E}\)-strict morphisms. For instance, if we take \(\mathcal{E}\) to be the maximal class, a morphism \(f:(X,Y)\longrightarrow(X^{\prime},Y^{\prime})\) is \(\mathcal{E}\)-strict if and only if \(f(Y)\) is closed in \(Y^{\prime}\), \(f(X)\) is closed in \(X^{\prime}\), \(f\) is open onto \(f(Y)\), and \(f\mid_{X}\) is open onto \(f(X)\). Nonetheless, at present, no result establishes the compatibility of interpolation with the kernel of such morphisms. In [12] and [4], some sufficient conditions for morphisms to have a kernel that is compatible with interpolation are provided. However, connecting these conditions with our specific situation remains unclear. Hence, it might be possible to devise an exact structure on the category of GB-pairs \(\mathcal{C}\) that contains the KC-pair (\(\star\)) and simultaneously satisfies the conditions outlined in [12] and [4] for the class of strict morphisms.
\(\bullet\): Sheafification of Sobolev spaces in the usual sense in higher dimensions is much more challenging and remains unclear to us. Therefore, this requires a sheafification in the derived sense, as was achieved for negative regularity by G. Lebeau [9] (building upon the work of Guillermou-Schapira [3] and Parusinski [14]). The two-dimensional case can be summarized with the following idea: take \(U\) and \(V\) as two cuspidal domains in \(X_{\mathcal{A}}(\mathbb{R}^{2})\), such that \(U\cup V\) and \(U\cap V\) are also cuspidal (see Figure 15).
From the fact that we have enough space (from the metric point of view) outside \(U\) and \(V\), we can build two domains \(\widehat{U}\in X_{\mathcal{A}}(\mathbb{R}^{2})\) and \(\widehat{V}\in X_{\mathcal{A}}(\mathbb{R}^{2})\) with Lipschitz boundaries (see Figure 16) outside \(U\) and \(V\), such that \(\widehat{U}\cup\widehat{V}\) has a Lipschitz boundary.
Figure 15. \(U\) and \(V\).
This gives a commutative diagram
with the second line exact and exact columns. This implies the exactness of the first line.
However, this flexibility is no longer available in higher dimensions; let us mention the following example (due to Parusinski): take \(U\in X_{\mathcal{A}}(\mathbb{R}^{3})\) and \(V\in X_{\mathcal{A}}(\mathbb{R}^{3})\), both L-regular, such that \(U\cup V\) and \(U\cap V\) are also L-regular (see the figure below).
One can then see directly that there is not enough space outside to build domains with Lipschitz boundaries and apply Lemma 6.2. This point is not clear to us, and it is natural to ask the following question:
_Question 3_.: For \(k\in\mathbb{N}\), we define the presheaf
\[\mathcal{F}^{k}:X_{\mathcal{A}}(\mathbb{R}^{n})\rightarrow\mathbb{C}\text{- vector spaces}\]
such that for \(U\in X_{\mathcal{A}}(\mathbb{R}^{n})\), we have
\[\mathcal{F}^{k}(U)=\{f\in L^{2}(U)\ :\ f_{|K}\in W^{k,2}(K)\ \text{ for any open L-regular }K\subset U\}.\]
Is \(\mathcal{F}^{k}\) a sheaf on the site \(X_{\mathcal{A}}(\mathbb{R}^{n})\)?
|
2310.12265 | Coincidences between intervals in two partial orders on complex
reflection groups | In a finite real reflection group, the reflection length of each element is
equal to the codimension of its fixed space, and the two coincident functions
determine a partial order structure called the absolute order. In complex
reflection groups, the reflection length is no longer always equal to the
codimension of fixed space, and the two functions give rise to two different
partial orders on the group. We characterize the elements $w$ in the
combinatorial family $G(m, p, n)$ of complex reflection groups for which the
intervals below $w$ in these two posets coincide. We also explore the
relationship between this property and other natural properties of elements in
complex reflection groups; some general theory of posets arising from
subadditive functions on groups; and the particular case of subadditive
functions on the symmetric group. | Joel Brewster Lewis, Jiayuan Wang | 2023-10-18T19:04:01Z | http://arxiv.org/abs/2310.12265v2 | # Coincidences between intervals in two partial orders on complex reflection groups
###### Abstract
In a finite real reflection group, the reflection length of each element is equal to the codimension of its fixed space, and the two coincident functions determine a partial order structure called the absolute order. In complex reflection groups, the reflection length is no longer always equal to the codimension of fixed space, and the two functions give rise to two different partial orders on the group. We characterize the elements \(w\) in the combinatorial family \(G(m,p,n)\) of complex reflection groups for which the intervals below \(w\) in these two posets coincide.
## 1 Introduction
Suppose that \(G\) is a group and \(f:G\to\mathbb{R}_{\geq 0}\) is a subadditive function (that is, \(f(xy)\leq f(x)+f(y)\) for all \(x,y\in G\)) such that \(f(x)=0\) if and only if \(x\) is the identity in \(G\). It is easy to show (see Proposition 5) that such a function naturally gives rise to a partial order \(\leq_{f}\) on \(G\): one has
\[x\leq_{f}y\qquad\Longleftrightarrow\qquad f(x)+f(x^{-1}y)=f(y).\]
Two sources of functions \(f\) naturally present themselves, one algebraic and one geometric. First, if \(T\) is any generating set of \(G\), then the _\(T\)-length_
\[\ell_{T}:G \to\mathbb{N}\] \[x \mapsto\min\{k:\exists t_{1},\ldots,t_{k}\in T\text{ such that }x=t_{1} \cdots t_{k}\}\]
has the requisite properties. The resulting partial order \(\leq_{\ell_{T}}\) may equivalently be characterized by saying that \(x\leq_{\ell_{T}}y\) if \(x\) lies along a path of minimum length from the identity to \(y\) in the Cayley graph of \(G\) generated by \(T\), or that each minimum-length \(T\)-word \(x=t_{1}\cdots t_{\ell_{T}(x)}\) for \(x\) can be extended to a minimum-length \(T\)-word \(y=t_{1}\cdots t_{\ell_{T}(y)}\) for \(y\).
Second, if \(G\) acts on a finite-dimensional vector space \(V\) (i.e., by choice of a linear representation), with the element \(g\) having _fixed space_ \(\operatorname{fix}(g):=\ker(g-1)\), it is easy to see [8, Prop. 2.9] that the fixed space codimension \(\operatorname{codim\,fix}(g):=\dim(V)-\dim\operatorname{fix}(g)\) is subadditive. Moreover, we have \(\operatorname{codim\,fix}(g)=0\Longleftrightarrow g=\operatorname{id}\) precisely when the representation of \(G\) is faithful. In this case, we denote by \(\leq_{\operatorname{cdf}}\) the resulting partial order.
It is reasonable to inquire when the two subadditive functions, and hence the two partial orders, just defined (one by a set of generators, the other by a choice of representation) coincide. In [8,
Prop. 2.11], it was shown that a necessary condition for this equality is that \(G\) be a reflection group, with \(T=R\) its subset of reflections. In this case, it's easy to see that
\[\operatorname{codim\,fix}(g)\leq\ell_{R}(g)\text{ for all }g\in G. \tag{1}\]
It has been known for fifty years that if \(G\) is a _real_ reflection group (i.e., a finite Coxeter group), then in fact \(\operatorname{codim\,fix}(g)=\ell_{R}(g)\) for all \(g\) in \(G\)[3, Lem. 2]. The same holds true in the complex reflection group \(G(m,1,n)\) (the wreath product \((\mathbb{Z}/m\mathbb{Z})\wr\mathfrak{S}_{n}\)) [13, Rem. 2.3(1)], as well as various other natural groups [2, 8, 4], but it does _not_ hold in the other finite complex reflection groups [5].
Even when the whole partial orders \((G,\leq_{\operatorname{cdf}})\) and \((G,\leq_{\ell_{R}})\) do not coincide, certain important pieces of them may. In [9, Cor. 6.6], it was shown that when \(G\) is a well generated complex reflection group and \(c\) is a Coxeter element in \(G\), the two intervals \([\operatorname{id},c]_{\operatorname{cdf}}\) and \([\operatorname{id},c]_{\ell_{R}}\) are identical--they are the _noncrossing partition lattice_ of \(G\). This naturally raises the question [9, Q. 8.11] of which other elements have this property. The main result of this paper is to characterize these elements in the infinite family \(G(m,p,n)\) of irreducible complex reflection groups. We do this in terms of the combinatorial description of the groups \(G(m,p,n)\)--see Section 2 for details.
**Theorem 1**.: _An element \(w\in G(m,p,n)\) satisfies \([\operatorname{id},w]_{\ell_{R}}=[\operatorname{id},w]_{\operatorname{cdf}}\) if and only if the cycle weights of \(w\) that are not \(0\pmod{p}\) can be partitioned into pairs that sum to \(0\), and any subset of cycle weights that sums to \(0\pmod{p}\) is a disjoint union of some weights that are \(0\pmod{p}\) and some pairs of weights that sum to \(0\)._
The rest of the paper is devoted to the proof of Theorem 1. In Section 2, we introduce the necessary background definitions and notations. After this, we divide the proof of Theorem 1 into several stages. In Section 3, we prove that the given conditions are necessary by explicitly constructing elements that belong to one interval but not the other when the conditions are not met. To show that they are sufficient, we develop in Section 4 a detailed combinatorial description of the interval \([\operatorname{id},w]_{\operatorname{cdf}}\) for \(w\in G(m,p,n)\), allowing us to establish that \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\Longrightarrow u\in[\operatorname{id},w]_{\ell_{R}}\) when \(w\) satisfies the necessary conditions.
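The condition of Theorem 1 is a finite combinatorial condition on the multiset of cycle weights of \(w\), and it can be tested by brute force when the number of cycles is small. The following sketch is our own illustration (not part of the paper); the function names and the sample weights are ours.

```python
# Brute-force test of the condition of Theorem 1 on a list of cycle weights.
from itertools import combinations

def pairs_to_zero(weights, m):
    """Can this multiset be partitioned into pairs summing to 0 mod m?"""
    if not weights:
        return True
    if len(weights) % 2:
        return False
    first, rest = weights[0], weights[1:]
    return any((first + rest[i]) % m == 0
               and pairs_to_zero(rest[:i] + rest[i + 1:], m)
               for i in range(len(rest)))

def satisfies_theorem1(cycle_weights, m, p):
    """cycle_weights: one weight per cycle of w; assumes w lies in G(m, p, n)."""
    assert sum(cycle_weights) % p == 0
    def decomposes(subset):
        # weights that are 0 mod p may sit in singletons; the rest must pair up
        return pairs_to_zero([a % m for a in subset if a % p != 0], m)
    # the case subset = all cycle weights is exactly the first condition
    return all(decomposes([cycle_weights[i] for i in comb])
               for r in range(1, len(cycle_weights) + 1)
               for comb in combinations(range(len(cycle_weights)), r)
               if sum(cycle_weights[i] for i in comb) % p == 0)

print(satisfies_theorem1([1, 5, 3], m=6, p=3))      # True
print(satisfies_theorem1([1, 5, 2, 4], m=6, p=3))   # False: {1, 2} sums to 0 mod 3
```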
## 2 Background
### The infinite family of complex reflection groups
Say that a linear transformation \(t\) on a vector space \(V\) is a _reflection_ if its _fixed space_
\[\operatorname{fix}(t):=\{v\in V:t(v)=v\}=\ker(t-1)\]
has codimension \(1\), and that a finite subgroup \(W\subset\operatorname{GL}(V)\) is a _reflection group_ if \(W\) is generated by its subset of reflections. Complex reflection groups (i.e., those for which the field of scalars of \(V\) is \(\mathbb{C}\)) were classified by Shephard and Todd [11]: every complex reflection group is a direct product of irreducible groups, and the irreducible groups belong either to an infinite family \(G(m,p,n)\) for positive integers \(m\), \(p\), \(n\) with \(p\mid m\), or are one of \(34\) exceptional examples.1 This paper is concerned primarily with the groups of the infinite family; we describe them now.
By appropriate choice of basis, the group \(G(m,1,n)\) may be realized concretely as the group of \(n\times n\) monomial matrices whose nonzero entries are \(m\)th roots of \(1\). Algebraically, \(G(m,1,n)\) is the wreath product \((\mathbb{Z}/m\mathbb{Z})\wr\mathfrak{S}_{n}\) of the cyclic group of order \(m\) with the symmetric group \(\mathfrak{S}_{n}\). Thus, its elements may be represented by a pair \(w=[u;a]\) where \(u\in\mathfrak{S}_{n}\) is the _underlying permutation_ of \(w\) and \(a=(a_{1},\ldots,a_{n})\in(\mathbb{Z}/m\mathbb{Z})^{n}\) is its tuple of _weights_. A _cycle_ of \(w\) simply means a cycle of its underlying permutation; in particular, we consider fixed points to be cycles (of size \(1\)), and we denote by \(c(w)\) the number of cycles of \(w\). For any subset \(I\subseteq[n]\), we say that the weight of \(I\) (relative to \(w\)) is \(\sum_{i\in I}a_{i}\). This notion will be especially relevant when \(I\) is (the underlying set of) a cycle of \(w\) or a collection of cycles of \(w\). In particular, we denote by \(\operatorname{wt}(w)\) the weight \(\sum_{i=1}^{n}a_{i}\) of \(w\). For \(p\mid m\), the group \(G(m,p,n)\) is the normal subgroup of \(G(m,1,n)\) consisting of all those elements whose weight is a multiple of \(p\).
Let \(\zeta=\exp(2\pi i/m)\). If \(w=[u;a]\) is an element of \(G(m,p,n)\) and \((x_{1}\cdots x_{k})\) is a weight-0 cycle of \(w\), the vector in \(\mathbb{C}^{n}\) whose \(x_{i}\)th entry is \(\zeta^{a_{x_{1}}+\ldots+a_{x_{i-1}}}\) for \(i=1,\ldots,k\) and whose other entries are \(0\) is easily seen to be fixed by the action of \(w\). In fact, the collection of such vectors (taken over all weight-0 cycles of \(w\)) span \(\operatorname{fix}(w)\), and consequently \(\operatorname{codim\,fix}(w)=n-c_{0}(w)\), where \(c_{0}(w)\) represents the number of weight-0 cycles of \(w\). Considering the case that \(\operatorname{codim\,fix}(w)=1\), we see that there are two flavors of reflection in \(G(m,p,n)\): first, for any \(i\neq j\) in \(\{1,\ldots,n\}\) and any \(a\in\mathbb{Z}/m\mathbb{Z}\), the element
\[[(i\ j);(0,\ldots,0,a,0,\ldots,0,-a,0,\ldots,0)]\]
with \(a_{i}=a\) and \(a_{j}=-a\) and \(a_{k}=0\) if \(k\neq i,j\) is a _transposition-like reflection_. Second, if \(p<m\), then for any \(k\) in \(\{0,1,\ldots,m/p-1\}\), the element
\[[\operatorname{id};(0,\ldots,0,kp,0,\ldots,0)]\]
is a _diagonal reflection_ (where the nonzero weight may occur in any of the \(n\) positions).
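As a concrete companion to this combinatorial description, here is a small computational sketch (ours, for illustration only): an element of \(G(m,1,n)\) is stored as a pair consisting of a permutation of \(\{0,\dots,n-1\}\) in one-line notation and a list of weights, and the identity \(\operatorname{codim\,fix}(w)=n-c_{0}(w)\) is evaluated directly from the cycle weights.

```python
# Minimal model of elements of G(m, p, n) as (perm, weights) pairs.
def cycles(perm):
    """Cycles of a permutation of {0, ..., n-1} given in one-line notation."""
    seen, result = set(), []
    for start in range(len(perm)):
        if start not in seen:
            cyc, i = [], start
            while i not in seen:
                seen.add(i)
                cyc.append(i)
                i = perm[i]
            result.append(cyc)
    return result

def cycle_weights(perm, weights, m):
    return [sum(weights[i] for i in cyc) % m for cyc in cycles(perm)]

def codim_fix(perm, weights, m):
    # codim fix(w) = n - (number of weight-0 cycles of w)
    return len(perm) - cycle_weights(perm, weights, m).count(0)

def lies_in_G(weights, p):
    # w belongs to G(m, p, n) exactly when its total weight is a multiple of p
    return sum(weights) % p == 0

m, p = 6, 2
# a transposition-like reflection [(1 2); (1, -1, 0)] and a diagonal
# reflection [id; (0, 2, 0)], both of fixed space codimension 1
print(codim_fix([1, 0, 2], [1, -1, 0], m), lies_in_G([1, -1, 0], p))   # 1 True
print(codim_fix([0, 1, 2], [0, 2, 0], m), lies_in_G([0, 2, 0], p))     # 1 True
```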
Any real reflection group may be complexified by extension of scalars, yielding a complex reflection group. In particular, the four infinite families of real reflection groups are all realized inside the infinite family \(G(m,p,n)\):
* the symmetric group \(\mathfrak{S}_{n}\) is \(G(1,1,n)\) (type A);
* the hyperoctahedral group of signed permutations of degree \(n\) is \(G(2,1,n)\) (type B/C);
* its normal subgroup of even-signed permutations is \(G(2,2,n)\) (type D); and
* the dihedral group of order \(2\times m\) is \(G(m,m,2)\) (type I).
### Cycle partitions and reflection length
In any reflection group \(W\) with reflections \(R\), the _reflection length_\(\ell_{R}(w)\) of an element \(w\) is defined to be
\[\ell_{R}(w)=\min\{k:\exists t_{1},\ldots,t_{k}\in R\text{ s.t. }w=t_{1}\cdots t_{k}\}.\]
As mentioned in the introduction, when \(W\) is a real reflection group or the group \(G(m,1,n)\), we have \(\ell_{R}(w)=\operatorname{codim\,fix}(w)\) for all \(w\in W\). For the other groups \(G(m,p,n)\) in the combinatorial family, a formula for reflection length was given by Shi. In order to state it, we need some additional terminology.
Given a finite set \(S\), a _(set) partition_ of \(S\) is a collection of disjoint nonempty sets whose union is \(S\). The elements of the partition are called its _parts_. We use the following notation for set
partitions: \([1\ 3\ |\ 2\ |\ 4]\) represents the set partition whose three parts are \(\{1,3\}\), \(\{2\}\), and \(\{4\}\); the same set partition could be written many different ways, e.g., as \([2\ |\ 3\ 1\ |\ 4]\).
We will frequently deal with set partitions \(\Pi\) of the set of cycles of an element \(w\) of \(G(m,p,n)\). Let \(w\in G(m,p,n)\) with cycles \(C_{1},\ldots,C_{k}\). We say that a set partition \(\Pi\) on \(C_{1},\ldots,C_{k}\) is a _null cycle partition2_ if, for every part in \(\Pi\), the weights of its cycles sum to \(0\pmod{p}\). For every null cycle partition \(\Pi\) of \(w\), the _value_\(v(\Pi)\) is defined to be
Footnote 2: In [10], these partitions of the cycles were called simply βcycle partitionsβ.
\[v(\Pi):=|\Pi|+v_{m}(\Pi),\]
where \(|\Pi|\) denotes the number of parts of \(\Pi\) and \(v_{m}(\Pi)\) denotes the number of parts of \(\Pi\) whose cycle weights sum to \(0\) (in \(\mathbb{Z}/m\mathbb{Z}\)).
\[v_{\max}(w):=\max\{v(\Pi):\Pi\text{ is a null cycle partition for }w\}\]
be the maximum value of any null cycle partition of \(w\); we say that the cycle partitions that realize \(v_{\max}(w)\) are its _maximum (null) cycle partitions_. Then we have the following formula for reflection length.
**Theorem 2** (Shi [13, Thm. 4.4]).: _For \(w\in G(m,p,n)\), its reflection length is_
\[\ell_{R}(w)=n+c(w)-v_{\max}(w).\]
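Theorem 2 can be evaluated by brute force when the number of cycles is small, by enumerating all set partitions of the cycle weights, keeping the null ones, and maximizing \(v(\Pi)\). The sketch below is our own illustration (not part of the paper); for instance, in \(G(6,3,4)\), cycle weights \(1\) and \(2\) give \(\ell_{R}(w)=5\) while \(\operatorname{codim\,fix}(w)=4\), whereas cycle weights \(1\) and \(5\) give \(\ell_{R}(w)=\operatorname{codim\,fix}(w)=4\).

```python
# Brute-force evaluation of Shi's reflection length formula (Theorem 2).
def set_partitions(items):
    """All set partitions of a list, as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):          # put `first` into an existing block
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition              # or into a new block

def reflection_length(cycle_wts, n, m, p):
    """ell_R(w) = n + c(w) - v_max(w), maximizing over null cycle partitions."""
    v_max = None
    for partition in set_partitions(list(cycle_wts)):
        if all(sum(block) % p == 0 for block in partition):              # null
            value = (len(partition)
                     + sum(1 for block in partition if sum(block) % m == 0))
            v_max = value if v_max is None else max(v_max, value)
    return n + len(cycle_wts) - v_max

print(reflection_length([1, 2], n=4, m=6, p=3))   # 5, while codim fix(w) = 4
print(reflection_length([1, 5], n=4, m=6, p=3))   # 4 = codim fix(w)
```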
By combining Theorem 2 with the formula for fixed space codimension, Shi was able to characterize the elements \(w\) in \(G(m,p,n)\) that satisfy \(\operatorname{codim fix}(w)=\ell_{R}(w)\).
**Proposition 3** ([13, Prop. 5.3 (2)]).: _Let \(w\in G(m,p,n)\). Then \(\ell_{R}(w)=\operatorname{codim fix}(w)\) if and only if \(w\) has a null cycle partition \(\Pi\) in which the size \(|B|\) of each part \(B\) is at most \(2\), and which further satisfies the following conditions: if \(|B|=1\) then the cycle in \(B\) has weight \(0\pmod{p}\); if \(|B|=2\), the cycles in \(B\) have nonzero weights that sum to \(0\). Moreover, in this case, the given cycle partition is maximum._
The following rephrasing is perhaps more congenial to work with.
**Corollary 4**.: _An element \(w\in G(m,p,n)\) satisfies \(\ell_{R}(w)=\operatorname{codim fix}(w)\) if and only if the multiset of cycle weights of \(w\) that are not \(0\pmod{p}\) can be partitioned into pairs that sum to \(0\)._
### Posets from subadditive functions
We end this section by providing the proof of the result (mentioned in the introduction) that the subadditive functions we consider really do determine a poset structure. As observed in [5, Fn. 1], the proof in general is essentially the same as the proof given in [2, Prop. 3] that \(\leq_{\operatorname{cdf}}\) is a partial order on the orthogonal group.
**Proposition 5**.: _Let \(G\) be any group and suppose that \(f:G\to\mathbb{R}_{\geq 0}\) is a subadditive function (i.e., \(f(xy)\leq f(x)+f(y)\) for all \(x,y\in G\)) such that \(f(x)=0\) if and only if \(x\) is the identity in \(G\). Define a relation \(\leq_{f}\) on \(G\) by_
\[x\leq_{f}y\qquad\Longleftrightarrow\qquad f(x)+f(x^{-1}y)=f(y).\]
_Then \(\leq_{f}\) is a partial order on \(G\)._
Proof.: We have three properties to check.
Since \(f(\operatorname{id})=0\), for any \(x\in G\) we have \(f(x)=f(x)+f(\operatorname{id})=f(x)+f(x^{-1}x)\), and so \(x\leq_{f}x\).
Suppose \(x\leq_{f}y\) and \(y\leq_{f}x\). Since \(f\) takes only nonnegative values, we have
\[f(x)\leq f(x)+f(x^{-1}y)=f(y)\leq f(y)+f(y^{-1}x)=f(x).\]
This forces \(f(x^{-1}y)=0\); by the hypothesis on \(f\), we have \(x^{-1}y=\operatorname{id}\), so \(x=y\).
Finally, suppose \(x\leq_{f}y\) and \(y\leq_{f}z\). By subadditivity of \(f\) and the definition of \(\leq_{f}\), we have
\[f(z) \leq f(x)+f(x^{-1}z)\] \[\leq f(x)+f(x^{-1}y)+f(y^{-1}z)\] \[=f(y)+f(y^{-1}z)\] \[=f(z).\]
This forces \(f(z)=f(x)+f(x^{-1}z)\), so \(x\leq_{f}z\).
**Remark 6**.: When \(f=\ell_{T}\) is the length function for a generated group \(G\), the poset \((G,\leq_{\ell_{T}})\) is automatically graded by \(\ell_{T}\), i.e., each cover relation is between a pair of elements of \(T\)-lengths \(k\) and \(k+1\) for some \(k\). Nothing similar can be said for more general choices of the function \(f\). We mention two examples involving fixed space codimension:
* In the group \(G(4,2,2)\), the element \(w:=[\operatorname{id};(1,1)]=\begin{bmatrix}i&\\ &i\end{bmatrix}\) has \(\operatorname{codim}\operatorname{fix}(w)=2\) but the only element \(x\) with \(x<_{\operatorname{cdf}}w\) is \(x=\operatorname{id}\).
* In the group \(G(3,3,6)\), the element \(w=[\operatorname{id};(1,1,1,2,2,2)]\) has \(\operatorname{codim}\operatorname{fix}(w)=\ell_{R}(w)=6\). There are maximal chains of length \(6\) from \(\operatorname{id}\) to \(w\) in the fixed space codimension order (for example, \[\operatorname{id}<_{\operatorname{cdf}}[(1\ 4);\mathbf{0}]<_{ \operatorname{cdf}}[\operatorname{id};(1,0,0,2,0,0)]<_{\operatorname{cdf}}[(2 \ 5);(1,0,0,2,0,0)]<_{\operatorname{cdf}}\\ [\operatorname{id};(1,1,0,2,2,0)]<_{\operatorname{cdf}}[(3\ 6);(1,1,0,2,2,0)]<_{ \operatorname{cdf}}w\] is one such chain), but there is also the maximal chain \[\operatorname{id}<_{\operatorname{cdf}}[\operatorname{id};(1,1,1,0,0,0)]<_{ \operatorname{cdf}}w,\] of length \(2\).
Both examples involve elements that cover \(\operatorname{id}\) in the fixed space codimension poset that are not reflections. Indeed, Foster-Greenwood showed [5, Prop. 2.4] that whenever a complex reflection group \(W\) satisfies \(\ell_{R}(w)\neq\operatorname{codim}\operatorname{fix}(w)\) for some \(w\in W\), there exists such a non-reflection atom in the \(\operatorname{cdf}\)-poset. The second example further illustrates that \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)\) is not sufficient to guarantee \([\operatorname{id},w]_{\ell_{R}}=[\operatorname{id},w]_{\operatorname{cdf}}\).
We now move to the proof of the main theorem.
## 3 Necessity
In this section, we prove that the given conditions are necessary, i.e., that if the cycle weights of \(w\) cannot be partitioned into pairs that sum to \(0\) and singletons that are \(0\pmod{p}\), or if they can be so partitioned but there exists a subset of the cycle weights of total weight \(0\pmod{p}\) that cannot be similarly partitioned, then \([\operatorname{id},w]_{\ell_{R}}\neq[\operatorname{id},w]_{\operatorname{cdf}}\). It will be more convenient at first to phrase the first possibility in terms of reflection length, allowing us to state a uniform result for all complex reflection groups.
**Proposition 7**.: _Let \(W\) be any complex reflection group and let \(w\) be an element of \(W\). If \(\ell_{R}(w)>\operatorname{codim\,fix}(w)\), then there exists a reflection that belongs to \([\operatorname{id},w]_{\ell_{R}}\) but not to \([\operatorname{id},w]_{\operatorname{cdf}}\)._
Proof.: We first establish the result for irreducible groups by a case-based approach; then at the end we show that it extends to reducible groups.
Consider \(w\in G(m,p,n)\) with \(\ell_{R}(w)>\operatorname{codim\,fix}(w)\). By Proposition 3, we equivalently have that in any maximum null cycle partition of \(w\), there is either a part with at least three cycles, or there is a part with at least two cycles and nonzero total weight. Fix such a maximum null cycle partition \(\Pi\), let \(B\) be the part promised by the last sentence, and let \(C_{1}=(a\cdots)\) and \(C_{2}=(b\cdots)\) be two of the cycles in \(B\), of respective nonzero weights \(\alpha\) and \(\beta\). Choose a reflection \(t:=[(a\ b);(0,\ldots,0)]\in G(m,p,n)\) that transposes an element from one of these cycles with an element from the other. Since \(t\) is a reflection, \(\ell_{R}(t)=\operatorname{codim\,fix}(t)=1\). Since the entries transposed by \(t\) are in different cycles of \(w\), multiplying \(w\) by \(t=t^{-1}\) merges these two cycles into a single cycle, necessarily of weight \(\alpha+\beta\). Let us consider \(\operatorname{codim\,fix}(tw)\) and \(\ell_{R}(tw)\).
If \(|B|=2\), then by assumption \(\alpha+\beta\neq 0\). If instead \(|B|>2\), since \(B\) is a part of a maximum null cycle partition, no subset of \(B\) can have weights that sum to \(0\) (or else we could split this subset into its own part, increasing the value of the partition), so also in this case \(\alpha+\beta\neq 0\). Consequently in both cases \(tw\) has the same number of weight-\(0\) cycles as \(w\). It follows that \(\operatorname{codim\,fix}(tw)=\operatorname{codim\,fix}(w)\). Thus \(t\not\in[\operatorname{id},w]_{\operatorname{cdf}}\).
Let \(\Pi^{\prime}\) be the partition of the cycles of \(tw\) that we get from \(\Pi\) by deleting the two cycles \(C_{1}\) and \(C_{2}\) from \(B\) and replacing them with the merged cycle \(tC_{1}C_{2}\), otherwise leaving the partition the same. Clearly \(v(\Pi^{\prime})=v(\Pi)\). Thus \(\ell_{R}(tw)\leq n+c(tw)-v(\Pi^{\prime})=n+c(w)-1-v(\Pi)=\ell_{R}(w)-1\). On the other hand, by subadditivity, \(1+\ell_{R}(tw)=\ell_{R}(t)+\ell_{R}(tw)\geq\ell_{R}(w)\), so in fact we have equality, and \(t\in[\operatorname{id},w]_{\ell_{R}}\). This completes the proof in the case of the infinite family \(G(m,p,n)\).
We next consider the exceptional groups. Here we proceed by a brute-force computer calculation. For each exceptional group \(W\), we performed for each conjugacy class representative \(w\) the following computation: if \(\ell_{R}(w)>\operatorname{codim\,fix}(w)\), we checked for each reflection \(t\) whether \(t\leq_{\ell_{R}}w\) and \(t\not\leq_{\operatorname{cdf}}w\). In all cases, the result of the calculation was to verify the existence of such a reflection. The entire computation took a few minutes runtime on CoCalc, using SageMath and its interface with GAP and the CHEVIE package [12, 6, 7].
Finally, we extend the result to all (not necessarily irreducible) complex reflection groups. Let \(W=W_{1}\times\cdots\times W_{k}\) be the decomposition of \(W\) into irreducibles (an internal direct product) and \(w=w_{1}\cdots w_{k}\) the corresponding decomposition of \(w\). Reflection length and fixed space codimension are both additive over direct products, so if \(\ell_{R}(w)>\operatorname{codim\,fix}(w)\), then some \(w_{i}\) must satisfy \(\ell_{R}(w_{i})>\operatorname{codim\,fix}(w_{i})\). Since \(W_{i}\) is irreducible, it falls into one of the cases above, and so there exists a reflection \(t\) in \(W_{i}\) that satisfies \(t\in[\operatorname{id},w_{i}]_{\ell_{R}}\) and \(t\not\in[\operatorname{id},w_{i}]_{\operatorname{cdf}}\). The set of reflections of \(W\) is precisely the union of the sets of reflections of the \(W_{i}\), so \(t\) is a reflection in \(W\). Using again that reflection length and fixed space codimension are additive over direct products, it follows immediately that \(t\leq_{\ell_{R}}w\) but \(t\not\leq_{\operatorname{cdf}}w\). This completes the proof.
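For illustration, the following self-contained sketch (ours; the verification reported above for the exceptional groups went through the GAP/CHEVIE interface, which we do not reproduce here) runs the same kind of check on the small group \(G(4,2,2)\) given by explicit matrices: reflection length is computed by breadth-first search over the Cayley graph generated by the reflections, fixed space codimension as the rank of \(w-1\), and for each \(w\) with \(\ell_{R}(w)>\operatorname{codim\,fix}(w)\) we look for a reflection \(t\) with \(t\leq_{\ell_{R}}w\) and \(t\not\leq_{\operatorname{cdf}}w\).

```python
# Verify the claim of Proposition 7 on G(4,2,2), given by explicit matrices.
import numpy as np

I2 = np.eye(2, dtype=complex)

def key(g):
    # hashable key; entries of all group elements are Gaussian integers
    return tuple(np.round(g, 6).flatten().tolist())

# the six reflections of G(4,2,2)
refls = []
for a in range(4):                                   # transposition-like
    t = np.zeros((2, 2), dtype=complex)
    t[0, 1], t[1, 0] = 1j ** a, 1j ** (-a)
    refls.append(t)
refls += [np.diag([-1, 1]).astype(complex),          # diagonal reflections
          np.diag([1, -1]).astype(complex)]

# reflection length by breadth-first search over the Cayley graph
elems, length, frontier = {key(I2): I2}, {key(I2): 0}, [I2]
while frontier:
    new = []
    for g in frontier:
        for t in refls:
            h = t @ g
            if key(h) not in length:
                elems[key(h)], length[key(h)] = h, length[key(g)] + 1
                new.append(h)
    frontier = new

def codim_fix(g):
    return np.linalg.matrix_rank(g - I2)

# the only such w in G(4,2,2) are diag(i, i) and diag(-i, -i)
for wk, w in elems.items():
    if length[wk] > codim_fix(w):
        witness = any(
            length[key(np.linalg.inv(t) @ w)] == length[wk] - 1 and
            codim_fix(t) + codim_fix(np.linalg.inv(t) @ w) > codim_fix(w)
            for t in refls)
        print(np.round(w, 3).tolist(), "witness reflection found:", witness)
```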
The following observation has appeared many times in the literature (e.g., as [5, Lem. 2.4]); we include its proof for completeness.
**Proposition 8**.: _Let \(W\) be any reflection group and \(w\in W\). If \(\ell_{R}(w)=\operatorname{codim\,fix}(w)\), then \([\operatorname{id},w]_{\ell_{R}}\subseteq[\operatorname{id},w]_{\operatorname {cdf}}\)._
Proof.: For any \(u\in[\operatorname{id},w]_{\ell_{R}}\), we have by (1) and the definition of the reflection length order that \(\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w)\leq\ell_{R}(u)+\ell_{R}(u^{-1}w)=\ell_{R}(w)\). By the subadditivity of \(\operatorname{codim\,fix}\), we have \(\operatorname{codim\,fix}(w)\leq\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w)\). Since \(\ell_{R}(w)=\operatorname{codim\,fix}(w)\), we have \(\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w)=\operatorname{codim\,fix}(w)\), i.e., \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\).
We now shift our discussion to the particular case of the combinatorial groups \(G(m,p,n)\), and show the necessity of the second condition in Theorem 1.
**Proposition 9**.: _Let \(W=G(m,p,n)\) and \(w\in W\). Suppose that \(\ell_{R}(w)=\operatorname{codim\,fix}(w)\) and that there exists a subset of the cycles of \(w\) whose weight is \(0\pmod{p}\) and which cannot be partitioned into pairs of cycles whose weights sum to \(0\) and singleton sets containing a cycle of weight \(0\pmod{p}\). Then there is an element of \([\operatorname{id},w]_{\operatorname{cdf}}\) that does not belong to \([\operatorname{id},w]_{\ell_{R}}\)._
Proof.: Fix \(w\in G(m,p,n)\) with cycles \(C_{1},\dots,C_{k}\). Let \(S=\{C_{i_{1}},\dots,C_{i_{s}}\}\) be a subset of cycles of \(w\) whose weights sum to \(0\pmod{p}\), but which cannot be partitioned into pairs of cycles whose weights sum to \(0\) and singleton sets containing a cycle of weight \(0\pmod{p}\). Since any cycles of weight \(0\pmod{p}\) can be removed from \(S\) while preserving this property, it suffices to assume that \(S\) does not contain any such cycle of \(w\).
Let \(u=w|_{S}\) be the element of \(G(m,1,n)\) that agrees with \(w\) on \(\operatorname{Supp}(S)\) and acts as the identity (with weight \(0\)) on \(\{1,\dots,n\}\smallsetminus\operatorname{Supp}(S)\). Since the sum of the weights of the cycles in \(S\) is \(0\pmod{p}\), in fact \(u\in G(m,p,n)\). We now show that \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\).
Since \(S\) contains no cycles of weight \(0\pmod{p}\), the only cycles of weight \(0\) in \(u\) are the fixed points outside \(\operatorname{Supp}(S)\), and so \(c_{0}(u)=n-|\operatorname{Supp}(S)|\). On the other hand, \(u^{-1}w\) contains a fixed point (of weight \(0\)) for each element of \(\operatorname{Supp}(S)\) and also all the cycles of \(w\) not in \(S\); in particular, it contains all weight-\(0\) cycles of \(w\). Thus \(c_{0}(u^{-1}w)=|\operatorname{Supp}(S)|+c_{0}(w)\). It follows immediately that
\[\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w) =(n-c_{0}(u))+(n-c_{0}(u^{-1}w))\] \[=|\operatorname{Supp}(S)|+n-|\operatorname{Supp}(S)|-c_{0}(w)\] \[=n-c_{0}(w)\] \[=\operatorname{codim\,fix}(w).\]
Then \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\).
Let us now show that \(u\notin[\operatorname{id},w]_{\ell_{R}}\). By Corollary 4 and the defining property of \(S\), \(\ell_{R}(u)>\operatorname{codim\,fix}(u)\). Then we have
\[\ell_{R}(w)=\operatorname{codim\,fix}(w)=\operatorname{codim\,fix}(u)+ \operatorname{codim\,fix}(u^{-1}w)<\ell_{R}(u)+\ell_{R}(u^{-1}w).\]
Therefore, \(u\notin[\operatorname{id},w]_{\ell_{R}}\).
**Corollary 10**.: _If \(w\in G(m,p,n)\) satisfies \([\operatorname{id},w]_{\ell_{R}}=[\operatorname{id},w]_{\operatorname{cdf}}\), then the cycle weights of \(w\) that are not \(0\pmod{p}\) can be partitioned into pairs that sum to \(0\), and any subset of cycle weights that sums to \(0\pmod{p}\) is a disjoint union of some weights that are \(0\pmod{p}\) and some pairs of weights that sum to \(0\)._
Proof.: This follows immediately from Proposition 7 and Proposition 9, once we translate the condition \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)\) to a statement about cycles using Corollary 4.
## 4 Sufficiency
It remains to show that the given conditions are sufficient to imply \([\operatorname{id},w]_{\ell_{R}}=[\operatorname{id},w]_{\operatorname{cdf}}\). We begin with a general lemma on the structure of the cdf-interval below an arbitrary element \(w\).
**Definition**.: Suppose that \(u,w\in G(m,p,n)\). Define an equivalence relation \(\sim\) on the cycles of \(w\), as follows: for two cycles \(C_{1}\) and \(C_{2}\) of \(w\), we have that \(C_{1}\sim C_{2}\) if there exists a cycle of \(u\) that intersects both \(C_{1}\) and \(C_{2}\), and we extend by transitivity. Denote by \(\Pi_{u}(w)\) the resulting set partition of the cycles of \(w\).
**Example 11**.: For any \(w\in G(m,p,n)\), we have \(\Pi_{\operatorname{id}}(w)\) is the fully refined partition in which each cycle belongs to its own part. If \(c\) is an element of \(G(m,p,n)\) that has a single \(n\)-cycle, then \(\Pi_{c}(w)\) is the trivial partition in which all cycles of \(w\) belong to the same part.
For \(w=[(1\ 2\ 3)(6\ 7);(0,0,1,-1,-2,2,0,0)]\) and \(u=[(2\ 3)(5\ 6)(7\ 8);(0,0,0,3,0,3,0,0)]\) in \(G(6,6,8)\), we have
\[\Pi_{u}(w)=[(1\ 2\ 3)\mid(4)\mid(5)(6\ 7)(8)].\]
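Computationally, \(\Pi_{u}(w)\) amounts to a union-find computation: start with one block per cycle of \(w\) and merge two blocks whenever some cycle of \(u\) meets both. The sketch below (ours, not part of the paper) recovers the partition of Example 11, with all indices shifted to \(\{0,\dots,7\}\).

```python
# Compute Pi_u(w) by union-find on the cycles of w.
def cycles(perm):
    seen, out = set(), []
    for start in range(len(perm)):
        if start not in seen:
            cyc, i = [], start
            while i not in seen:
                seen.add(i)
                cyc.append(i)
                i = perm[i]
            out.append(cyc)
    return out

def pi_u_w(u_perm, w_perm):
    w_cycles = cycles(w_perm)
    owner = {pt: idx for idx, cyc in enumerate(w_cycles) for pt in cyc}
    parent = list(range(len(w_cycles)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for cyc in cycles(u_perm):                 # merge w-cycles met by a u-cycle
        root = find(owner[cyc[0]])
        for pt in cyc[1:]:
            parent[find(owner[pt])] = root
    blocks = {}
    for idx, cyc in enumerate(w_cycles):
        blocks.setdefault(find(idx), []).append(cyc)
    return list(blocks.values())

# Example 11, 0-indexed: w = (0 1 2)(5 6), u = (1 2)(4 5)(6 7) on {0, ..., 7}
w_perm = [1, 2, 0, 3, 4, 6, 5, 7]
u_perm = [0, 2, 1, 3, 5, 4, 7, 6]
print(pi_u_w(u_perm, w_perm))
# [[[0, 1, 2]], [[3]], [[4], [5, 6], [7]]]  (up to the order of blocks)
```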
For each part \(B\) of \(\Pi_{u}(w)\), we have by construction that the underlying set of \(B\) is stabilized by (the underlying permutations of) both \(u\) and \(w\). Let \(u|_{B}\) and \(w|_{B}\) be the associated restrictions, both of which may be viewed as elements of \(G(m,1,\#\operatorname{Supp}(B))\subseteq G(m,1,n)\).
**Example 12**.: In the previous example, we have
\[w|_{B_{1}}=[(1\ 2\ 3);(0,0,1,0,0,0,0,0)],\qquad u|_{B_{1}}=[(2\ 3);\mathbf{0}],\] \[w|_{B_{2}}=[\operatorname{id};(0,0,0,-1,0,0,0,0)],\qquad u|_{B_{2}}=[\operatorname{id};(0,0,0,3,0,0,0,0)],\] \[w|_{B_{3}}=[(6\ 7);(0,0,0,0,-2,2,0,0)],\qquad u|_{B_{3}}=[(5\ 6)(7\ 8);(0,0,0,0,0,3,0,0)].\]
The elements \(w|_{B_{1}}\) and \(u|_{B_{1}}\) may be viewed as elements of \(G(6,1,3)\subset G(6,1,8)\) via the embedding in which the first three coordinates are permuted. Note that \(w|_{B_{1}}\) is _not_ an element of \(G(6,6,8)\), since its weight is \(1\) (not \(0\)), even though \(w\) is.
**Proposition 13**.: _Suppose that \(u,w\) are any two elements of \(G(m,p,n)\). Then_
\[\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim}\operatorname{ fix}(u^{-1}w)\geq n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
_Furthermore, in the case of equality, for each part \(B\) of \(\Pi_{u}(w)\) we have that \(c(u|_{B})+c(u^{-1}w|_{B})=\#\operatorname{Supp}(B)-c(w|_{B})+2\) and exactly one of the following holds:_
* \(\operatorname{wt}(w|_{B})=0\) _and all cycles of_ \(u|_{B}\) _and_ \(u^{-1}w|_{B}\) _have weight_ \(0\)_,_
* \(\operatorname{wt}(u|_{B})=\operatorname{wt}(w|_{B})\neq 0\)_, one cycle of_ \(u|_{B}\) _has weight_ \(\operatorname{wt}(w|_{B})\) _while the others have weight_ \(0\)_, and all cycles of_ \(u^{-1}w|_{B}\) _have weight_ \(0\)_, or_
* \(\operatorname{wt}(u^{-1}w|_{B})=\operatorname{wt}(w|_{B})\neq 0\)_, one cycle of_ \(u^{-1}w|_{B}\) _has weight_ \(\operatorname{wt}(w|_{B})\) _while the others have weight_ \(0\)_, and all cycles of_ \(u|_{B}\) _have weight_ \(0\)
Proof.: Choose a part \(B\) of \(\Pi_{u}(w)\). Given any partition of \(\operatorname{Supp}(B)\) into two parts, it may be that there is a cycle of \(w|_{B}\) that includes elements from both parts. If not, there must be a cycle of \(u|_{B}\) that includes elements from both parts (since otherwise the cycles of \(w\) in \(B\) would not be connected under \(\sim\)). Thus, the underlying permutations of \(u|_{B}\) and \(w|_{B}\) generate a group that acts transitively on \(\operatorname{Supp}(B)\). Furthermore, since \(u\) and \(w\) stabilize \(\operatorname{Supp}(B)\), we have \((u|_{B})^{-1}\cdot(w|_{B})=(u^{-1}w)|_{B}\), with \(u|_{B}\) and \(u^{-1}w|_{B}\) generating the same group as \(u|_{B}\) and \(w|_{B}\).
If \(\tau,\sigma_{1},\dots,\sigma_{k}\) are permutations of \([N]\) such that \(\sigma_{1}\cdots\sigma_{k}=\tau\) and the group generated by \(\sigma_{1},\dots,\sigma_{k}\) acts transitively on \([N]\), then [1, Eq. (4)]
\[\sum_{i=1}^{k}c(\sigma_{i})\leq(k-1)N-c(\tau)+2. \tag{2}\]
Taking \(k=2\), \(N=\#\operatorname{Supp}(B)\), and \(\tau,\sigma_{1},\sigma_{2}\) to be the underlying permutations of \(w|_{B},u|_{B},u^{-1}w|_{B}\), respectively, it follows immediately from (2) that
\[c(u|_{B})+c(u^{-1}w|_{B})\leq\#\operatorname{Supp}(B)-c(w|_{B})+2. \tag{3}\]
Summing (3) over all parts \(B\) of \(\Pi_{u}(w)\), we have that
\[c(u)+c(u^{-1}w)\leq n-c(w)+2|\Pi_{u}(w)|. \tag{4}\]
Let us now consider how many cycles of \(u\) and \(u^{-1}w\) may have weight \(0\). Obviously \(c_{0}(u|_{B})+c_{0}(u^{-1}w|_{B})\leq c(u|_{B})+c(u^{-1}w|_{B})\) (since each weight-\(0\) cycle is a cycle). Furthermore, if the total weight of \(w|_{B}\) is nonzero, then (since \(u|_{B}\cdot u^{-1}w|_{B}=w|_{B}\)) at least one of \(u|_{B}\), \(u^{-1}w|_{B}\) must have nonzero weight, and so in this case at least one cycle of at least one of the two factors must have nonzero weight. Therefore we can refine the previous inequality to give
\[c_{0}(u|_{B})+c_{0}(u^{-1}w|_{B})\leq c(u|_{B})+c(u^{-1}w|_{B})-\Big{[} \operatorname{wt}(w|_{B})\neq 0\Big{]}, \tag{5}\]
where the last summand on the right is an Iverson bracket. Summing (5) over all parts \(B\), we have
\[c_{0}(u)+c_{0}(u^{-1}w)\leq c(u)+c(u^{-1}w)-\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
Combining this with (4) yields
\[c_{0}(u)+c_{0}(u^{-1}w)\leq n-c(w)+2|\Pi_{u}(w)|-\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
Subtracting both sides from \(2n\) gives the desired inequality. The equality condition forces equality in (3) and (5) for every part \(B\); when \(\operatorname{wt}(w|_{B})=0\), this forces \(c_{0}(u|_{B})=c(u|_{B})\) and \(c_{0}(u^{-1}w|_{B})=c(u^{-1}w|_{B})\), while otherwise it forces one of \(u|_{B}\) and \(u^{-1}w|_{B}\) to have all cycles of weight \(0\) and the other to have all but one cycle of weight \(0\) (with the extra cycle necessarily of weight \(\operatorname{wt}(w|_{B})\)).
**Proposition 14**.: _Suppose that \(w\in G(m,p,n)\) is arbitrary and \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\). Then_
1. \(\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim}\operatorname{fix}(u^{-1}w)=n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}\) _and_
2. _every part of_ \(\Pi_{u}(w)\) _is either a singleton or consists of exactly two cycles of nonzero weights that sum to_ \(0\)
Proof.: Since \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\), we have \(\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w)=\operatorname{ codim\,fix}(w)=n-c_{0}(w)\). By Proposition 13, we have
\[n-c_{0}(w)=\operatorname{codim\,fix}(u)+\operatorname{codim\,fix} (u^{-1}w)\geq\\ n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}, \tag{6}\]
and hence that
\[c_{0}(w)+c(w)\leq|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of weight $0$}\}. \tag{7}\]
Let \(a\) be the number of parts of \(\Pi_{u}(w)\) consisting of a single weight-0 cycle of \(w\), let \(b\) be the number of parts consisting of a single cycle of nonzero weight, let \(c\) be the number of parts having total weight 0 with more than one cycle, and let \(d\) be the total number of remaining parts. Then of course \(|\Pi_{u}(w)|=a+b+c+d\), \(c_{0}(w)\geq a\), and \(c(w)\geq a+b+2c+2d\). Plugging these inequalities into (7) yields
\[2a+b+2c+2d =a+(a+b+2c+2d)\] \[\leq c_{0}(w)+c(w)\] \[\leq(a+b+c+d)+a+c\] \[=2a+b+2c+d.\]
This immediately forces \(d=0\). Moreover, when \(d=0\), the equality between the first and last terms forces equality in the middle: hence we have equality in (6) (the first of the two desired statements) as well as \(c_{0}(w)=a\) and \(c(w)=a+b+2c\). From \(c_{0}(w)=a\), we learn that each weight-0 cycle in \(w\) belongs to its own part in \(\Pi_{u}(w)\). From \(c(w)=a+b+2c\), we learn that each non-singleton part in \(\Pi_{u}(w)\) consists of exactly two cycles and has total weight 0; since the weight-0 cycles are in singleton parts, it follows that each non-singleton part of \(\Pi_{u}(w)\) contains two cycles, both of which have nonzero weight, and that the two weights sum to 0. This is precisely the second claim.
As a last preliminary step towards Theorem 1, we prove the theorem in the case of certain very special elements.
**Lemma 15**.: _Suppose that \(w\in G(m,p,n)\) is an element with a single cycle, or with two cycles of nonzero weights that sum to \(0\). Then \([\operatorname{id},w]_{\operatorname{cdf}}=[\operatorname{id},w]_{\ell_{R}}\)._
Proof.: We consider three cases.
First, suppose that \(w\) consists of a single cycle of weight 0. By Theorem 2 and the definition of \(\operatorname{codim\,fix}\), \(\ell_{R}(w)=\operatorname{codim\,fix}(w)=n-1\), and therefore by Proposition 8, \([\operatorname{id},w]_{\ell_{R}}\subseteq[\operatorname{id},w]_{\operatorname{ cdf}}\). We consider the reverse inclusion. Let \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\). Since \(w\) has only one cycle, \(\Pi_{u}(w)\) is the unique set partition of the one-element set containing this cycle. By the definition of \(\leq_{\operatorname{cdf}}\), we have
\[\operatorname{codim\,fix}(u)+\operatorname{codim\,fix}(u^{-1}w) =n+1-2+0\\ =n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
Therefore, by Proposition 13, all cycles of \(u\) and \(u^{-1}w\) have weight 0. It follows immediately from Proposition 3 that \(\ell_{R}(u)=\operatorname{codim\,fix}(u)\) and \(\ell_{R}(u^{-1}w)=\operatorname{codim\,fix}(u^{-1}w)\), and hence that \(u\in[\operatorname{id},w]_{\ell_{R}}\), as claimed.
Second, suppose that \(w\) consists of a single cycle of nonzero weight. By Theorem 2 and the definition of \(\operatorname{codim\,fix}\), \(\ell_{R}(w)=\operatorname{codim\,fix}(w)=n\), and therefore by Proposition 8,
\([\operatorname{id},w]_{\ell_{R}}\subseteq[\operatorname{id},w]_{\operatorname{cdf}}\). We consider the reverse inclusion. Let \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\). Since \(w\) has only one cycle, \(\Pi_{u}(w)\) is the unique set partition of the one-element set containing this cycle. By the definition of \(\leq_{\operatorname{cdf}}\), we have
\[\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim} \operatorname{fix}(u^{-1}w) =n+1-2+1\] \[=n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
Therefore, by Proposition 13, one of \(u\) and \(u^{-1}w\) has one cycle of nonzero weight (and some number of cycles of weight \(0\)) and the other has all cycles of weight \(0\); moreover, this nonzero weight is \(\operatorname{wt}(w)\), which is \(0\pmod{p}\). It follows immediately from Proposition 3 that \(\ell_{R}(u)=\operatorname{codim}\operatorname{fix}(u)\) and \(\ell_{R}(u^{-1}w)=\operatorname{codim}\operatorname{fix}(u^{-1}w)\), and hence that \(u\in[\operatorname{id},w]_{\ell_{R}}\), as claimed.
Finally, suppose that \(w\) consists of two cycles of nonzero weights that sum to \(0\). By Theorem 2 and the definition of \(\operatorname{codim}\operatorname{fix}\), \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)=n\), and therefore by Proposition 8, \([\operatorname{id},w]_{\ell_{R}}\subseteq[\operatorname{id},w]_{\operatorname {cdf}}\). We consider the reverse inclusion. Let \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\). We have two sub-cases, depending on the structure of \(\Pi_{u}(w)\).
First, suppose that \(\Pi_{u}(w)\) is the set partition with a single part that contains both cycles of \(w\). By the definition of \(\leq_{\operatorname{cdf}}\), we have
\[\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim} \operatorname{fix}(u^{-1}w) =n+2-2+0\] \[=n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
Therefore, by Proposition 13, all cycles of \(u\) and \(u^{-1}w\) have weight \(0\). It follows immediately from Corollary 4 that \(\ell_{R}(u)=\operatorname{codim}\operatorname{fix}(u)\) and \(\ell_{R}(u^{-1}w)=\operatorname{codim}\operatorname{fix}(u^{-1}w)\), and hence that \(u\in[\operatorname{id},w]_{\ell_{R}}\), as claimed.
Alternatively, it could be that \(\Pi_{u}(w)\) is the set partition of two parts \(B_{1},B_{2}\), each containing exactly one cycle of \(w\). By the definition of \(\leq_{\operatorname{cdf}}\), we have
\[\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim} \operatorname{fix}(u^{-1}w) =n+2-4+2\] \[=n+c(w)-2|\Pi_{u}(w)|+\#\{\text{parts of $\Pi_{u}(w)$ of nonzero weight}\}.\]
By Proposition 13, we know that for either \(i=1,2\), either \(u|_{B_{i}}\) has one cycle with weight \(\operatorname{wt}(w|_{B_{i}})\) and some cycles of weight \(0\) (possibly none) while \(u^{-1}w|_{B_{i}}\) has all cycles of weight \(0\), or the reverse. This leads to three possibilities for \(u\), \(u^{-1}w\): each of them may have all cycles of weight \(0\), may have two cycles of nonzero weight that sum to zero and all other cycles of weight \(0\), or may have one cycle of nonzero weight and all others of weight \(0\) (in which case necessarily the one cycle must have weight \(0\pmod{p}\), since \(u,u^{-1}w\in G(m,p,n)\)). In all three situations, we have immediately from Corollary 4 that \(\ell_{R}(u)=\operatorname{codim}\operatorname{fix}(u)\) and \(\ell_{R}(u^{-1}w)=\operatorname{codim}\operatorname{fix}(u^{-1}w)\), and hence that \(u\in[\operatorname{id},w]_{\ell_{R}}\).
Since the preceding cases are exhaustive, the proof is complete.
We are ready now to complete the second direction of the biconditional in the main theorem.
**Proposition 16**.: _If \(w\in G(m,p,n)\) satisfies \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)\) and any subset of cycle weights that sums to \(0\pmod{p}\) is a disjoint union of some weights that are \(0\pmod{p}\) and some pairs of weights that sum to \(0\), then \([\operatorname{id},w]_{\ell_{R}}=[\operatorname{id},w]_{\operatorname{cdf}}\)._
Proof.: Let \(w\in G(m,p,n)\) be as in the statement. Since \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)\), we have by Proposition 8 that \([\operatorname{id},w]_{\ell_{R}}\subseteq[\operatorname{id},w]_{\operatorname {cdf}}\), and we seek to prove the opposite inclusion. To that end, choose \(u\in[\operatorname{id},w]_{\operatorname{cdf}}\).
Consider the partition \(\Pi_{u}(w)=\{B_{1},\ldots,B_{k}\}\) of the cycles of \(w\) induced by \(u\). Since \(u\in[\mathrm{id},w]_{\mathrm{cdf}}\), we have on one hand by Proposition 14(2) that each \(B_{i}\) either consists of a single cycle of \(w\) or of a pair of cycles whose weights sum to \(0\). On the other hand, we have by Proposition 14(1) that \(\mathrm{codim\,fix}(u)+\mathrm{codim\,fix}(u^{-1}w)=n+c(w)-2|\Pi_{u}(w)|+\#\{\)parts of \(\Pi_{u}(w)\) of nonzero weight\(\}\). Therefore, by the equality case of Proposition 13, we have for each part \(B_{i}\) of \(\Pi_{u}(w)\) that either \(u|_{B_{i}}\) has all cycles of weight \(0\), or that \(u|_{B_{i}}\) has a single cycle whose weight is equal to \(\mathrm{wt}(w|_{B_{i}})\) and some number of other cycles of weight \(0\). Combining these two separate statements leaves us with three possibilities for each part \(B_{i}\): either
* \(w|_{B_{i}}\) consists of two cycles of nonzero weight whose weights sum to \(0\), and \(u|_{B_{i}}\) consists of a number of cycles of weight \(0\), or
* \(w|_{B_{i}}\) consists of a single cycle, and \(u|_{B_{i}}\) consists of a number of cycles of weight \(0\), or
* \(w|_{B_{i}}\) consists of a single cycle of nonzero weight, and \(u|_{B_{i}}\) has one cycle of this weight and possibly some other cycles, all of which have weight \(0\).
By taking the union over all \(B_{i}\), it follows in particular that the multiset of cycle weights of \(u\) is the union of a submultiset of the multiset of cycle weights of \(w\) and a multiset containing some number of copies of \(0\).
Let \(S\) be the set consisting of all cycles \(C\) of \(w\) that fall into the third category above, i.e., for which there is a corresponding cycle of \(u\) supported on \(\mathrm{Supp}(C)\) having the same weight as \(C\). Thus, the multiset of nonzero cycle weights of \(u\) is precisely the same as the multiset of weights of cycles in \(S\). Since \(u\in G(m,p,n)\), the nonzero cycle weights of \(u\) sum to \(0\pmod{p}\), and therefore \(S\) is a set of cycles of \(w\) with the property that the sum of the weights of its cycles is \(0\pmod{p}\). By the hypothesis on \(w\), it follows that there exists a partition (call it \(Q\)) of \(S\) consisting of some singleton sets containing a cycle of weight \(0\pmod{p}\) and some pairs in which the weights are nonzero but sum to \(0\) (and no other parts).
Define a partition \(P_{u}(w)\) of the cycles of \(w\) into parts of size \(1\) and \(2\), as follows:
* First, if two cycles of \(w\) belong to the same part \(B_{i}\) of \(\Pi_{u}(w)\), then they belong to the same part in \(P_{u}(w)\). (The cycles in each pair in this case have nonzero weights that sum to \(0\).)
* Second, if two cycles of \(w\) both belong to \(S\) (so, by definition of \(S\), are not covered in the prior case) and belong to the same part in \(Q\), then they belong to the same part in \(P_{u}(w)\). (The cycles in each pair in this case have nonzero weights that sum to \(0\).)
* Finally, let \(S^{\prime}\) be the set of cycles of \(w\) not covered in either of the prior cases. Since \(w\in G(m,p,n)\), the sum of all its cycle weights is \(0\pmod{p}\). The cycles of \(w\) covered by the preceding cases have total weight \(0\), so the cycles of \(S^{\prime}\) have total weight \(0\pmod{p}\). Therefore, by the hypothesis on \(w\), there is a partition \(Q^{\prime}\) of \(S^{\prime}\) each of whose parts is either a singleton of weight \(0\pmod{p}\) or a pair of cycles of nonzero weight that sum to \(0\). Let \(P_{u}(w)\) include all the parts of \(Q^{\prime}\).
An example of this construction is given as Example 17 below.
Since each part of \(P_{u}(w)\) is a union of parts of \(\Pi_{u}(w)\), the underlying permutations of \(u\) and \(w\) respect the underlying set partition of \([n]\), and it makes sense to speak of the restrictions \(u|_{B}\) and \(w|_{B}\) for \(B\) a part of \(P_{u}(w)\). Furthermore, the partition \(P_{u}(w)\) is a _null_ cycle partition of \(w\): each part of size two has total weight \(0\), and each part of size one has weight \(0\pmod{p}\). Thus,
each element \(w|_{B}\) belongs to \(G(m,p,n)\), not just to \(G(m,1,n)\). Moreover, the same holds true for \(u\) because, by construction, each restriction \(u|_{B}\) either has weight \(0\) or has the same weight as \(w|_{B}\). Thus, we may view the products
\[w=\prod_{B\in P_{u}(w)}w|_{B}\qquad\text{and}\qquad u=\prod_{B\in P_{u}(w)}u|_{B}\]
as factorizations of \(w,u\) into elements of \(G(m,p,n)\).
Next we claim that for each part \(B\) of \(P_{u}(w)\), the element \(u|_{B}\) belongs to \([\operatorname{id},w|_{B}]_{\operatorname{cdf}}\). To see this, we must consider the several cases in which the parts of \(P_{u}(w)\) were constructed. If \(B\) is a part of \(\Pi_{u}(w)\) (consisting of either one or two cycles of \(w\)), then the claim follows by combining Propositions 13 and 14 in the same way as before: Proposition 14 says that we are in the equality case of Proposition 13, and the conditions of the equality case imply that \(\operatorname{codim fix}(u|_{B})+\operatorname{codim fix}(u^{-1}w|_{B})=\operatorname{codim fix}(w|_{B})\) in each case. The other possibility is that \(B\) consists of two cycles, each of which formed a singleton part in \(\Pi_{u}(w)\). This could happen either because the two cycles of \(w\) belonged to the same part in the partition \(Q\) of \(S\), or because they belonged to the same part in the partition \(Q^{\prime}\) of \(S^{\prime}\). We spell out the details for only the first of these two cases; the other is very similar. So suppose that \(B=\{C_{1},C_{2}\}\) consists of two cycles of \(w\) having nonzero weights that sum to \(0\), and that \(u|_{B}\) consists of some number of cycles of weight \(0\), as well as one cycle of the same weight as \(C_{1}\) and another of the same weight as \(C_{2}\). As before, by Propositions 13 and 14, we have for \(i=1,2\) that \(c(u|_{\{C_{i}\}})+c(u^{-1}w|_{\{C_{i}\}})=\#\operatorname{Supp}(C_{i})+1\) and that \(c_{0}(u|_{\{C_{i}\}})=c(u|_{\{C_{i}\}})-1\), \(c_{0}(u^{-1}w|_{\{C_{i}\}})=c(u^{-1}w|_{\{C_{i}\}})\). Substituting the latter two equations into the former, subtracting from \(2\#\operatorname{Supp}(C_{i})\), and adding the two resulting equations for \(i=1,2\) gives \(\operatorname{codim fix}(u|_{B})+\operatorname{codim fix}(u^{-1}w|_{B})=\#\operatorname{Supp}(B)=\operatorname{codim fix}(w|_{B})\), as needed.
Since each element \(w|_{B}\) is either a single cycle of weight \(0\pmod{p}\) or a product of two cycles of nonzero weights that sum to \(0\), we have by Lemma 15 that \(u|_{B}\in[\operatorname{id},w|_{B}]_{\ell_{R}}\). The final result is now a simple calculation: we have
\[\operatorname{codim}\operatorname{fix}(w)=\operatorname{codim}\operatorname{fix}(u)+\operatorname{codim}\operatorname{fix}(u^{-1}w)\leq\ell_{R}(u)+\ell_{R}(u^{-1}w)\stackrel{\text{sub-add.}}{\leq}\sum_{B\in P_{u}(w)}\ell_{R}(u|_{B})+\sum_{B\in P_{u}(w)}\ell_{R}(u^{-1}w|_{B})=\\ \sum_{B\in P_{u}(w)}\left(\ell_{R}(u|_{B})+\ell_{R}(u^{-1}w|_{B})\right)\stackrel{\text{Lem.~15}}{=}\sum_{B\in P_{u}(w)}\operatorname{codim}\operatorname{fix}(w|_{B})\stackrel{\text{cycle partition}}{=}\operatorname{codim}\operatorname{fix}(w),\]
and therefore \(\ell_{R}(u)=\operatorname{codim fix}(u)\) and \(\ell_{R}(u^{-1}w)=\operatorname{codim fix}(u^{-1}w)\), so \(u\in[\operatorname{id},w]_{\ell_{R}}\), as claimed.
**Example 17**.: Consider the elements
\[w=[(1\ 2)(3)(4)(5)(6)(7)(8)(9)(10);(0,1,-1,2,-2,4,-4,8,8,8)]\in G(16,8,10)\]
and \(u=[(1)(2)(3)(4\ 5)(6)(7)(8\ 9)(10);(0,0,0,1,-1,4,-4,0,0,8)]\). Then
\[\Pi_{u}(w)=[(1\ 2)\mid(3)\mid(4)(5)\mid(6)\mid(7)\mid(8)(9)\mid(10)].\]
Observe that this partition of the cycles is not a _null_ cycle partition because in the part \(B_{1}=\{(1\,2)\}\), the cycle weight is not \(0\pmod{8}\). In the bulleted list of three possibilities in the proof of Proposition 16, the first category consists of the parts \(B_{3}=\{(4),(5)\}\) and \(B_{6}=\{(8),(9)\}\), the second category consists of the parts \(B_{1}=\{(1\ 2)\}\) and \(B_{2}=\{(3)\}\), and the third category consists of the parts \(B_{4}=\{(6)\}\), \(B_{5}=\{(7)\}\), and \(B_{7}=\{(10)\}\). Therefore \(S=\{(6),(7),(10)\}\). The desired partition \(Q\) of \(S\) is \([(6)(7)\mid(10)]\) (in this case it is unique). To define \(P_{u}(w)\), we have three steps. The first step creates parts \((4)(5)\) and \((8)(9)\). The second step creates a part \((6)(7)\). In the third step, we have leftover cycles \(S^{\prime}=\{(1\ 2),(3),(10)\}\), and the desired partition \(Q^{\prime}\) is \([(1\ 2)(3)\mid(10)]\). Thus \(P_{u}(w)=[(1\ 2)(3)\mid(4)(5)\mid(6)(7)\mid(8)(9)\mid(10)]\).
In contrast to \(\Pi_{u}(w)\), this partition _is_ a null cycle partition of \(w\). Furthermore the restriction of \(u\) to any part has weight \(0\pmod{p}\) (including possibly weight \(0\)).
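The assertions of this example can likewise be verified mechanically; the self-contained Python sketch below (our own encoding, illustrative only: each cycle of \(w\) is a tuple of \(1\)-indexed positions mapped to its weight) records the cycle weights of \(w\) and tests the null-cycle-partition condition for \(\Pi_{u}(w)\) and \(P_{u}(w)\).

```python
# A self-contained check of Example 17 (illustrative encoding, not from the paper).
m, p = 16, 8
w_cycle_weight = {(1, 2): 0 + 1, (3,): -1, (4,): 2, (5,): -2,
                  (6,): 4, (7,): -4, (8,): 8, (9,): 8, (10,): 8}

Pi_u = [[(1, 2)], [(3,)], [(4,), (5,)], [(6,)], [(7,)], [(8,), (9,)], [(10,)]]
P_u  = [[(1, 2), (3,)], [(4,), (5,)], [(6,), (7,)], [(8,), (9,)], [(10,)]]

def is_null_cycle_partition(partition):
    for part in partition:
        total = sum(w_cycle_weight[c] for c in part) % m
        if len(part) == 1 and total % p != 0:   # singleton: weight 0 (mod p)
            return False
        if len(part) == 2 and total != 0:       # pair: weights sum to 0 (mod m)
            return False
    return True

print(is_null_cycle_partition(Pi_u))  # False, e.g. the part {(1 2)} has weight 1
print(is_null_cycle_partition(P_u))   # True
```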
The proof of the main theorem is essentially trivial at this point.
**Theorem 1**.: _An element \(w\in G(m,p,n)\) satisfies \([\mathrm{id},w]_{\ell_{R}}=[\mathrm{id},w]_{\mathrm{cdf}}\) if and only if the cycle weights of \(w\) that are not \(0\pmod{p}\) can be partitioned into pairs that sum to \(0\), and any subset of cycle weights that sums to \(0\pmod{p}\) is a disjoint union of some weights that are \(0\pmod{p}\) and some pairs of weights that sum to \(0\)._
Proof.: This follows immediately from Corollary 10 and Proposition 16, once we translate the condition \(\ell_{R}(w)=\operatorname{codim}\operatorname{fix}(w)\) in Proposition 16 to a statement about cycles using Corollary 4.
|
2302.05508 | FairPy: A Toolkit for Evaluation of Social Biases and their Mitigation
in Large Language Models | Studies have shown that large pretrained language models exhibit biases
against social groups based on race, gender etc, which they inherit from the
datasets they are trained on. Various researchers have proposed mathematical
tools for quantifying and identifying these biases. There have been methods
proposed to mitigate such biases. In this paper, we present a comprehensive
quantitative evaluation of different kinds of biases such as race, gender,
ethnicity, age etc. exhibited by popular pretrained language models such as
BERT, GPT-2 etc. and also present a toolkit that provides plug-and-play
interfaces to connect mathematical tools to identify biases with large
pretrained language models such as BERT, GPT-2 etc. and also present users with
the opportunity to test custom models against these metrics. The toolkit also
allows users to debias existing and custom models using the debiasing
techniques proposed so far. The toolkit is available at
https://github.com/HrishikeshVish/Fairpy. | Hrishikesh Viswanath, Tianyi Zhang | 2023-02-10T20:54:10Z | http://arxiv.org/abs/2302.05508v1 | # FairPy: A Toolkit for Evaluation of Social Biases and their Mitigation in Large Language Models
###### Abstract
Studies have shown that large pretrained language models exhibit biases against social groups based on race, gender etc, which they inherit from the datasets they are trained on. Various researchers have proposed mathematical tools for quantifying and identifying these biases. There have been methods proposed to mitigate such biases. In this paper, we present a comprehensive quantitative evaluation of different kinds of biases such as race, gender, ethnicity, age etc. exhibited by popular pretrained language models such as BERT, GPT-2 etc. and also present a toolkit that provides plug-and-play interfaces to connect mathematical tools to identify biases with large pretrained language models such as BERT, GPT-2 etc. and also present users with the opportunity to test custom models against these metrics. The toolkit also allows users to debias existing and custom models using the debiasing techniques proposed so far. The toolkit is available at [https://github.com/HrishikeshVish/Fairpy](https://github.com/HrishikeshVish/Fairpy)
## 1 Introduction
Large Pretrained Models have been shown to be powerful tools for capturing the intricacies of language and interpreting textual data. They have been utilized for generating syntactically & semantically valid, realistic snippets of text that are contextually sound. These models are pre-trained on large corpora of text that are collected from the internet. However, due to the inherent biases present in the training data, the models are shown to exhibit biases against various social groups (Bolukbasi et al., 2016; Kirk et al., 2021; Papakyriakopoulos et al., 2020). This is a serious issue that affects the trustworthiness of NLP models and deters the integration of NLP in day-to-day tasks.
Previous works have identified that pretrained models exhibit biases. Their metrics for detecting biases, however, are either theoretical or applicable only in specific conditions, such as gender bias in BERT (Jin et al., 2020). These metrics are, moreover, not widely accessible or easy to adapt into a development environment. We identify this as a pressing need for NLP developers and present Fairpy, a Python-based open-source package that consists of a large set of available bias detection and reduction tools, along with interfaces to connect them to many of the open-source language models as well as user-defined ones. Furthermore, the toolkit includes some of the common datasets and template corpora to evaluate biases.
The detection methods described in this paper can be placed in two categories - The metrics that calculate biases present in the encoding (Bolukbasi et al., 2016; Papakyriakopoulos et al., 2020), wherein, the internal representations of the model's vocabulary exhibit biases. The second set of metrics determine biases by evaluating the behavior of the model, given an input sentence with some perturbation (Kirk et al., 2021; Zhao et al., 2018; Guo and Caliskan, 2021; Magee et al., 2021; Li et al., 2020). The distribution of the output in the context of a social construct such as race or gender determines whether a model associates certain words with certain social groups.
The mitigation methods are mainly classified into those that involve retraining the model with a balanced corpus (Barikeri et al., 2021; Bartl et al., 2020; **?**), reinforcement learning or transfer learning based approches (Park et al., 2018; Liu et al., 2021; Jin et al., 2020). The second set of methods involve post processing and modifying the embeddings and representations of the tokens in the model (Qian et al., 2019; Faal et al., 2022).
While these metrics and mitigation techniques have been successfully applied to specific language models, many of them fail to work in other models, due to the nature of templates used for predicting biases or due to the internal representations used in
the language models.
Causal language models are typically generative, which means that they use unidirectional attention to predict the next word of the sequence. Templates for these language models are typically regular sentences. However, masked language models use bidirectional attention, which means, they consider both the words parsed so far and the words parsed in the future to predict the current word. These language models are typically tested on templates that have blanks in the middle. Such templates cannot be used in generative language models, thus restricting the versatility of metrics that are used on masked templates. We aim to address this issue in our future work by providing a framework for creating more flexible templates.
To evaluate the effectiveness of these metrics, we perform a comprehensive study of these tools and techniques and provide our findings in the results section. We also decouple these techniques to make them general purpose and applicable to a wider range of language models.
Our main contributions are -
* Decoupling bias detection metrics from language models and specific social situations and making them generalized.
* Presenting python interfaces to allow users to plug-and-play their own models to test these biases and a toolkit that compiles most of the available bias detection metrics and bias mitigation techniques
* Analysis of the settings in which these metrics and techniques work and where they fail.
* Evaluation of the impact of these techniques on all the major models available through the huggingface library.
## 2 Related Work
**Bias Detection Methods for Language Models**(Bolukbasi et al., 2016) presented the seminal work in determining the bias in word embeddings using cosine similarity to determine directional relationships between words. They proposed that the embeddings of the words should be equidistant from the opposing members of sensitive classes.
In their work, Kirk et al. (2021) perform an in-depth evaluation of GPT-2 to detect Intersectional biases. They define metrics to check for biases involving intersectional communities while predicting occupations. They also provide data collection protocols for language models and conclude that GPT-2 shows occupational clustering, associating certain groups with particular occupations. However, their work was limited to data collected in the US.
In their work, Papakyriakopoulos et al. (2020) define three types of biases in word embeddings - Pre-existing biases due to input data, technical biases due to architecture and mathematical constraints and emergent biases that are discovered when the model makes predictions. The embeddings were static and generated with Glove. They applied Mann Whitney U test and Kruskal Wallis H Test on text generated for different social groups. Zhao et al. (2018) provide the WinoBias dataset for determining gender biases in language models. They showed the correlation between pronouns and occupations and also proved that pro-stereotypical entities were predicted with higher accuracy given the context.
Guo and Caliskan (2021) present a comprehensive evaluation of emergent intersectional biases in language models such as BERT, GPT-2, GPT and ElMO. They propose methods to determine biases in contextualized embeddings as opposed to static embeddings, called the CEAT test. They also performed evaluation of models against intersectional social classes. Magee et al. (2021) Discuss ways to define social biases exhibited against individuals with health conditions such as psycho-social, sensory disability, neurodiverse conditions, autism etc. They also discuss preferred phrasing and their associations in these settings. They propose community validation and prompt calibration as ways of detecting these biases.
Li et al. (2020) design ways to find confounding factors affecting bias of large transformer based language models. They test whether the predictions depend on the position of the subject or if the predictions change if an attribute is negated. They also provide a framework to determine biases in underspecified questions and seemingly uncorrelated sentences. The metrics were tested on BERT and its variants.
**Biases in Machine Translation**(Sun et al., 2020) define ways to perform mutation of contextual words (changing boys to girls etc.) in the input templates to see how the non-mutated section of the translated text differs. The output is
tested by measuring it's grammatical fluency. Predictive probability and cosine scores are used to determine the bias in the output. Prates et al. (2020) analyze biases in Google Translate and argue that the upper bound on the biases generated by translation systems should coincide with actual societal biases. Their tests involved translating neutral nouns to languages where those nouns are gendered. However, these concerns are no longer valid since Google Translate changed their translation systems to provide alternatives in ambiguous situations. Stanovsky et al. (2019) Perform the first large scale multi-lingual bias detection of Language models using the WinoBias and WinoBender datasets to determine gender biases in Machine Translation. Their method involved translating words into a different language, followed by aligning the terms with the English counterpart to determine the gender of the translated words. They concluded that the presence of stereotypical gendered adjectives reduced the gender mismatch in translated nouns. In their project, Ahn and Oh (2021) introduce the notion of ethnic bias or the probability of predicting an ethnicity or nationality given the context. This metric was tested only on BERT.
**Bias Reduction in Language Models**Park et al. (2018) Evaluated Bias mitigation strategies on older NLP models such as CNN and GRU. They performed transfer learning on debiased corpora and debiased word embeddings by augmenting the existing corpora. They also defined unintended biases and false positive biases that are exhibited by the models.
Liu et al. (2021) noted that political and ideological balance is often ignored in training corpora. They identified two types of political triggers - Direct (Democrat vs. Republican) and Indirect (California vs. Florida). They used reinforcement learning to mitigate the effects of such political imbalance. They used Proximal policy Optimal Training on separate conservative and liberal biases. In their paper, Barikeri et al. (2021) present a conversational dataset called RedditBias, which comprises actual human conversations. This dataset can be used to detect four types of biases - gender, race, religion and queerness. They apply four mitigation techniques: Attribute Distance Debiasing, Counterfactual Data Augmentation, Hard Debiasing and Language Model Debiasing.
Jin et al. (2020) Explore the concept of upstream mitigation as a means to debias BERT. They discuss whether it is possible to fine-tune against multiple biases in a single downstream tasks and provide a transfer learning framework to finetune and mitigate a single bias factor in upstream stage. Qian et al. (2019) Provide methods to remove bias in word-level settings, specifically with Glove Embeddings. They propose a loss function to equalize the probability of predicting words of a particular gender. They show that their method successfully equalizes the probability of predicting "congressman" and "congresswoman", while not changing the probability distributions of "congress man" and "congress woman".
Bartl et al. (2020) Experiment with reducing BERT's gender bias by retraining the model with the GAP corpus. They use real world statistics and cross linguistic approaches to build a bias evaluation template dataset. They also show that masked models exhibit bias in contextualized settings.
Faal et al. (2022) analyze two techniques for detoxifying language models - data-driven and decoding-based techniques. Data-driven methods involve augmenting data and retraining the model. The decoding-based methods involve having a model that decodes the embeddings in an unbiased way as a form of post-processing.
**Toolkits for measuring biases in Language Models**Ribeiro et al. (2020), in their work propose a task agnostic methodology called CHECKLIST to test NLP models for general linguistic capabilities. The tool performs three types of tests - The minimum Functionality test, which comprises of a set of small neutral sentences with simple adjectives. The invariance test performs label preserving perturbations to the template to check if the behavior of the NLP model differs. Lastly, the model uses Directional Expectation Test which adds a sentiment to the template and checks if the model doesn't predict the opposite sentiment. The metrics provided in this work form the basis for bias detection through counterfactual data augmentation but these metrics were not applied to pretrained language models.
Geva et al. (2022) have built an interactive tool called LM-Debugger that illustrates the internal workings of Language Models. They note that most metrics follow probing or perturbation and underscored the need to understand the internal working of these models. At each step a set of tokens are provided and the users are allowed to
choose a token. This choice determines the prediction at the following layers. The trace of the model helps determine how the biases are formed. Nozza et al. (2022) present a social bias detection Engine for measuring social biases and integrating them in development pipeline. They propose a badge system to perform continuous evaluation of biases in CI/CD setting during the development of software that uses language models.
## 3 System Overview
In this section, we discuss the architecture of Fairpy, a toolkit that provides flexible interfaces for applying bias detection and mitigation techniques on existing and custom language models. The toolkit has two packages, the bias detection package and the bias mitigation package. The metrics defined in these packages currently support the following language models provided in the HuggingFace library - CTRL, GPT-2, GPT, TransfoXL, BERT, DistilBERT, RoBERTa, XLM, XLNet, ALBERT. The toolkit also has interfaces to plug in user-defined or custom language models, along with their tokenizers. The Bias Detection package currently provides the following metrics - Hellinger Distance (Beran, 1977), WEAT Score (Caliskan et al., 2017), StereoSet Score (Nadeem et al., 2020), Honest Score (Nozza et al., 2021), Log Probability (Nangia et al., 2020) and F1 Score. The Bias Mitigation package currently supports Dropout Debias, NullSpace Projection, Sentence Debias, DiffPruning, Self Debias and Counter Factual Data Augmentation. Furthermore, the toolkit provides functions to define new social constructs such as neo-pronouns to include non-binary individuals. To our knowledge, this is the first toolkit to provide such an interface. The package is available to download at [https://github.com/HrishikeshVish/Fairpy](https://github.com/HrishikeshVish/Fairpy)
## 4 Bias Detection Metrics
This section describes the bias detection metrics that are currently included in the toolkit.
### Hellinger Distance
As presented in Zhu et al. (2012), the Hellinger distance metric is used to measure the difference between two probability distributions. Let us define two probability distributions P and Q such that P = \(\{p_{i}\}_{i\in[n]}\) and Q = \(\{q_{i}\}_{i\in[n]}\). The Hellinger distance between them is defined as follows
\[h(P,Q)=\frac{1}{\sqrt{2}}\cdot\|\sqrt{P}-\sqrt{Q}\|_{2} \tag{1}\]
The Hellinger distance is computed between the embeddings generated by the language model for two different classes. The input corpus is augmented such that the nouns and pronouns are replaced with members of a particular class, for example, replacing every instance of "him", "his", "he", "man", "guy" with "her", "hers", "she", "woman", "girl" respectively. The augmented corpus is fed to the language model and the output is extracted. This forms the context for the social class. The dot product of this context with the embedding forms the logits. Softmax is applied to these logits to get the probability distribution. The Hellinger distance between these probability distributions provides a measure of how biased the model is towards a particular gender.
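A minimal stand-alone sketch of this computation (illustrative only; it is not the toolkit's actual implementation, and the toy distributions below are made up rather than produced by a language model) is:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, as in Eq. (1)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Toy next-token distributions over a 4-word vocabulary, obtained from the
# original and the gender-swapped context; the numbers are illustrative.
p_masculine = [0.50, 0.30, 0.15, 0.05]
p_feminine  = [0.20, 0.25, 0.15, 0.40]
print(hellinger(p_masculine, p_feminine))  # 0.0 would mean no distribution shift
```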
### WEAT Score
Word Embedding Association Test is a method proposed by Caliskan et al. (2017) and is effective in measuring the cosine similarity between contextualized embeddings generated by language models. WEAT score is computed by partitioning the sets of words into two classes, which generally represent the two binary classes such as Male and Female. Let these be represented as M and F, the attribute classes. The target words are partitioned into two sets X and Y. The WEAT score for X and Y, with respect to M and F is given by the following equation
\[s(X,Y,M,F)=\sum_{x\in X}a(x,M,F)-\sum_{y\in Y}a(y,M,F) \tag{2}\]
Figure 1: The flow diagram highlights the overall architecture of the model, with the nodes labelled in red denoting the bias detection techniques and the nodes in green highlighting the bias mitigation techniques
\[a(w,M,F)=avg_{m}cos(\vec{w},\vec{m})-avg_{f}cos(\vec{w},\vec{f}) \tag{3}\]
where \(m\in M\) and \(f\in F\), and \(a\) refers to the association between the word and the classes. The effect size is given by
\[\frac{\mu_{x}a(x,M,F)-\mu_{y}a(y,M,F)}{\sigma_{w}a(w,M,F)} \tag{4}\]
This toolkit uses the extension of the WEAT metric called SEAT or Sentence Encoder Association Test, as defined in (May et al., 2019) and implemented in (Meade et al., 2021). The metric uses many sets of attribute classes such as age, gender, race, religion and health. The embeddings for the words of these classes are compared against a comprehensive set of common nouns and adjectives with positive and negative implications.
In the ideal case, the embedding representation of each word in the vocabulary is expected to be equidistant from the two attribute classes. Any deviation suggests bias in one direction. Greater the deviation, greater the bias.
This metric is suitable for classes which are binary. However, effort has been made to extend it to the multiclass scenario. One possible way that has been explored in this toolkit is to break down multiclass labels into binary ones. For example, while comparing nationalities, the classes would be Romanian vs. non-Romanian.
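A stand-alone sketch of the WEAT computation (illustrative only; random vectors stand in for the contextualized embeddings that the toolkit would extract from a language model) is:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, M, F):
    """a(w, M, F) of Eq. (3): mean cosine to class M minus mean cosine to F."""
    return np.mean([cosine(w, m) for m in M]) - np.mean([cosine(w, f) for f in F])

def weat_effect_size(X, Y, M, F):
    """Effect size of Eq. (4) over the target sets X and Y."""
    ax = [association(x, M, F) for x in X]
    ay = [association(y, M, F) for y in Y]
    return (np.mean(ax) - np.mean(ay)) / np.std(ax + ay)

rng = np.random.default_rng(0)
M, F = rng.normal(size=(5, 16)), rng.normal(size=(5, 16))  # attribute classes
X, Y = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))  # target word sets
print(weat_effect_size(X, Y, M, F))  # near 0 for unbiased (here random) vectors
```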
### StereoSet Score
StereoSet, proposed by (Nadeem et al., 2020) is a comprehensive dataset of English sentences and a set of tools for measuring biases. The model measures the likelihood of predicting words in sentences with masks in two settings - intersentence and intrasentence. The model measures biases in four social classes - gender, religion, race and profession.
The model comprises two scores - Language Model score and Stereoset score. The Language Model score gives higher score to meaningful associations. This score represents the percentage of associations that are meaningful.
The stereoset score determines the percentage of examples in which the model prefers stereotypical associations over anti-stereotypical associations. The ideal score is 50, where half the examples are stereotyped and the other half aren't.
### Honest Score
Honest metric, which was proposed by (Nozza et al., 2021), is a metric that measures the number of sentence completions that are hurtful in nature. This metric works on both causal models like GPT-2 and masked models like BERT. The prompt for causal models is an incomplete sentence, while for the masked models, it is a sentence with a MASK token which corresponds to the word the will be predicted by the model. The metric inherently uses HurtLex, a multilingual corpus of hurtful words. The metric computes the percentage of predictions that are hurtful. The global Honest score is an average of the percentage of hurtful completions across different classes such as Animals, Crime, Negative Connotations etc. and is given by the following equation
\[\frac{\sum_{s\in\text{class}}\sum_{t\in\text{Predictions}}\operatorname{isHurtful}(t)}{|\text{class}|*k} \tag{5}\]
In the above equation, s denotes the sentences belonging to a particular class, t denotes the top K predictions of the Language Model.
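A schematic version of this computation (illustrative only; a toy word list stands in for HurtLex, the completions are made up, and the per-class averaging of the global score is collapsed into a single pool of templates) is:

```python
def honest_score(topk_completions, hurtful_lexicon, k):
    """Share of hurtful completions among the top-k predictions (cf. Eq. 5)."""
    hurtful = sum(sum(1 for t in topk if t.lower() in hurtful_lexicon)
                  for topk in topk_completions)
    return hurtful / (len(topk_completions) * k)

hurtlex_stub = {"criminal", "ugly"}                  # toy stand-in for HurtLex
completions = [["nurse", "doctor", "criminal"],      # top-3 for one template
               ["kind", "ugly", "smart"]]            # top-3 for another template
print(honest_score(completions, hurtlex_stub, k=3))  # 2 hurtful / 6 = 0.333...
```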
### Log Likelihood
Log Likelihood metric (Nangia et al., 2020) is used to measure how the language model changes its prediction when the social context changes. The corpus is initially fed into the model without any modifications and the output of the language model is the prior probability logits. This corpus is later augmented with counterfactual data, such that the only terms changed are the terms that describe the social class (eg: replacing John with Mary). The log likelihood is then the probability of the tokens, conditioned on the augmented words.
\[LogScore=\sum_{i=0}^{|C|}\log P(w_{i}\in W|W_{w_{i}},A,\theta) \tag{6}\]
In the above equation, w represents the word or the token in the sentence. A is the set of augmented tokens.
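A schematic version of this score (illustrative only; the `token_log_prob` scorer is a placeholder we introduce here, standing in for masking a position and querying the masked language model) is:

```python
import math

def log_score(tokens, augmented_positions, token_log_prob):
    """Sum of log-probabilities of the unmodified tokens, in the spirit of Eq. (6).

    token_log_prob(i, tokens) should return log P(tokens[i] | the other tokens);
    in practice it would mask position i and query a masked language model.
    """
    return sum(token_log_prob(i, tokens)
               for i in range(len(tokens)) if i not in augmented_positions)

uniform = lambda i, toks: math.log(0.1)        # toy scorer, constant probability
original  = ["john", "is", "a", "doctor"]
augmented = ["mary", "is", "a", "doctor"]      # counterfactually swapped name
gap = log_score(original, {0}, uniform) - log_score(augmented, {0}, uniform)
print(gap)  # a value far from 0 would indicate a preference for one variant
```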
## 5 Bias Mitigation Techniques
In the following sub-sections, we present the bias mitigation methods that are currently supported by Fairpy.
### DiffPruning
Diffpruning is an adversarial training mechanism for mitigating biases in text corpora (Hauzenberger
and Rekabsaz, 2022). It consists of a set of three neural networks: one for predicting the feature vector, one that predicts the domain or the origin dataset of the feature vector, and a third one which predicts the class label for the instance. Each model weight \(\theta\) is parameterized as the sum of pre-trained weights \(\theta_{t}\) and the mask \(\delta\). Finetuning is done by optimizing \(\delta\) with L0 regularization. The \(\delta\) term is sparsified with the mask \(z\). The entire process is given by the equation
\[\begin{array}{l}\min_{w_{\tau},\alpha_{\tau},\beta_{\tau}}\frac{1}{N}\sum_{n=1}^{N}\mathcal{L}(y_{n},m(x_{n};\theta+z_{\tau}\circ w_{\tau}))\\ +\lambda\sum_{i=1}^{d}\sigma(\log\alpha_{\tau,i}-\beta_{\tau,i}\log(-\gamma/\zeta))\end{array} \tag{7}\]
### Null Space Projection
Iterative Nullspace projection is a technique used to debias the embedding. Given the embedding matrix and a set of protected attributes, the method aims to remove linear dependency between the two using a linear guarding function Liang et al. (2021). The model is trained to predict the protected attributes from the embedding and this is projected onto the null space of the word embedding to cancel the influence of the embedding on the protected attributes. This toolkit uses an autoregressive version of Iterative Nullspace Projection, as proposed in Liang et al. (2021). This method performs Null space Projection at every time step t to debias the contextual embedding with respect to a protected attribute such as gender.
\[\hat{p}_{\theta}(w_{t}|c_{t-1})=\frac{\exp(e(w_{t})^{T}*P*f(c_{t-1}))}{\sum_{w\in V}\exp(e(w)^{T}*P*f(c_{t-1}))} \tag{8}\]
In the above equation, P is the nullspace of the trained classifier. \(f(c_{t-1})\) is the contextual embedding at the previous time step t-1. Furthermore, the metric uses a parameter \(\alpha\) to determine how to balance the logits of the debiased Language Model with the original one.
\[p_{\theta}(w_{t}|c_{t-1})=\alpha\hat{p}_{\theta}(w_{t}|c_{t-1})+(1-\alpha)p^{*}(w_{t}|c_{t-1}) \tag{9}\]
where \(p^{*}(w_{t}|c_{t-1})\) is the prediction of the original Language Model.
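A toy numerical sketch of Eqs. (8)-(9) (illustrative only; random matrices stand in for the embedding table, the contextual embedding and the trained attribute classifier) is:

```python
import numpy as np
from scipy.linalg import null_space

def debiased_next_token(E, f_ctx, W_attr, alpha):
    """Toy version of Eqs. (8)-(9): project the context onto the nullspace of
    the attribute classifier W_attr, then interpolate with the original
    next-token distribution."""
    N = null_space(W_attr)                 # orthonormal basis of ker(W_attr)
    P = N @ N.T                            # projection matrix onto that nullspace
    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()
    p_debiased = softmax(E @ (P @ f_ctx))  # Eq. (8)
    p_original = softmax(E @ f_ctx)
    return alpha * p_debiased + (1 - alpha) * p_original  # Eq. (9)

rng = np.random.default_rng(0)
E = rng.normal(size=(20, 8))               # toy vocabulary embeddings e(w)
f_ctx = rng.normal(size=8)                 # toy contextual embedding f(c_{t-1})
W_attr = rng.normal(size=(1, 8))           # toy linear "gender" classifier
print(debiased_next_token(E, f_ctx, W_attr, alpha=0.7).round(3))
```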
### Counter Factual Data Augmentation
Counter Factual Data Augmentation or CDA, is a method that aims to change what the Language Model learns by retraining it on a debiased dataset. Rather than changing the architecture of the model or the embedding structure, this method augments the corpus by replacing all instances of words of a social class with another social class. This augmented corpus is used for retraining the model. For this metric, the size and the comprehensive nature of the dataset play a big role in the effectiveness of debiasing. The toolkit currently supports the following datasets - Yelp, Reddit and Wikipedia English.
While augmenting the corpus works best for binary settings, it can be extended into multi-class setting by probabilistically replacing the words of a particular class with a member of any other class. While the toolkit currently has a way to support this, this method has not been tested.
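A toy augmentation pass of this kind (illustrative only; the word lists and the probabilistic multi-class variant below are simplified stand-ins, not the toolkit's implementation) looks as follows:

```python
import random

# Toy counterfactual data augmentation: the binary swap is deterministic, the
# multi-class variant samples a replacement class probabilistically.
BINARY_SWAP = {"he": "she", "she": "he", "him": "her", "her": "him",
               "man": "woman", "woman": "man"}

def augment_binary(sentence):
    return " ".join(BINARY_SWAP.get(tok, tok) for tok in sentence.lower().split())

def augment_multiclass(sentence, class_terms, rng=random):
    """Replace each class term with a uniformly sampled *other* class term."""
    out = []
    for tok in sentence.lower().split():
        if tok in class_terms:
            out.append(rng.choice([c for c in class_terms if c != tok]))
        else:
            out.append(tok)
    return " ".join(out)

print(augment_binary("He said the man saw her"))
# -> "she said the woman saw him"
print(augment_multiclass("the christian and the muslim spoke",
                         ["christian", "muslim", "jewish"]))
```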
### Self Debias
This technique, put forth by Schick et al. (2021) is a way of preventing the generation of biased text by leveraging the internal knowledge of the language model. The debiasing input contains templates of the following type:
\(D_{in}(x,y)\rightarrow\textit{The following text discriminates}\)
_against people because of y; sentence x_
The distributions \(P(w|x)\) and \(P(w|D_{in}(x,y))\) are calculated. The model is encouraged to produce biased output for the second input. The probability of prediction of unbiased words is left intact but the probability of prediction of biased words is reduced by choosing an appropriate value of \(\alpha\) in the below equation
\[\tilde{p}_{M}(w|x)=\alpha(\Delta(w,x,y))\cdot p_{M}(w|x) \tag{10}\]
[Table 1: p-values and effect sizes for the WEAT score (Gender, Race, Religion, Age, Health), the StereoSet score (Race, Gender, Religion, Profession, All), and the Log Probability score (Gender, Race, Religion) for the evaluated language models.]
where \(\Delta(w,x,y)\) is the difference in the probability distributions of the generated text for the regular input and the debiasing template. It is calculated as follows
\[\Delta(w,x,y)=p_{M}(w|x)-p_{M}(w|d_{in}(x,y)) \tag{11}\]
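A toy sketch of this rescaling (illustrative only; the exponential decay constant below is a free choice standing in for the scaling function \(\alpha\)) is:

```python
import numpy as np

def self_debias(p_plain, p_biased_prompt, decay=50.0):
    """Rescale token probabilities in the spirit of Eqs. (10)-(11).

    p_plain is P(w | x) and p_biased_prompt is P(w | D_in(x, y)).  Tokens whose
    probability rises under the self-diagnosis prompt (negative Delta) are
    down-weighted; the decay constant here is an illustrative choice.
    """
    delta = np.asarray(p_plain) - np.asarray(p_biased_prompt)
    scale = np.where(delta >= 0, 1.0, np.exp(decay * delta))
    p = scale * np.asarray(p_plain)
    return p / p.sum()

p_x   = np.array([0.40, 0.35, 0.20, 0.05])  # P(w | x)
p_din = np.array([0.10, 0.60, 0.25, 0.05])  # P(w | D_in(x, y))
print(self_debias(p_x, p_din).round(4))     # the second token is suppressed
```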
### Dropout Regularization
Dropout regularization is a retraining mechanism where the dropout parameters of the language models are tweaked and the models are retrained. This method was first defined by Webster et al. (2020).
## 6 Empirical Analysis
In this section, we provide an empirical analysis of the bias detection and bias mitigation techniques. The results are presented in Table 1, Table 2 and Table 3. The primary findings of the empirical study were as follows. Most metrics were defined in such a way that the prediction probabilities were defined for a word as opposed to tokens. Some of these metrics failed in the case of models whose tokenization splits words into smaller tokens; an example was OpenAI-GPT, which failed to generate the StereoSet score because of the internal representation of the model.
Another case of inconsistency was the final embedding layer of the model. While most models have a final layer that corresponds to the embedding of the output, the way that they are structured and named vary across models, making it hard to provide a single unified interface to perform operations like NullSpace Projection on the models.
Previous works (Meade et al., 2021) have not split the dataset into social categories to run the metrics individually against a specific social class. We have made attempts to resolve this issue by splitting the StereoSet metric and allowing users to run the metric against a single category like 'race'. This increases the speed and reduces the runtime. Furthermore, the dataset used for measuring WEAT scores - CrowS - has also been split into social groups for faster computations, without compromising on the efficiency or accuracy.
In tables 2 and 3, we present the performance of GPT-2 and BERT after they are debiased with Dropout Regularization based retraining. It can be noted that the WEAT score is the average effect sizes. Both the models show a reduction in the effect size. This implies that the method removes a lot of the gender information from the model. This is contradictory to the results presented in Meade et al. (2021).
The two models see an increase in the StereoSet score, indicating that the model predicts stereotypes more often than not. However, this is not a very good indication since it is a measure of the average number of stereotypical and anti-stereotypical predictions.
For both the models, counterfactually augmented Yelp-small dataset was used to retrain.
## 7 Future Work
The pipeline forms the basis for debiasing and testing language models for bias, and the toolkit currently supports a variety of open-source metrics and datasets. However, numerous methods are not included in this toolkit due to compatibility issues and lack of availability. The toolkit currently does not support cascading metrics or hybrid methods that are applied concurrently to the models; supporting these could help us leverage metrics that work in specific social settings. Lastly, the toolkit would greatly benefit from a web interface that would allow users to plug in their models and extract the statistics from a website.
|
2310.05719 | Transformer Fusion with Optimal Transport | Fusion is a technique for merging multiple independently-trained neural
networks in order to combine their capabilities. Past attempts have been
restricted to the case of fully-connected, convolutional, and residual
networks. This paper presents a systematic approach for fusing two or more
transformer-based networks exploiting Optimal Transport to (soft-)align the
various architectural components. We flesh out an abstraction for layer
alignment, that can generalize to arbitrary architectures - in principle - and
we apply this to the key ingredients of Transformers such as multi-head
self-attention, layer-normalization, and residual connections, and we discuss
how to handle them via various ablation studies. Furthermore, our method allows
the fusion of models of different sizes (heterogeneous fusion), providing a new
and efficient way to compress Transformers. The proposed approach is evaluated
on both image classification tasks via Vision Transformer and natural language
modeling tasks using BERT. Our approach consistently outperforms vanilla
fusion, and, after a surprisingly short finetuning, also outperforms the
individual converged parent models. In our analysis, we uncover intriguing
insights about the significant role of soft alignment in the case of
Transformers. Our results showcase the potential of fusing multiple
Transformers, thus compounding their expertise, in the budding paradigm of
model fusion and recombination. Code is available at
https://github.com/graldij/transformer-fusion. | Moritz Imfeld, Jacopo Graldi, Marco Giordano, Thomas Hofmann, Sotiris Anagnostidis, Sidak Pal Singh | 2023-10-09T13:40:31Z | http://arxiv.org/abs/2310.05719v3 | # Transformer Fusion with Optimal Transport
###### Abstract
Fusion is a technique for merging multiple independently-trained neural networks in order to combine their capabilities. Past attempts have been restricted to the case of fully-connected, convolutional, and residual networks. In this paper, we present a systematic approach for fusing two or more transformer-based networks exploiting Optimal Transport to (soft-)align the various architectural components. We flesh out an abstraction for layer alignment, that can generalize to arbitrary architectures - in principle - and we apply this to the key ingredients of Transformers such as multi-head self-attention, layer-normalization, and residual connections, and we discuss how to handle them via various ablation studies. Furthermore, our method allows the fusion of models of different sizes (_heterogeneous fusion_), providing a new and efficient way for compression of Transformers. The proposed approach is evaluated on both image classification tasks via Vision Transformer and natural language modeling tasks using BERT. Our approach consistently outperforms vanilla fusion, and, after a surprisingly short finetuning, also outperforms the individual converged parent models. In our analysis, we uncover intriguing insights about the significant role of soft alignment in the case of Transformers. Our results showcase the potential of fusing multiple Transformers, thus compounding their expertise, in the budding paradigm of model fusion and recombination.
## 1 Introduction
Transformers, as introduced by Vaswani et al. (Vaswani et al., 2017), have profoundly impacted machine learning, establishing a prevailing neural network architecture across various domains. Transformers consistently excel in different fields, including natural language processing (Lin et al., 2022), time series forecasting (Wen et al., 2022), and computer vision (Dosovitskiy et al., 2020). Their success can be attributed to their scaling properties (Kaplan et al., 2020) and efficient utilization of contemporary hardware architectures designed for extensive parallel computing. The unification of a single architecture across tasks facilitates immediate, far-reaching applicability of any analysis that handles general properties of the Transformer architecture.
As large Transformer foundation models (Bommasani et al., 2021) continue to grow in size and complexity, the challenges associated with training, i.e., exponential increase in parameters, and compute for a fixed incremental improvement in performance (Hoffmann et al., 2022; Zhai et al., 2022; Bachmann et al., 2023), become increasingly more perilous. Consequently, achieving state-of-the-art results is often confined to researchers with access to ample GPU resources. To address these issues and strive for more efficient and sustainable performance improvements, we embark on the following more compelling and alternative inquiry:
_Can we combine the capabilities of pre-trained Transformer models?_
Merging multiple Transformer models into a single entity while preserving their unique capabilities can yield several advantages; (a) _Enhanced performance_ by harnessing the collective capabilities of
individual models. (b) _Reduced inference complexity_, as querying a single model replaces the need to query \(n\) models in an ensemble, reducing computational (FLOPs) and storage requirements by a factor of \(n\). (c) _The necessity to train from scratch can be readily eliminated_, leveraging existing public models, already available, and numerous in quantity 1.
Footnote 1: On huggingface there are more than 339,000 models available as of the 22\({}^{\text{nd}}\) of September 2023.
A straightforward way of fusing, i.e., merging, models of the same architecture, is to average their weight matrices one-to-one, referred to as 'Vanilla Fusion' (VF). However, this method overlooks potential misalignments between the parameter matrices, arising due to neurons at the same positions, in different models, encoding different information (Godfrey et al., 2022). Instead, we propose to use Optimal Transport fusion (OTFusion) (Singh and Jaggi, 2020), which at its core, aligns the weight or parameter matrices before fusing them.
Thus, by virtue of such an alignment, OTFusion ensures that the fused model effectively integrates the knowledge and capabilities of the individual models to be merged, rather than simply averaging the weight matrices without guaranteeing meaningful information preservation. Additionally, OTFusion accommodates the fusion of models with different widths, and in turn, different sizes, which is fundamentally not possible with VF. This is a crucial feature, as such heterogeneous models are available in plenty, to better unleash the potential of existing pre-trained models. Consequently, OTFusion has been shown to be an effective method for fusing fully connected (Singh and Jaggi, 2020), convolutional (Nguyen et al., 2021) and recurrent neural networks (Akash et al., 2022) on a variety of tasks, heavily outperforming VF.
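To make the effect of alignment concrete, the toy sketch below (our own illustration on a two-layer perceptron, using a hard permutation where OTFusion would compute a transport map, possibly soft) fuses a model with a functionally identical copy whose hidden neurons have been shuffled: vanilla averaging scrambles the weights, whereas averaging after alignment recovers them exactly.

```python
import numpy as np

def vanilla_fuse(A, B):
    """Vanilla Fusion: one-to-one averaging of parameter matrices."""
    return [(wa + wb) / 2.0 for wa, wb in zip(A, B)]

def aligned_fuse(A, B, perms):
    """Average after aligning B's neurons to A's (hard alignment for brevity;
    OTFusion would instead use a transport map between neurons)."""
    fused = []
    for i, (wa, wb) in enumerate(zip(A, B)):
        wb = wb[perms[i], :]              # reorder output neurons of layer i
        if i > 0:
            wb = wb[:, perms[i - 1]]      # ...and its inputs from layer i-1
        fused.append((wa + wb) / 2.0)
    return fused

rng = np.random.default_rng(0)
A = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]  # 3 -> 4 -> 2 MLP
pi = np.array([2, 0, 3, 1])                             # hidden-neuron shuffle
B = [A[0][pi, :], A[1][:, pi]]                          # same function as A
perms = [np.argsort(pi), np.arange(2)]                  # alignment of B onto A
print(np.allclose(aligned_fuse(A, B, perms)[0], A[0]))  # True
print(np.allclose(vanilla_fuse(A, B)[0], A[0]))         # False
```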
Yet, despite its wide adoption (Nguyen et al., 2021; Liu et al., 2022; Ainsworth et al., 2022), the layerwise procedure proposed in OTFusion does not fit well with contemporary architectural design, which comprises constant residual streams, normalization layers, and attention operations. Hence, the primary aim of our work is to develop techniques that help bridge these gaps and successfully generalize fusion to Transformer-based architectures.
**Our contributions are:** (a) We analyze each of the idiosyncratic architectural components in Transformers in thorough detail, with an ultimate aim to best fuse them across different models. Throughout our discussion, we exposit our approach based on the perspective of _flow of the transportation maps2_, that makes for intuitive visualizations and interpretation. (b) We uncover that, surprisingly, OTFusion based on a _hard-alignment underperforms_ in this context, contrary to the case of fully-connected or convolutional architectures; and that, _soft-alignment plays a key role_ in successful one-shot fusion. (c) We showcase the efficacy of our approach by extensive experimentation involving the fusion and finetuning of Vision Transformers (ViTs) across multiple datasets, including CIFAR10, CIFAR100, Tiny ImageNet and ImageNet-1k, as well as BERT (Devlin et al., 2018) models for natural language tasks. Here, we _consistently outperform_ the original _converged_ models across tasks and datasets, by about \(\sim\) _1.0%_, _while significantly reducing computational and storage costs by a factor of \(n\)_.
Footnote 2: This should be reminiscent of the flow of tensors in the computation graph of neural networks, and thus allows one to see a general strategy that can potentially be adapted to any architecture type.
Overall, our research marks an important stride in advancing model fusion techniques, that help deliver enhanced performance and efficiency for modern Transformer based architectures.
## 2 Related Work
**Model combination and ensembling.** The combination of multiple models has been a timeless idea in machine learning, from classical works on bagging and boosting (Breiman, 1996) to more contemporary approaches (Mienye and Sun, 2022; Garipov et al., 2018; Jolicoeur-Martineau et al., 2023). The key idea behind these works is to boost model performance by capitalizing on the unique strengths of each model while mitigating their individual limitations. Or, more technically, one can think of model combination as a way of reducing the variance of the predictors (Geman et al., 1992). However, the main limitation is that such methods require the execution of each (parent) model for the final prediction, with a cost that scales linearly with the number of models.
**Model Fusion.** Model fusion (Wang et al., 2020; Tatro et al., 2020; Singh and Jaggi, 2020; Wortsman et al., 2022; Matena and Raffel, 2022; Ainsworth et al., 2022; Juneja et al., 2022; Nguyen et al.,
2023; Kandpal et al., 2023) has emerged as a particularly notable direction in recent years, gaining significant traction in the machine-learning community. This line of work focuses on building better model combination approaches that account for the network structure and its inherent symmetries. We elaborate on some of these works, which are more relevant to the focus of our paper, below.
Singh & Jaggi (2020) propose a novel approach based on OT theory exploiting the Wasserstein distance, where the neuron association allows fusing pre-existing models with the same depth in a _one-shot_ fashion, thus without requiring retraining. OTFusion outperforms VF and was successfully used for model compression and fusion of CNNs, residual networks, and multilayer perceptrons. The main limitation of OTFusion is that the models are required to have the same depth. This was then addressed, to some extent, by Nguyen et al. (2021) via cross-layer alignment, an unbalanced assignment problem solved using dynamic programming, where the number of layers of the neural network is balanced before applying layer-wise model fusion. Liu et al. (2022) also built on top of OTFusion, generalizing the work as a graph-matching task and taking into account the second-order similarity of model weights instead of linear alignment.
The interest in model fusion is growing in the research community, and recent efforts on the topic have shown theoretical insights on fusion, extensions of previous algorithms to new network topologies, and fusion of models performing different tasks. In particular, Akash et al. (2022) adapted OTFusion for recurrent networks, such as RNNs and LSTMs. Further, Stoica et al. (2023) propose an algorithm, for convolutional and residual architectures, that aims at finding redundant features within the same model and across the different models to be fused, so as to keep only meaningful and unique features in the fused model.
**Fusion with a focus on Transformers.** Wortsman et al. (2022) consider fusing Transformer models that have a common backbone network that is pre-trained on the same dataset, but that are fine-tuned, say, with different hyperparameters. Owing to this, the models remain sufficiently close in the parameter space, which precludes the need to align them and lets them employ just vanilla fusion (one-to-one averaging of the parameters) while still obtaining a gain in performance.
However, arguably, the more empowering capability is to _fuse transformer networks that are potentially much more distant in their parameter spaces_ and are diverse in nature. For instance, this arises when the networks have different initializations, or see examples in different batch orderings, or when they have different sizes, and more. This specific problem is tackled in this work, which is, to the best of our knowledge, _the first aiming at fusing transformer architectures by aligning their weights_.
## 3 Background
**Optimal Transport (OT).** OT (Villani et al., 2009) has gained prominence in machine learning for its ability to compare probability distributions effectively, with applications in generative modelling (Arjovsky et al., 2017), class incremental learning (Zhou et al., 2021) and model compression (Li et al., 2021). At its heart, OT aims to find a transport map (TM) \(\mathbf{T}\) signifying how much of a discrete source distribution should be moved towards a discrete destination distribution to align the two. This alignment can be hard (\(\mathbf{T}\) is a permutation matrix and the solution to the Earth-Mover's Distance, EMD, (Rubner et al., 2000) problem) or can be relaxed yielding a soft alignment (solved with the Sinkhorn-Knapp algorithm (Knight, 2008)). The softness of the alignment is controlled by a regularization parameter \(\lambda_{\text{sinkhorn}}\), where lower values result in harder alignment. More details about OT can be found in the Appendix A.1.
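To make the two alignment regimes concrete, the following minimal NumPy sketch implements the Sinkhorn-Knopp iteration on a toy problem; the weight matrices, ground cost, and regularization values are illustrative assumptions only and are not tied to any particular OT library.

```python
import numpy as np

def sinkhorn(a, b, M, reg, n_iters=200):
    """Entropy-regularized OT: soft transport map with marginals a and b."""
    K = np.exp(-M / reg)           # Gibbs kernel; smaller reg -> closer to a hard (EMD) map
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
Wa, Wb = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))   # 4 "neurons" per model
M = ((Wa[:, None, :] - Wb[None, :, :]) ** 2).sum(-1)        # pairwise ground cost
M /= M.max()
a = b = np.full(4, 0.25)                                    # uniform neuron masses

T_soft = sinkhorn(a, b, M, reg=0.5)    # soft alignment
T_hard = sinkhorn(a, b, M, reg=0.02)   # nearly a (scaled) permutation matrix
```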
**OTFusion.** Singh & Jaggi (2020) apply this theory to align networks in a layerwise fashion, using either weights or activations as underlying distributions. After the alignment of one or more models to an anchor model, these are then averaged. Formally, for a layer \(\ell\) of the model, the transpose of the TM of the previous layer is pre-multiplied with the weight matrix of the current layer: \(\widehat{\mathbf{W}}^{(\ell,\ell-1)}\leftarrow\mathbf{T}^{(\ell-1)^{\top}} \mathbf{W}^{(\ell,\ell-1)}\). The current layer can then be aligned by post-multiplying with the TM of the current layer: \(\widehat{\mathbf{W}}^{(\ell,\ell-1)}\leftarrow\widehat{\mathbf{W}}^{(\ell, \ell-1)}\mathbf{T}^{(\ell)}\). Ainsworth et al. (2022) propose a highly similar approach which, in certain cases, effectively boils down to the same linear programming problem that uncovers (provably and practically) the same alignments as OT; thus we continue to base our approach on OTFusion henceforth.
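The layerwise recipe can be summarized in a few lines. The sketch below assumes a plain MLP with weights stored in the (input, output) convention of the formulas above, uses a hard (permutation) alignment obtained with `scipy.optimize.linear_sum_assignment` as a stand-in for the OT solver, and leaves the final layer unpermuted; it is an illustration of the procedure, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hard_tm(W_model, W_anchor):
    """Permutation TM matching the output neurons (columns) of two weight matrices."""
    cost = ((W_model.T[:, None, :] - W_anchor.T[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    T = np.zeros_like(cost)
    T[rows, cols] = 1.0
    return T

def otfuse_mlp(weights_model, weights_anchor):
    """Hard-alignment OTFusion of two MLPs; each layer weight is an (n_in, n_out) matrix."""
    fused, T_prev = [], np.eye(weights_model[0].shape[0])   # inputs share a common order
    for i, (W_m, W_a) in enumerate(zip(weights_model, weights_anchor)):
        W_hat = T_prev.T @ W_m                      # undo the previous layer's alignment
        if i == len(weights_model) - 1:             # output neurons keep their meaning
            T_curr = np.eye(W_m.shape[1])
        else:
            T_curr = hard_tm(W_hat, W_a)
        W_hat = W_hat @ T_curr                      # express neurons in the anchor's order
        fused.append(0.5 * (W_hat + W_a))
        T_prev = T_curr
    return fused
```

In this convention the second argument plays the role of the anchor model, and the soft (Sinkhorn) variant is obtained by swapping `hard_tm` for a soft solver such as the one sketched earlier.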
## 4 Methodology and Implementation
With a modular architecture like the transformer, it is intuitive to use a divide-and-conquer approach to develop a fusion algorithm. Therefore, we first divide the architecture into its simplest building block -- fully connected layers -- which can be fused by the prevalent OTFusion strategy. The question remains: how to effectively connect these building blocks, especially if heterogeneous? How to hierarchically reconstruct a fully fused transformer while ensuring consistency of the single fused blocks?
As we tackle this problem, we will guide our discussion with a transport flow perspective, which allows for an intuitive and effective concatenation of blocks of any sort, and that, therefore, in principle can be applied to every architecture. Henceforth, we will use the notation from Vaswani et al. (2017) for Transformers. We showcase our methodology in the non-masked self-attention case, but our method can generalize to the cross-attention or causal masked attention.
### 4.1 Transportation Map Flow Graph
In the typical OTFusion application, the TM of the previous layer is simply passed to the next layer. However, in more complex architectures, the incoming TM of a layer can depend on multiple TMs. To formalize and visualize this flow of TMs, we present the _Transportation Map Flow Graph_.
To introduce the concept, we use the flow graph of a residual connection (Fig. 1) as an example. Rectangles represent the neural network layers; red nodes represent any non-learnable computations or permutations inside the network; edges represent the propagation of the TMs. Layers have exactly one incoming and one outgoing edge. Computation nodes always have multiple incoming edges and one outgoing edge, where the outgoing TM must depend on the incoming TMs. The most challenging aspect of applying OTFusion to complex architectures is determining the ideal strategy for propagating TMs through red nodes.
### 4.2 Transformer Fusion
#### 4.2.1 Residual Connections
In residual connections, the outputs of a current layer and a residual layer are summed up. The TMs coming from these two layers will be different, therefore the ideal TM flow strategy has to be determined. We explored three heuristics to calculate a weighting vector \(\mathbf{\gamma}^{(\ell)}\), where each entry \(\gamma_{i}^{(\ell)}\) scales the corresponding rows of the TMs. After obtaining \(\mathbf{\gamma}^{(\ell)}\) we compute the weighted average as shown in Eq. 1. Find the results in Sec. 5.1.
\[\mathbf{\mathrm{T}}^{(\ell)}_{\text{out}}=\mathbf{\mathrm{T}}^{(\ell)}_{\text{current }}\text{diag}(\mathbf{1}-\mathbf{\gamma}^{(\ell)})+\mathbf{\mathrm{T}}^{(\ell)}_{\text{ residual}}\text{diag}(\mathbf{\gamma}^{(\ell)}) \tag{1}\]
**Averaging.** For plain averaging, as proposed by Singh and Jaggi (2020), we set \(\forall\,i,\,\gamma_{i}=0.5\). This heuristic does not depend on activations and can therefore be used even in the case of weight-based alignment. However, it introduces the strict assumption that the residual and the current layer TM are of equal importance when aligning the subsequent layer.
**Weighted Scalar.** To alleviate the equal contribution constraint from the averaging method, we compute a weighting factor \(\forall\,i,\,\gamma_{i}^{(\ell)}=\gamma_{\text{scalar}}^{(\ell)}\) (Eq. 2). We use the activations of the anchor model, over a batch of samples \(S\), because only those carry information about the importance of the current \(f_{\text{current}}^{(\ell)}(\mathbf{\mathrm{x}})\) and the residual branch \(f_{\text{residual}}^{(\ell)}(\mathbf{\mathrm{x}})\) in the anchor model.
\[\gamma_{\text{scalar}}^{(\ell)}=\frac{\sum_{\mathbf{\mathrm{x}}\in S}|f_{\text{ residual}}^{(\ell)}(\mathbf{\mathrm{x}})|}{\sum_{\mathbf{\mathrm{x}}\in S}|f_{\text{ current}}^{(\ell)}(\mathbf{\mathrm{x}})|+\sum_{\mathbf{\mathrm{x}}\in S}|f_{\text{ residual}}^{(\ell)}(\mathbf{\mathrm{x}})|} \tag{2}\]
**Weighted Matrix.** As opposed to the Weighted Scalar method, here, we calculate a weight vector \(\mathbf{\gamma}^{(\ell)}\) where each entry \(\gamma_{i}^{(\ell)}\) weighs each residual connection separately.
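A compact sketch of the three heuristics follows; the per-neuron form of the weighted-matrix rule is our reading of the description above, and the activation arrays are assumed to be anchor-model activations of shape (samples, width).

```python
import numpy as np

def combine_residual_tms(T_current, T_residual, acts_current=None,
                         acts_residual=None, mode="average"):
    """Combine the two TMs meeting at a residual addition, following Eq. 1."""
    width = T_current.shape[0]
    if mode == "average":                      # gamma_i = 0.5 for all i
        gamma = np.full(width, 0.5)
    elif mode == "scalar":                     # Eq. 2: one scalar from |activations|
        r = np.abs(acts_residual).sum()
        c = np.abs(acts_current).sum()
        gamma = np.full(width, r / (c + r))
    elif mode == "matrix":                     # per-neuron analogue of Eq. 2
        r = np.abs(acts_residual).sum(axis=0)
        c = np.abs(acts_current).sum(axis=0)
        gamma = r / (c + r)
    return T_current @ np.diag(1.0 - gamma) + T_residual @ np.diag(gamma)
```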
Figure 1: TM flow graph for a residual connection.
We note that Ainsworth et al. (2022) propose to propagate either the identity (\(\mathbf{T}_{\text{out}}=\mathbf{I}\)) or the residual transportation map itself (\(\forall\,i,\,\gamma_{i}^{(l)}=1\)). In the case of hard alignment, these methods perform worse than averaging.
#### 4.2.2 Multi-Head Attention
The attention mechanism (Fig. 2) poses multiple challenges when it comes to TM flow: what are the incoming TMs for \(\mathbf{W}^{Q},\mathbf{W}^{K}\) and \(\mathbf{W}^{V}\)? Which TM is propagated to \(\mathbf{W}^{O}\)? How to handle attention with multiple heads?
The first challenge is conveniently solved by the TM flow graph. We can simply use the TM from the previous layer for each \(\mathbf{W}^{Q}\), \(\mathbf{W}^{K}\) and \(\mathbf{W}^{V}\). This even holds true for multiple heads. The incoming TM of \(\mathbf{W}^{O}\) is more complex to obtain because it depends on the outgoing TMs of \(\mathbf{W}^{Q}\), \(\mathbf{W}^{K}\), and \(\mathbf{W}^{V}\). However, if we constrain both TMs of \(\mathbf{W}^{K}\) and \(\mathbf{W}^{Q}\) to be equal permutation matrices (i.e., hard alignment with \(\mathbf{T}_{Q}=\mathbf{T}_{K}=\mathbf{T}_{QK}\)), we observe that the permutation matrices cancel each other out inside the softmax (see Eq. 3). This shows that the product in the softmax is undisturbed by the alignment and that the TMs of \(\mathbf{W}^{K}\) and \(\mathbf{W}^{Q}\) do not have to be propagated. Thus, only the outgoing TM of \(\mathbf{W}^{V}\) is propagated to \(\mathbf{W}^{O}\).
We also investigate alleviating the constraint of equal TMs for \(\mathbf{W}^{K}\) and \(\mathbf{W}^{Q}\) fusion and the propagation of \(\mathbf{T}_{V}\) in the context of soft alignment.
\[\widetilde{\mathbf{Q}}=\mathbf{QT}_{QK}\quad\text{and}\quad\widetilde{\mathbf{ K}}=\mathbf{KT}_{QK}\quad\text{and}\quad\widetilde{\mathbf{Q}}\widetilde{ \mathbf{K}}^{\top}=\mathbf{QT}_{QK}\mathbf{T}_{QK}^{\top}\mathbf{K}^{\top}= \mathbf{QK}^{\top} \tag{3}\]
Finally, we address the fusion strategy for the multi-head architecture. Attention heads have the property of being permutation invariant with respect to other heads, meaning that one can swap one head with another without disrupting the structure of the attention mechanism. Additionally, there is no intrinsic one-to-one correspondence between the heads of different Transformer models. To incorporate both these observations into our algorithm we propose cross-head alignment. During cross-head alignment, \(\mathbf{W}_{i}^{Q}\), \(\mathbf{W}_{i}^{K}\) and \(\mathbf{W}_{i}^{V}\) are concatenated across the output dimension to form three combined weight matrices. OTFusion can then be directly applied to the concatenated matrices and \(\mathbf{T}_{V}\) can be propagated to \(\mathbf{W}^{O}\).
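Both observations above can be checked numerically in a few lines; the shapes and names in this sketch are illustrative and do not correspond to any specific implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, d_head, seq_len = 16, 4, 4, 5
X = rng.normal(size=(seq_len, d_model))
W_Q = rng.normal(size=(d_model, n_heads * d_head))
W_K = rng.normal(size=(d_model, n_heads * d_head))

# Shared hard alignment T_Q = T_K = T_QK: the permutation cancels inside Q K^T (Eq. 3),
# so nothing from W^Q / W^K needs to be propagated towards W^O.
T_QK = np.eye(n_heads * d_head)[rng.permutation(n_heads * d_head)]
Q, K = X @ W_Q, X @ W_K
assert np.allclose(Q @ K.T, (Q @ T_QK) @ (K @ T_QK).T)

# Cross-head alignment: per-head projections are concatenated along the output
# dimension, so a single TM over all n_heads * d_head columns may also swap whole heads.
W_Q_per_head = [W_Q[:, h * d_head:(h + 1) * d_head] for h in range(n_heads)]
W_Q_combined = np.concatenate(W_Q_per_head, axis=1)   # shape (d_model, n_heads * d_head)
```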
#### 4.2.3 Layer Normalization, Embeddings and Bias
**The layer normalization** is a learnable neural network parameter and consequently must be fused. It contains only two parameters (\(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\)) per input and there are no interconnections between different inputs and outputs. Therefore, no TM has to be computed for this layer. The parameters are only aligned w.r.t. the incoming TM. The incoming TM is then propagated to the subsequent layer.
**The ViT embeddings** fusion approach is most effectively conveyed by its TM flow graph, as depicted in Fig. 3. For the concatenation, we notice that the class token is only a small fraction of the full sequence; in other words, for the integrity of the sequence, it is far more important to propagate the TM of the patch embeddings than the one for the class token. After concatenation, the positional embeddings are added. We notice that the addition is the same operation as for residual connections, so we can use one of the three TM flow strategies from Sec. 4.2.1.
**The bias** is only connected to the output of a neural network layer, so we align it using the outgoing TM of the corresponding layer.
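In the hard-alignment case these rules reduce to re-ordering the per-feature parameter vectors with the relevant TM, as in the following minimal sketch (variable names are ours).

```python
import numpy as np

def align_layernorm(alpha, beta, T_in):
    """Carry the per-feature LayerNorm scale/shift into the anchor's ordering (hard TM)."""
    return T_in.T @ alpha, T_in.T @ beta

def align_bias(bias, T_out):
    """A bias lives on the layer output, so it follows the layer's outgoing TM."""
    return T_out.T @ bias
```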
Figure 3: ViT embeddings flow graph.
Figure 2: Self-Attention flow graph.
### 4.3 Alignment Strategies
**Soft vs. Hard Alignment.** Singh and Jaggi (2020) find that OTFusion works best when using the EMD solver, which computes permutation matrices as TMs. However, we do not want to limit the search space for optimal alignment to only permutation matrices, as this seems too constraining for complex architectures. We therefore explore using the Sinkhorn algorithm and tuning the softness of the TM by optimizing over the Sinkhorn regularizer.
**Weights vs. activations alignment.** The weight-based approach introduced by Singh and Jaggi (2020) can be directly applied to Transformers, while the activation-based strategy needs a bit more thought. Transformers operate on sequences of tokens, as opposed to simpler architectures that only operate on one token at a time. In our activations-based algorithm, we treat every token of the sequence as a possible activation.
**Sequence Filtering.** In Transformers, it is evident that not every token within a sequence contributes equally to an output. For instance, for an image classification task with ViTs, it is clear that at the end of the encoder chain, all information must have been moved into the class token, while the other tokens of the sequence will no longer contribute to the classification. Our hypothesis is that activations-based alignment performs best if it is performed using only the most important tokens in the sequence. Therefore, we explored filtering out the least relevant information. For datasets where images are centered, we propose window filtering, where only an \(n\) by \(n\) window of patches is selected for every image (window_n). Additionally, we explored what happens if only the class token is used to perform the activations-based alignment (only_cls).
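A sketch of the two filtering strategies on a batch of ViT activations is given below; the token layout (class token first, then a square patch grid) is the usual ViT convention, and the function name and defaults are ours.

```python
import numpy as np

def filter_tokens(acts, grid, mode="window_6", n=6):
    """acts: (batch, 1 + grid*grid, width) activations; returns only the kept tokens."""
    if mode == "only_cls":
        return acts[:, :1, :]                               # class token only
    patches = acts[:, 1:, :].reshape(acts.shape[0], grid, grid, -1)
    lo = (grid - n) // 2                                    # central n x n window
    window = patches[:, lo:lo + n, lo:lo + n, :]
    return window.reshape(acts.shape[0], n * n, -1)
```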
## 5 Experiments and Results
We evaluate the quality of our approach with two prominent transformer-based architectures: the ViT (Dosovitskiy et al., 2020) and BERT (Devlin et al., 2018). Our focus is to assess the performance and robustness of our proposed fusion techniques in both image and NLP domains. These models offer a direct comparison as they share the same encoder-only architecture.
We conducted our experiments on multiple well-known image classification datasets: CIFAR 10, CIFAR100, Tiny ImageNet, and ImageNet-1k. We used Hugging Face both for the implementation of the ViT and for retrieving the datasets. Besides the image classification tasks, we showcase our fusion strategy on the BERT model for an NLP task. We train from scratch multiple BERT models on the masked language modeling (MLM) task presented in Devlin et al. (2018) over a subset of the Wikipedia dataset, publicly available on the Hugging Face Hub3.
Footnote 3: [https://huggingface.co/datasets/wikipedia/viewer/20220301.simple](https://huggingface.co/datasets/wikipedia/viewer/20220301.simple)
**Model Training.** First, we train individual models from scratch on each dataset until _convergence_. We ensure model diversity by initializing each model with different seed values and different batch randomization. This results in unique models with similar performance but with a large diversity in their parameter space, enough to allow for a consistent performance gain when ensembled, as well as for a dramatic drop in performance if fused with a naive approach such as VF. This diversity offers a challenging fusion problem requiring a non-trivial alignment strategy, and thus effectively recreates a plethora of other scenarios (e.g. models trained on different (sub)datasets). Details and training parameters of all models can be found in Appendix B.
**Model Fusion.** We assessed the proposed fusion strategies, and their combinations thereof, on the CIFAR 10 dataset (refer to the ablation studies in Section 5.1). We measure the performance through the so-called _one-shot_ capability, namely the performance of the fused model, without any retraining, on the same task and metric as the parents. This capability is a first important proxy of the capacity of the fusion algorithm to align and then fuse the parent models. The optimal fusion strategy identified on the CIFAR 10 task is then applied to the other tasks and architectures. For each task and alignment strategy (i.e. weights-based and activations-based) we optimize the Sinkhorn regularizer separately (see Fig. 10). The fusion step runs in just seconds on a general-purpose CPU.
**Finetuning.** Besides the _one-shot_ performance, similarly to Singh and Jaggi (2020); Nguyen et al. (2021), we evaluate the effect of finetuning the fused model. The resulting performance is compared against the single parent models at _convergence_ (which thus do not benefit from finetuning), their
ensembling, and the VF model that also went through a round of finetuning. Both our fused model and the VF model are optimized separately over a common set of reasonable hyperparameters.
**Note.** In every result table or caption we encode the model dimension as (_hidden-layer dimension/intermediate-layer dimension/number of encoders_). Additionally, we report the relative computational burden (latency and FLOPs) below each result table entry.
### 5.1 One-shot Experiments
We optimize the fusion strategy on CIFAR 10, searching the configurations previously introduced. In contrast with the observations of Singh and Jaggi (2020) with non-transformer architectures, we observe that a soft-alignment (Sinkhorn) strategy consistently outperforms hard-alignment (EMD). The value of the Sinkhorn regularizer is chosen to maximize the one-shot accuracy (separately for activations- and weights-based alignment). The optimal strategy for handling the residual connections has proven to be the _averaging_ policy. Activations-based alignment with the 6x6 window filtering (_window_6_) approach performs best among the filtering strategies, and also outperforms weights-based alignment.
In Tab. 1, we present the _one-shot_ performance for the best configuration of fusion with the weights-based alignment and the activations-based alignment, both in the scenario with two models and with five models together. VF drops dramatically to random accuracy, while our fusion methodologies are able to preserve most of the capabilities of the individual models. In particular, we achieve the **best accuracy with our soft, activations-based fusion**.
Fig. 4 visualizes a two-dimensional slice of the accuracy landscapes of the anchor model and the two fused models, OT and VF. The visualization is based on the procedure outlined in (Garipov et al., 2018): computing the accuracy on linear interpolations of the parameters along two axes defined by the three models, with one of them (here, the anchor model) serving as the origin. The plot shows the OT model being in the same basin as the anchor one, while the VF model is separated by a barrier from such basin. This representation effectively underscores the superior performance of our algorithm in comparison to VF, emphasizing its ability to facilitate more dependable knowledge transfer.
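For reference, the slice can be computed as in the following sketch of the procedure of Garipov et al. (2018); `evaluate` stands for any routine that loads a flattened parameter vector into the network and returns its test accuracy, and the grid range is an illustrative choice.

```python
import numpy as np

def landscape_slice(theta_anchor, theta_ot, theta_vf, evaluate, steps=11):
    """Accuracy on a 2D grid spanned by the (anchor -> OT) and (anchor -> VF) directions."""
    u, v = theta_ot - theta_anchor, theta_vf - theta_anchor
    alphas = np.linspace(-0.25, 1.25, steps)
    grid = np.zeros((steps, steps))
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            grid[i, j] = evaluate(theta_anchor + a * u + b * v)
    return grid
```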
**Ablation Studies.** In this paragraph, we study the effect of the different OTFusion hyperparameter choices on the _one-shot_ performance on the CIFAR10 dataset for two-model fusion. From Fig. 4(a), it is evident that alleviating the constraint of hard alignment (EMD) allows for better performance retention. We attribute this observation to the flexibility of soft alignment, which better accommodates the highly complex nature of the transformer, such as multi-head self-attention. We observe a bell-shaped curve with a maximum for a non-zero regularization, thus demonstrating that the optimal alignment is neither hard nor merely soft. We can therefore optimize this parameter with an inexpensive sweep. Furthermore, as shown in Fig. 4(b), the soft alignment for the activations-based fusion is much more
| Dataset | Individual Models | VF | OT-wts (ours) | OT-acts (ours) | OT-acts EMD | Gain over VF |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | [92.34, 92.31] | 7.59 | 57.23 | **60.87 \(\pm\) 0.44** | 24.50 \(\pm\) 5.66 | +53.28 |
| CIFAR10 | [92.34, 92.31, 92.28, 92.04, 91.47] | 9.47 | 44.46 | **46.56 \(\pm\) 0.71** | 43.28 \(\pm\) 2.81 | +37.09 |

Table 1: _One-shot_ accuracies on the CIFAR10 dataset for the individual parent models, VF, weights-based soft-alignment fusion (\(\lambda_{\text{sinkhorn}}=0.06\)), activations-based soft-alignment fusion (\(\lambda_{\text{sinkhorn}}=0.08\)), and activations-based hard-alignment (EMD) fusion. Activations-based results are reported with mean and standard deviation over different random seeds. For our best-performing method, we add the absolute increase over VF.
Figure 4: Two-dimensional slice of the accuracy landscapes of the anchor and one-shot OT and VF fused models.
stable than hard alignment (EMD) for different seeds of data, suggesting that hard alignment is much more impacted by the activations.
### 5.2 Finetuned Performance
As a last stage of the experimental setup, we finetune the fused models. The performance, as well as the retraining curves, offer an important insight into the quality of the fusion algorithm. While the _one-shot_ performance can be heavily impacted by even only a single problematic layer, the capacity of the fused model to effectively, rapidly, and easily recover the performance of the parents allows for a deeper insight into the quality of the fusion across the whole architecture.
We show the finetuning results on the widely adopted datasets CIFAR100, and ImageNet-1k (results on Tiny ImageNet in the Appendix). We first employ our fusion approach on the ViTs trained on the CIFAR100 dataset. As mentioned, we separately optimize the fused model on a common set of hyperparameters, in this case a learning rate (LR) in \(\{10^{-3},10^{-4},10^{-5}\}\) and the number of epochs in \(\{10,20,100,200\}\). In Tab. 2 we observe that **both our soft-alignment strategies** (i.e. with weights- and activations-based alignment) **are capable of outperforming the
| Dataset | Ind. Models | Ens. | Ft. VF | Ft. OT-wts | Ft. OT-acts |
| --- | --- | --- | --- | --- | --- |
| CIFAR100 | [64.94, 64.66] | 68.04 | 64.91 (+0.03) | **65.80** (+0.86) | 65.35 (+0.41) |
| | \(\times 1\) | \(\times 2\) | \(\times 1\) | \(\times 1\) | \(\times 1\) |
| CIFAR100 | [64.94, 64.66, 64.44, 64.38, 64.34, 64.07] | 70.71 | 63.19 (+0.75) | **65.98** (+1.04) | 65.25 (+0.31) |
| | \(\times 1\) | \(\times 6\) | \(\times 1\) | \(\times 1\) | \(\times 1\) |

Table 2: Post-finetuning accuracies on the CIFAR100 dataset for the individual parent models, their ensemble, VF, and weights- and activations-based soft alignment. Model dimension: (384/1536/7).
Figure 5: (a) Sinkhorn regularizer effect on _one-shot_ performance; (b) stability with different seeds for activations-based fusion over a different number of samples; (c) performance with different activations-filtering strategies for a different number of samples; (d) different transport map policies for residual connections over a different number of samples.
**converged parents**, with a gain that increases with the number of parent models. This suggests a successful knowledge transfer from the parents into the fused model. While the obtained accuracy lags behind the ensembling performance, in our scenario there is no computational overhead, while the cost of the ensembling model grows linearly with the number of models.
In Tab. 3 we present further results on the challenging and widely-adopted ImageNet-1k dataset. The results are consistent with those found in the CIFAR100 case, strengthening the _general applicability_ of our method and its _scalability to larger models and more challenging datasets_. We also stress the fact that, especially with this difficult dataset, even after finetuning, VF fails to recover a comparable accuracy, converging to suboptimal performance.
In this work, we focused on the vision application of the Transformer architecture, but our method adapts readily to architectural changes, and we demonstrate its wide applicability on the BERT model. Although preliminary explorations of our fusion strategy on the BERT model show some differences with respect to the ViT case (more details on this are provided in Appendix C), the results are on par with those presented above. In particular, the fused and finetuned model outperforms both parents and VF on the widely adopted _GLUE_ benchmark (Wang et al., 2018). The results are presented in Tab. 17 of the App. D.
Our methodology, as opposed to VF, works out of the box with models having different widths (heterogeneous fusion). _We find a consistent absolute increase in test accuracy over the performance of the smaller anchor network_, thus implying successful knowledge transfer (Tab. 4). These results showcase that our method is an effective and _efficient alternative to knowledge distillation_.
## 6 Discussion
The fusion methodology for transformer models proposed in this paper is easily adapted to different architectural variants and is readily applicable to models of different widths. However, heterogeneous fusion of networks of different depths is a common limitation of the predominant fusion methods (Ainsworth et al., 2022; Singh & Jaggi, 2020), which are inherently based on a sequential layerwise alignment. Consequently, we too inherit a similar limitation when expanding fusion to the case of Transformers. Overall, extending Transformer fusion (or, broadly speaking, fusion at large) to heterogeneous depth settings is undoubtedly a fascinating research challenge, which, however, is outside the scope of the current work.
**In summary**, we showcased how distinct independently trained transformer networks can be combined through the lens of Optimal Transport. Utilizing a novel graph interpretation of the transportation map flow, we developed an algorithm for fusing multiple transformer networks that extends the existing fusion techniques and that specifically caters to the idiosyncrasies of the transformer architecture. We also uncovered an intriguing benefit of using soft alignment when fusing Transformers, which had been under-utilized in the past. Overall, we showed that our technique can retain most of the performance of the converged parent models in _one-shot_, and even outperforms them after finetuning, across multiple vision and NLP tasks, proving the scalability and wide applicability of our methods and thereby providing a highly efficient and promising alternative to ensembling. Finally, our algorithm successfully applies to the fusion of models of different sizes, too, efficiently transferring knowledge from larger to smaller Transformers, and thus offering an effective alternative to distillation.
| Anchor | Larger | Ens. | Ft. OT-wts |
| --- | --- | --- | --- |
| 63.18 | 64.94 | 67.66 | **64.11 (+0.93)** |
| \(\times 1\) (192/1536/7) | \(\times 4\) (384/1536/7) | \(\times 5\) | \(\times 1\) (192/1536/7) |
| 64.07 | 64.79 | 67.94 | **64.88 (+0.81)** |
| \(\times 1\) (384/1536/7) | \(\times 2.3\) (576/2304/7) | \(\times 3.3\) | \(\times 1\) (384/1536/7) |

Table 4: Results for heterogeneous fusion on the CIFAR100 dataset. Note that VF cannot be applied for this type of fusion because the parent models have different widths.
| Dataset | Ind. Models | Ens. | Ft. VF | Ft. OT-wts |
| --- | --- | --- | --- | --- |
| ImageNet-1k | [75.33, 74.88] | 76.56 | 67.83 (+0.50) | **75.80 (+0.47)** |
| | \(\times 1\) | \(\times 2\) | \(\times 1\) | \(\times 1\) |

Table 3: Accuracies on the ImageNet-1k dataset after finetuning for the individual parent models, their ensemble, VF, and weights-based soft alignment. Model dimension: (384/1536/12).
## Acknowledgements
Sidak Pal Singh would like to acknowledge the financial support from Max Planck ETH Center for Learning Systems.
|
2305.09395 | 2D electromagnetic simulations of RF heating via inductive coupling in
the SPIDER device | SPIDER is the prototype ion source of MITICA, the full-size neutral beam
heating system conceived for the ITER tokamak. It includes eight drivers to
heat and sustain the inductively coupled plasma (ICP). Owing to their near
cylindrical symmetry, the coupling between the radio-frequency (RF) active
currents and the source plasma is studied using a 2D electromagnetic approach
with simplified expressions for the plasma electrical conductivity taken from
the literature. The power absorbed by the plasma and the effect of the induced
plasma currents in lowering the inductance of the driver are based on data from
the dedicated S16 experimental campaign (y.~2020) of SPIDER: plasma electron
densities on the order of $10^{18}$ m$^{-3}$, electron temperatures $\sim 10$
eV; neutral gas pressure $\sim 0.3$ Pa and up to $50$ kW of net power per
driver. It is found that the plasma conductivity cannot be explained by the
friction forces associated to local collisional processes alone. The inclusion
of an effective collisionality associated to non-local processes seems also
insufficient to explain the experimental information. Only when the electrical
conductivity is reduced where the RF magnetic field is more intense, can the
heating power and driver inductance be acceptably reproduced. We present the
first 2D electromagnetic ICP calculations in SPIDER for two types of plasma,
without and with the addition of a static magnetic field. The power transfer
efficiency to the plasma of the first drivers of SPIDER, in view of these
models, is around 50% | D. López-Bruna, P. Jain, M. Recchia, B. Zaniol, E. Sartori, C. Poggi, V. Candeloro, G. Serianni, P. Veltri | 2023-05-16T12:29:23Z | http://arxiv.org/abs/2305.09395v1 | # 2D electromagnetic simulations of RF heating via inductive coupling in the SPIDER device
###### Abstract
SPIDER is the prototype ion source of MITICA, the full-size neutral beam heating system conceived for the ITER tokamak. It includes eight drivers to heat and sustain the inductively coupled plasma (ICP). Owing to their near cylindrical symmetry, the coupling between the radio-frequency (RF) active currents and the source plasma is studied using a 2D electromagnetic approach with simplified expressions for the plasma electrical conductivity taken from the literature. The power absorbed by the plasma and the effect of the induced plasma currents in lowering the inductance of the driver are based on data from the dedicated S16 experimental campaign (y. 2020) of SPIDER: plasma electron densities on the order of \(10^{18}\) m\({}^{-3}\), electron temperatures \(\sim 10\) eV; neutral gas pressure \(\sim 0.3\) Pa and up to 50 kW of net power per driver. It is found that the plasma conductivity cannot be explained by the friction forces associated to local collisional processes alone. The inclusion of an effective collisionality associated to non-local processes seems also insufficient to explain the experimental information. Only when the electrical conductivity is reduced where the RF magnetic field is more intense, can the heating power and driver inductance be acceptably reproduced. We present the first 2D electromagnetic ICP calculations in SPIDER for two types of plasma, without and with the addition of a static magnetic field. The power transfer efficiency to the plasma of the first drivers of SPIDER, in view of these models, is around 50%.
## 1 Introduction
The heating and sustainment of the plasma in the source of the SPIDER (Source for the Production of Ions of Deuterium Extracted from a Radio frequency
plasma) device [1, 2, 3] is realized by means of eight heaters based on the inductive coupling between radio frequency waves and the plasma (ICP). Each heater, commonly called "driver", consists of a cylindrical chamber with one open end facing the plasma expansion region. Here, the hot plasma generated in the driver diffuses and its temperature decreases so as to enhance the survival probability of negative ions prior to their extraction into an energetic beam. The ICP is, consequently, an important element in the simulation of the plasma in the driver and expansion regions. Previous works in SPIDER have been dedicated to obtaining numerical tools to estimate the fraction of the net input power that is absorbed by the plasma, namely the power transfer efficiency, depending on the main external controls of the drivers, such as the power output from the generator, the RF frequency and the gas pressure in the chamber of the drivers [4, 5]. In this case, a basic zero-dimensional transport problem is solved in order to provide some meaningful feedback between the power delivered to the plasma and its characteristics, which in turn determine the induction process through the plasma conductivity. With the present paper we start a program of complementary calculations of the inductive coupling between the active RF current and the plasma considering spatial distributions of electron density and temperature. As a first step, we solve the equation for the induced electric field in the plasma region assuming perfect cylindrical symmetry, which renders the problem bi-dimensional (2D). Since one important objective is to provide a heating module for the available 2D fluid transport code [6], we impose 2D plasmas with parameters taken from the experimental campaigns of the SPIDER device without addressing the transport problem.
Electromagnetic calculations are based on well-known equations. Once the boundary conditions are established, the essential difficulty in order to gain information out of the calculations consists of the properties of the medium, i.e., the conductivity of the plasma inside the driver. This is a theoretical problem. On the other hand, the possibility to confront the consequences of a model conductivity with the experimental data requires that the thermodynamic properties of the plasma be known to an acceptable extent. This is an experimental problem. There are many works devoted to the complicated theoretical problem (see [7] and references therein), which is out of the scope of this paper. On the other hand, previous works for SPIDER have made use of simplified expressions for the plasma conductivity that seem to give a fair account of the induction process. The development of reliable calculation tools is a mandatory step before the physics of electrical conduction can be tackled. Therefore, in this work we use experimental information to study the ICP according to the same electrical conductivity models in order to check their adequacy to represent the behaviour of SPIDER plasmas, thus setting the basis for further studies.
After this introduction, Section 2 gives a brief description of SPIDER and the 2D electromagnetic calculations. In Section 3 we present the experimental information that will be used later on, which concerns the basic experimental knobs (gas pressure and species, current in the RF coils, RF frequency and delivered power) and the distributions of electron density and temperature inside the drivers. Section 4 is dedicated to briefly recall the models of plasma
conductivity that will be used in the calculations presented in Section 5. The work finishes with a discussion (Section 6) based on calculations of the absorbed power in plasmas without and with filter magnetic field, and a summary (Section 7). Details of the induction equation are given in Appendix A.
## 2 2D electromagnetic model of the drivers
### 2.1 Basic geometry and parameters
The plasma source in SPIDER is based on the prototype ELISE developed at IPP Garching [8]. It is composed of four pairs of drivers of circular cross-section designed to operate at 200 kW to give a total maximum power of 800 kW. Figure 1 shows a schematic view of one of the eight identical drivers. The filling gas (H\({}_{2}\) or D\({}_{2}\)) is injected through the rear plate inlets at a pressure of \(\sim 0.3\) Pa and is heated mainly using induction coils wound around each driver. The coil is separated from the driver chamber by an alumina insulator that covers the cylindrical side of the Faraday shield. The latter is a copper structure that protects the insulator from sputtering, while permitting the RF field penetration in the plasma region through eighty shaped incisions. The RF-sustained plasma diffuses out to the expansion chamber and towards the plasma grid (PG) surface, where a Cesium coating decreases the work function in order to favour the creation of negative ions [9]. Just before the PG there is a biased plate divided in five segments. Its adjustable potential allows for absorbing variable quantities of electrons with the aim of optimizing the electric field distribution in the region where the negative ions must be produced. See
Figure 1: Side view of a driver of the SPIDER device showing the gas inlets and rear plate at the left side, the lateral Faraday shield structure that covers the plasma chamber (in green) and the section of the RF winding.
Refs. [1, 10, 2] for more details about the experimental device.
For technical reasons, the power delivered by each generator during the 2020 campaigns was limited to 100 kW, about half the design value; i.e., the power was limited to 50 kW per driver. Table 1 collects the main parameters associated with the drivers of SPIDER. In the next campaigns the precise geometry will be slightly different from the one used so far: the driver case will be made of quartz with a reduced spacing between the RF winding and the driver [11]. In addition, the coil will make exactly eight turns around the driver, instead of the present 8.5 turns. At present, in consideration of the comparison with data of the 2020 campaigns, we use the values in Table 1 and define a model driver approximating its geometry by a cylinder sector in a radial-axial plane of dimensions \(r_{\rm p}\times\Delta z_{\rm p}\). The RF winding is simplified to a set of eight filamentary circular wires located 1.7 cm away (in radius) from the plasma region and centered along the axis. As we shall see, a modification of the vacuum field created by these wires can reasonably describe the induction process. The frequency is fixed to \(f=1\) MHz.
The design of SPIDER includes controllable currents in the PG and nearby conducting bus-bars to produce a static filter magnetic field, which is dedicated to reduce both the temperature and the amount of plasma electrons just before the PG. In principle, the filter field is designed to act mainly in the expansion region, but there is still a non negligible stray magnetic field inside the drivers, especially near the opening to the expansion region. The design of the filter-field circuitry has been improved so as to make the field approximately perpendicular to the axis of the cylindrical cavity of the drivers. The values of the filter magnetic field near the exit of the driver (the opening to the expansion region) is not identical for all drivers because there is some dependence on their vertical position, being somewhat more intense at the top and bottom of the drivers layout. The magnetic field intensity decreases towards the driver interior down to \(\approx 2/3\) of the value at the exit [12]. The operation of the SPIDER source is often done with PG currents on the order of the kA, producing fields in the mT
| Parameter | Value |
| --- | --- |
| driver chamber (plasma) radius | \(r_{\rm p}=0.137\) m |
| driver depth | \(\Delta z_{\rm p}\equiv z_{\rm end}-z_{\rm ini}=0.149\) m |
| RF winding width | \(W_{\rm b}=0.096\) m |
| RF winding radius | \(R_{\rm b}=0.154\) m |
| feeding current amplitude | \(I_{\rm RF}\sim 200\) A |
| feeding current frequency | \(f\approx 10^{6}\) Hz |
| power | 10-100 kW |
| driver vacuum impedance | \(R_{\rm d}\sim 2\ \Omega\); \(L_{\rm d}\sim 10\ \mu\)H |
| electron temperature | \(T_{\rm e}\sim 10\) eV |
| electron density | \(n_{\rm e}\sim 10^{18}\) m\({}^{-3}\) |
| gas pressure (H\({}_{2}\)) | \(\sim 0.3\) Pa |
| filter magnetic field | \(0\sim 4\) mT |

Table 1: Main parameters of the drivers in the SPIDER device.
range. Experiments have also been done without the filter field, which provide the reference plasmas to study magnetic field effects. At the same time, even in the absence of filter field, the amplitude of the induced magnetic fields can easily reach values of several mT in the main volume of the drivers.
### 2.2 Electromagnetic model
#### 2.2.1 Equation and boundary conditions
Given the relatively low frequency of the RF feeding currents and the fact that we are solving for the induced electric field \(\mathbf{E}\) in the plasma domain, where there are no active currents, the electromagnetic problem reduces to solving the homogeneous equation
\[\nabla^{2}\mathbf{E}-\imath\omega\mu_{0}\sigma\mathbf{E}=0. \tag{1}\]
The angular frequency \(\omega\) comes from considering the current in the RF winding as a single harmonic, \(I_{\mathrm{RF}}\propto\cos(\omega t)\). Aside from geometrical data, the input for the calculations consists of the amplitude of this current and the characteristics of the medium, which in this work is represented by a scalar conductivity, \(\sigma\). The assumed cylindrical symmetry of the problem allows us to reduce the three equations associated with \(\mathbf{E}\) to one equation for the azimuthal component \(E_{\theta}\). Appendix A gives the details.
Equation 1 represents a typical boundary-value problem. The solution in vacuum is known for a filamentary circular loop (see, for instance, Refs. [13, 14]) and can be obtained numerically as a function of the elliptic integrals \(K\) and \(E\),
\[G(k)=\frac{(2-k^{2})K(k^{2})-2E(k^{2})}{k}, \tag{2}\]
with the variable
\[k^{2}=\frac{4R_{\mathrm{b}}r}{(R_{\mathrm{b}}+r)^{2}+(z-z_{\mathrm{b}})^{2}}.\]
Here \(R_{\mathrm{b}}\) and \(z_{\mathrm{b}}\) represent, respectively, the radius from the axis and the constant-\(z\) plane where a given loop lies. We note that \(K(0)=E(0)\) and the function \(G\) at the axis (\(r=0\Rightarrow k^{2}=0\)) is proportional to \(K(0)-E(0)\). The corresponding limit [15] is
\[\lim_{k\to 0}G(k)=\lim_{k\to 0}2\frac{K(k^{2})-E(k^{2})}{k}=\lim_{k\to 0}2k\frac{\pi}{4}=0. \tag{3}\]
With these functions, the vector potential produced in space by one filamentary circular coil of radius \(R_{\mathrm{b}}\) carrying a current \(I_{\mathrm{RF}}\) is
\[A_{\theta}(r,z)=\frac{\mu_{0}I_{\mathrm{RF}}}{2\pi}\sqrt{\frac{R_{\mathrm{b} }}{r}}G(k). \tag{4}\]
Since \(E_{\theta}=-\imath\omega A_{\theta}\), the vacuum field in presence of \(N_{\mathrm{b}}\) current loops of radius \(R_{\mathrm{b}_{i}}\) and centered in \(z_{\mathrm{b}_{i}}\) is the addition of the fields produced by each of them,
\[E_{\theta}^{\mathrm{V}}(r,z)=-\imath f\mu_{0}\sum_{i=1}^{N_{\mathrm{b}}}I_{ \mathrm{b}_{i}}\sqrt{\frac{R_{\mathrm{b}_{i}}}{r}}G(r,z,R_{\mathrm{b}_{i}},z _{\mathrm{bi}}), \tag{5}\]
where \(f=\omega/2\pi\). The radii \(R_{\mathrm{b}_{i}}=R_{\mathrm{b}}\) and the currents \(I_{\mathrm{b}_{i}}=I_{\mathrm{RF}}\) are common to the loops of the RF winding with which we describe the true spiral winding in figure 1, while the values \(z_{\mathrm{b}i}\) change for each of the \(N_{\mathrm{b}}\) loops. According to the design of the drivers in SPIDER, we take \(N_{\mathrm{b}}=8\). Expressions 4 and 5 are valid due to the relatively small \(\omega\) and the small dimensions of the driver, negligible with respect to the RF wavelength.
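For reference, Eqs. 2-5 can be evaluated directly with SciPy's complete elliptic integrals. The sketch below is not the code used for the calculations; in particular, the axial positions chosen for the eight loops are illustrative assumptions consistent with the winding width of Table 1.

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def G(k2):
    """Eq. 2, with the elliptic integrals taken as functions of the parameter k^2."""
    k = np.sqrt(k2)
    return ((2.0 - k2) * ellipk(k2) - 2.0 * ellipe(k2)) / k

def E_theta_vacuum(r, z, R_b=0.154, z_b=None, I_RF=200.0, f=1.0e6):
    """Complex azimuthal vacuum field of Eq. 5 at (r, z), r > 0, from N_b = 8 loops."""
    if z_b is None:
        z_b = np.linspace(0.02, 0.116, 8)   # illustrative loop positions over W_b = 0.096 m
    E = 0.0 + 0.0j
    for zb in z_b:
        k2 = 4.0 * R_b * r / ((R_b + r) ** 2 + (z - zb) ** 2)
        E += -1j * f * MU0 * I_RF * np.sqrt(R_b / r) * G(k2)
    return E

print(E_theta_vacuum(0.1, 0.07))   # field a few centimeters inside the driver
```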
Strictly speaking, the boundary conditions should be taken far enough from the system, where it can be said that the fields are zero to good approximation, and then solve for all the current densities in the calculation domain including all passive conductors and the RF winding itself. In order to allow for a calculation in the plasma domain alone, appropriate boundary conditions must be found. There are four segments where boundary conditions apply, see figure 1(a). At \(r=0\) (driver axis) we take the condition \(E_{\theta}=0\) pertinent to the cylindrical symmetry of the problem. At the back side of the driver, \(z=z_{\mathrm{ini}}\), there is a molybdenum-coated copper disk. The large difference between the conductivities in the metalic disk and the plasma, together with the known condition of equal field components tangent to the boundary, allows us to set the field in this boundary to zero. There are two remaining boundaries, the side facing the expansion zone and the line at the cylindrical surface of the Faraday shield. The theoretical solution in vacuum, Eq. 5, is useful to obtain adapted boundary conditions in the absence of plasma. In the presence of plasma, however, the currents induced in it will further modify the boundary values, which calls for an iterative process.
**Void driver.** Let us begin with the boundary conditions for a void driver, i.e., for the driver without plasma but considering the passive metallic structures. We start by considering the vacuum solution, Eq. 5, as a shape function that
Figure 2: (a) Boundary conditions at the four sides of the rectangular grid used in the induced electric field calculations. Two options are showed for the opening to the expansion region. (b) Imaginary part of the induced electric field without plasma from a finite-element-method calculation that includes passive elements (black line) and from the modified Eq. 7 for the vacuum theoretical field, Eq. 5 (red dashes).
respects the more intense field near the RF winding, but uses a global constant factor \(f_{\rm c}<1\) to reduce its maximum value. Since the nature of Eq. 5 imposes \(f_{\rm c}E_{\theta}^{\rm V}(r_{\rm p},0)\neq 0\), the condition on the cylindrical lateral surface is forced to zero at \(z=z_{\rm ini}\) using a smooth step function, \(\tanh[(z-z_{\rm ini})/2\delta]\), with \(\delta=0.01\) m. Likewise, the field near the opening to the expansion region is very small due to the presence of another metallic casing, which is different from the pure vacuum field due to the circular wires alone. Here again, we set a smooth step function to force near-zero values by this metallic border. Together with the former one, we have opted for the following shaping function
\[s(z)=\tanh\left(\frac{z-z_{\rm ini}}{2\delta}\right)\times\frac{1}{2}\left[1+ \tanh\left(-\frac{z-0.0417+\delta}{2\delta}\right)\right]. \tag{6}\]
Figure 2b represents, with a black line, the out-of-phase (imaginary) part of the induced electric field calculated with a 2D Finite-Element-Method for the geometry of the drivers of SPIDER without plasma, where the highly conducting metallic passive parts of the driver are taken into account and the domain includes the RF coils. With red dashes we represent the function
\[E_{\theta}^{\rm void}(r_{\rm p},z)=f_{\rm c}s(z)E_{\theta}^{\rm V}(r_{\rm p},z) \tag{7}\]
with \(f_{\rm c}=0.38\) to show that our description of the boundary field without plasma at \(r=r_{\rm p}\) is a reasonable choice. Both calculations have been done using a same current in the winding, which can be taken here as a scaling factor because the problem, so far, is linear. The in-phase (real) part of the field Eq. 7 is zero according to Eq. 5, which can be taken also as a good approximation to the very small values obtained with the FEM calculation.
It must be noted that the boundary values shown in figure 2b have also an experimental counterpart: the inductance of the driver in the absence of plasma, \(L_{\rm d}\). The FEM calculation gives a driver inductance \(L_{\rm d}^{\rm FEM}=9.4\ \mu\)H, in quite good agreement with the measured \(L_{\rm d}=9.6\ \mu\)H [16]. This value cannot be attained with the 2D electromagnetic code because of the limited calculation domain. It is possible, however, to define a reduced value using the flux linked by the RF coils considering only the plasma region. In vacuum we obtain \(L_{\downarrow}=6.2\ \mu\)H. Since, according to figure 2b, this value of \(L_{\downarrow}\) must be practically the same using the FEM calculation, we can use it as an estimate of the inductance compatible with \(L_{\rm d}^{\rm FEM}\) and, consequently, with \(L_{\rm d}\). In what follows we shall refer to this reduced value \(L_{\downarrow}\).
For compatibility with the boundary condition at the cylindrical surface of the Faraday shield, we must have \(E_{\theta}^{\rm void}(r_{\rm p},z_{\rm end})=0\) at this edge of the opening to the expansion region. A possibility is to take a null field all along this region of the boundary, \(E_{\theta}^{\rm void}(r,z_{\rm end})=0\). There are two arguments in favor of this choice. First, according to the FEM calculation, the void-driver field at this boundary is comparatively small. Second, not only the theoretical vacuum field \(E_{\theta}^{\rm V}(r,z_{\rm end})\) is also considerably reduced in this boundary region, but the iterative process described below tends to make it evanesce. This has been verified considering only one step function in \(z=z_{\rm ini}\), that is,
\(\tanh[(z-z_{\rm ini})/2\delta]\), thus leaving the other end with the value \(E_{\theta}^{\rm void}(r,z_{\rm end})=f_{\rm c}E_{\theta}^{\rm V}\), as also indicated in figure 2. Since, as mentioned, this option renders \(E_{\theta}^{\rm void}(r,z_{\rm end})\) quite small, and this boundary field decreases even further in presence of the plasma, we have simplified the calculations that follow with the fixed boundary condition \(E_{\theta}^{\rm void}(r,z_{\rm end})=0\). In summary, the boundary conditions applied to the void driver are
\[E_{\theta}^{\rm void}(0,z) = 0 \tag{8}\] \[E_{\theta}^{\rm void}(r,0) = 0\] (9) \[E_{\theta}^{\rm void}(r_{\rm p},z) = f_{\rm c}s(z)E_{\theta}^{\rm V}(r_{\rm p},z)\] (10) \[E_{\theta}^{\rm void}(r,z_{\rm end}) = 0, \tag{11}\]
with \(s(z)\) as in Eq. 6 and \(f_{\rm c}=0.38\) to match the void driver inductance.
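A sketch of the shaping of Eqs. 6, 7 and 10 is given below, reusing the vacuum-field sketch of Sec. 2.2.1; the constant 0.0417 m follows Eq. 6, the axial origin \(z_{\rm ini}\) is left as a parameter of the illustration, and this is only a schematic of the boundary construction.

```python
import numpy as np

def s(z, z_ini, delta=0.01):
    """Smooth double-step shaping of Eq. 6 (constants as quoted there)."""
    return np.tanh((z - z_ini) / (2 * delta)) * 0.5 * (
        1.0 + np.tanh(-(z - 0.0417 + delta) / (2 * delta)))

def E_void_boundary(z, z_ini, r_p=0.137, f_c=0.38):
    """Void-driver boundary field of Eqs. 7/10 at r = r_p (uses E_theta_vacuum above)."""
    return f_c * s(z, z_ini) * E_theta_vacuum(r_p, z)
```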
**Iterations with plasma.** Once the boundary conditions for the void driver have been set, we can use them as an initial guess when there is a plasma that reacts with induced currents. Here it is very important to keep in mind that the experimental inductance of the drivers decreases very little in presence of the plasma (Sec. 3.2). It could be argued that, in consequence, an iterative process that refines the boundary conditions according to the plasma response is barely necessary. This would be true if we knew a precise formula for the plasma conductivity, which is not the case. On the contrary, we seek a description of the plasma that, based on experimental profiles, gives a plasma reaction that is compatible with the experimental knowledge of RF power delivered and approximately constant inductance. Such is the purpose of the iterative process. The essential information that will be gained from the calculations consists of the absorbed power and the decrement of the inductance (directly related with the value of the electric field at \(r=r_{\rm p}\)) in presence of the plasma.
The iterative process begins with the boundary conditions Eq. 8-11. The driver inductance nears the void-driver value \(L_{\rm d}\), to which we have associated a calculated \(L_{\downarrow}\) considering only the plasma region. Successive steps in the iterative process are as follows:
1. The initial \(E_{\theta}(r,z)\) provides a distribution of plasma current densities \(J_{\theta}(r,z)=\sigma(r,z)E_{\theta}(r,z)\) that, given its symmetry, can be considered as a set of current-carrying loops located at positions \((R_{{\rm b}_{i}},z_{{\rm b}_{i}})\) for each \(i\)-th loop, from which a contribution Eq. 5 can be calculated at the boundary points \((r_{\rm p},z)\) and, eventually, \((r,z_{\rm end})\).
2. The new boundary values consist of the initial boundary field corrected by the contribution from the plasma current loops.
3. The process is repeated until the correction to the boundary \(E_{\theta}\) is considered negligible.
Let us change slightly the notation to better describe the iterative process. We use the symbol \(\partial\) for the boundary of the calculation domain, where the
electric field (we recall it only has azimuthal component), \(E_{\partial}\), depends on the currents (electric fields) in the whole plasma, \(E({\bf x})\). We symbolize this by writing \(E_{\partial}[E({\bf x})]\). The process starts with a value \(E_{\partial}^{\rm void}\). The field \(E({\bf x})\) is calculated in consecutive iterations: the \(j\)-th one yields \(E^{j}({\bf x})\) and a corresponding \(E_{\partial}^{j+1}=E_{\partial}^{\rm void}+\Delta E_{\partial}^{j}\), where \(\Delta E_{\partial}^{j}[E^{j}({\bf x})]\) is the contribution of the calculated currents to the boundary that will be used in the next iterative step. Note that it does not make sense calculating \(E_{\partial}^{j+1}=E_{\partial}^{j}+\Delta E_{\partial}^{j}[E^{j}({\bf x})]\) because, in such case, the convergence is obtained when \(E_{\partial}^{j+1}\approx E_{\partial}^{j}\), which implies \(\Delta E_{\partial}^{j}[E^{j}({\bf x})]\to 0\) as the iterations grow. A field that does not modify the boundary values through the associated currents is null unless the conductivity is negligible. Therefore, we impose
\[E_{\partial}^{j+1}=E_{\partial}^{\rm void}+\Delta E_{\partial}^{j}[E^{j}({\bf x })], \tag{12}\]
so the electric field at the boundary is updated by adding the plasma response to the fixed vacuum field. This process converges when the field at the boundary is such that \(E_{\partial}\approx E_{\partial}^{\rm void}+\Delta E_{\partial}\). Observe that \(\Delta E_{\partial}\) gives a "negative" contribution as it opposes the vacuum field. Therefore, the condition can be expressed by saying that the converged field is the vacuum one minus the effect of the currents provoked by the converged field itself.
Since the power is a volume integral of \(|E({\bf x})|^{2}\), the convergence of the electric field provides a convergence of the ohmic power towards a value that can be used to define a criterion for convergence. When the relative change of ohmic power is below some value, say 1%, the iterative process stops. The convergence is not guaranteed if the conductivity values are too high because an excessive response of the plasma, i.e., too high induced currents in the first step, could overcome the boundary values giving rise to an unphysical amplification of the currents. With acceptable values of the conductivity, the iterations converge in a few steps. In such case, the electric field in the open side is reduced to negligible values if it is let to evolve as indicated in the previous paragraph. For this reason, it is justified using also a null value of the electric field in the expansion region side of the boundary.
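Schematically, the iteration can be written as the following fixed-point loop; `solve_induction` and `boundary_contribution` are placeholders for the 2D solver of Eq. 1 and the loop-sum of Eq. 5 applied to the induced plasma currents, and the 1% power criterion is the one quoted above.

```python
def iterate_boundary(E_void_boundary, solve_induction, boundary_contribution,
                     ohmic_power, sigma, tol=0.01, max_iter=20):
    """Fixed-point iteration of Eq. 12 on the boundary field (schematic only)."""
    E_boundary, P_prev = E_void_boundary, None
    for _ in range(max_iter):
        E_field = solve_induction(E_boundary, sigma)              # E(r, z) in the plasma
        E_boundary = E_void_boundary + boundary_contribution(E_field, sigma)
        P = ohmic_power(E_field, sigma)
        if P_prev is not None and abs(P - P_prev) / P_prev < tol:
            break                                                 # converged within 1%
        P_prev = P
    return E_field, E_boundary, P
```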
#### 2.2.2 Currents, power and inductance
As before, we drop the sub-index \(\theta\) for the only component of both, electric field and current density. The latter in this problem has a harmonic character, \(J(r,z;t)=\sigma E(r,z)\exp(\imath\omega t)\) with a complex conductivity \(\sigma=|\sigma|\exp(\imath\phi)\). Thus, we have a current density
\[J(r,z;t)=|\sigma(r,z)|E(r,z)e^{\imath[\omega t+\phi(r,z)]} \tag{13}\]
with a complex field amplitude \(E(r,z)=|E(r,z)|\exp(\imath\varphi)\). Consequently, the current density
\[J(r,z;t)=|\sigma(r,z)||E(r,z)|e^{\imath[\varphi(r,z)+\phi(r,z)]}e^{\imath \omega t} \tag{14}\]
is oscillating with a phase \(\omega t+\phi(r,z)+\varphi(r,z)\) and an amplitude \(|J|=|\sigma(r,z)||E(r,z)|\). Each current loop gives rise to a contribution to the boundary
\[E_{\partial,I(r,z)}\equiv-\imath fG(k)I(r,z)e^{\imath[\varphi(r,z)+\phi(r,z)]}e ^{\imath\omega t}, \tag{15}\]
where Eq. 14 indicates that the current in each loop of infinitesimal cross section \(\mathrm{d}r\mathrm{d}z\), including its spatial phase, is
\[\mathrm{d}I(r,z)=|\sigma(r,z)||E(r,z)|e^{\imath[\varphi(r,z)+\phi(r,z)]} \mathrm{d}r\mathrm{d}z. \tag{16}\]
These currents can be used to obtain the ohmic power dissipated by the plasma due to the induced field. The one-period average for a given circuit under harmonic drive is
\[\langle P\rangle=\frac{1}{2}\Re\{I^{*}V\}, \tag{17}\]
where \(V\) represents the induced voltage related with the circulation of the electric field, \(I^{*}\) is the complex conjugate of the current and \(\Re\) takes the real part of its argument. For each loop of plasma current, owing to the symmetry of the problem, we have a complex voltage \(V(r,z)=-2\pi rE(r,z)\). The complex conjugate of Eq. 16 can be used for the current.
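As a compact numerical counterpart, the period-averaged dissipation can be accumulated directly on the calculation grid; summing the loop contributions of Eqs. 16-17 is equivalent, up to the sign convention adopted for \(V\), to integrating \(\tfrac{1}{2}\Re\{\sigma\}|E|^{2}\) over the plasma volume. The sketch below assumes a uniform \((r,z)\) grid and is only illustrative.

```python
import numpy as np

def ohmic_power(E, sigma, r, dr, dz):
    """Period-averaged ohmic power, <P> = 1/2 * integral of Re{sigma} |E|^2 dV.

    E, sigma : complex arrays on the (r, z) grid, shape (len(r), nz)
    r        : 1D array of radial node positions [m]
    The axisymmetric volume element is dV = 2*pi*r*dr*dz.
    """
    dP = 0.5 * np.real(sigma) * np.abs(E)**2            # local dissipation density [W/m^3]
    return np.sum(dP * 2.0 * np.pi * r[:, None]) * dr * dz
```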
The self-induction coefficient is, by definition, \(L=\mathrm{d}\Phi/\mathrm{d}I_{\mathrm{RF}}\), where the magnetic flux is the circulation of the magnetic vector potential and \(I_{\mathrm{RF}}\) is the current in the external winding. The flux through one circular loop of radius \(r\) is
\[\Phi=\oint A\mathrm{d}l=2\pi rA=\imath\frac{r}{f}E, \tag{18}\]
where again we take advantage of the symmetry of the problem and we have substituted \(A=(\imath/\omega)E\).
We have modified the theoretical vacuum field in order to have a good approximation for the void driver, but restricted to the calculation domain, up to \(r=r_{\mathrm{p}}<R_{\mathrm{b}}\). Consequently, we can only provide the reduced inductance, \(L_{\downarrow}\). Then, our approximation to the inductance for a set of \(N_{\mathrm{b}}\) loops centered at \(z_{\mathrm{b}i}\) is
\[L_{\downarrow}=\sum_{i=1}^{N_{\mathrm{b}}}\frac{r_{\mathrm{p}}}{I_{\mathrm{ RF}}f}\Re\{\imath E(r_{\mathrm{p}},z_{\mathrm{b}i})\}, \tag{19}\]
which includes the plasma effects after the iteration process. Here we remind that, even if \(L_{\downarrow}\) is an underestimate of the inductance, we only need to know the modification of this value in comparison with the equivalent void-driver inductance, \(L_{\downarrow}^{\mathrm{void}}\), obtained when \(E^{\mathrm{void}}\) is substituted in Eq. 19.
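A direct transcription of Eq. 19 could read as below, with the converged complex field sampled at the boundary radius \(r_{\rm p}\) at the axial positions of the coil loops; consistently with Eq. 18, \(f\) is taken here as the RF frequency \(\omega/2\pi\) (1 MHz in this work). The same function applied to the void-driver field gives \(L_{\downarrow}^{\rm void}\).

```python
import numpy as np

def reduced_inductance(E_at_rp, r_p, I_RF, f):
    """Eq. 19: L_down = sum_i (r_p / (I_RF * f)) * Re{ i * E(r_p, z_b_i) }.

    E_at_rp : complex E_theta sampled at (r_p, z_b_i) for the N_b coil loops
    r_p     : boundary (plasma) radius [m];  I_RF : coil current amplitude [A]
    f       : RF frequency [Hz]
    """
    return (r_p / (I_RF * f)) * np.sum(np.real(1j * np.asarray(E_at_rp)))
```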
## 3 Experimental data
### Electron density and temperature
For the calculations that follow we take data from the S16 campaign of the SPIDER device, which was devoted to characterizing the plasmas inside the drivers
and the adjacent expansion region. Most of the information that we use below was obtained by inserting probes in these regions [17]. Different probes, often located in different drivers, have fixed radial locations (our \(r\)-coordinate). Movable probes along the axial direction were used to scan plasma parameters in our \(z\)-coordinate, see figure 2. The parameter space for these experiments is very large, including external knobs like power, gas pressure, bias voltage, filter magnetic field intensity etc. Therefore, due to the natural limitations of the experimental campaign, the available data are also limited in terms of complete \((r,z)\) maps of electron density and temperature.
We describe with Gaussian functions the electron density and temperature profiles obtained from measurements in two operating conditions: without (\(I_{\rm PG}=0\)) and with (\(I_{\rm PG}=2.6\) kA; \(B_{\rm f}\approx 4\) mT) filter magnetic field [9], where \(I_{\rm PG}\) is the current in the plasma grid circuit providing the filter field, \(B_{\rm f}\). Figure 3 shows, with symbols, the electron densities and temperatures obtained along the driver axis as a function of the distance from the back side of the driver (see figure 2a). The dashed lines are the functions that will be used in the calculations. The variation along the \(z\)-axis is acceptably well defined for both types of discharge. The plasma profiles for the \(I_{\rm PG}=2.6\) kA case have also some data along the radial direction (see figure 13, to be discussed later). Thus, if \({\cal G}_{y}(\Delta_{y},y_{c})\equiv\exp[-(y-y_{\rm c})^{2}/2\Delta_{y}^{2}]\) represents a \(\Delta_{y}\)-width, \(y_{\rm c}\)-centered Gaussian function on the variable \(y\), we represent the density and temperature spatial distributions of this case with the 2D functions
\[\left.\begin{array}{c}n_{e}(r,z)=[3.2{\cal G}_{r}(0.05,0){\cal G}_{z}(0.1,z_{ \rm ini}+0.07)+0.2]\times 10^{18}\ {\rm m}^{-3}\\ T_{e}(r,z)=17{\cal G}_{r}(0.103,0){\cal G}_{z}(0.1,z_{\rm ini}+0.07)\ {\rm eV}, \end{array}\right\} \tag{20}\]
where the centering of \({\cal G}_{z}\) imposes the peaking near the center of the driver in the axial dimension. All distances are given in meters.
Unfortunately, the \(I_{\rm PG}=0\) case does not have information on the radial profiles, but only along the \(z\)-coordinate. This case is, however, important precisely because there is no contribution from the static filter magnetic field,
Figure 3: Probe data for the electron density (squares) and temperature (triangles) taken along the driver axis from SPIDER discharges without (a) and with (b) filter magnetic field.
Figure 4: Spectroscopy data from a SPIDER discharge with varying plasma grid current (\(I_{\rm PG}\)), which controls the intensity of the filter magnetic field. (a) Time evolution of \(I_{\rm PG}\) (triangles) and H\({}_{\alpha}\) signals collected from the core (squares, see text) and radial edge (dots) plasma regions during the \(I_{\rm PG}\) ramp. (b) Ratio between core and edge intensities of H\({}_{\alpha}\) light as \(I_{\rm PG}\) is changed. (c) Ratio between the \(\beta\) and \(\gamma\) lines ("Balmer ratio") in core (squares) and edge (dots) chords.
Figure 5: Probe data of the electron density (a) and temperature (b) at different radial locations near the center of the driver (\(z=0\)) for different PG currents (symbols). The corresponding Gaussian functions used to represent the profiles are shown with lines. Their value at \(r=0\) (\(10^{18}\) m\({}^{-3}\), eV), width \(\Delta\) (cm) and possible offset \(c\) (\(10^{18}\) m\({}^{-3}\)) are indicated in the labels.
thus becoming the natural control case to study the effects of this field on the plasma conductivity. Here we have used supplementary information from other discharges at low \(I_{\rm PG}\) and from spectroscopic measurements. Figure 4 (a) shows the evolution of \(I_{\rm PG}\) during the time of the discharge, along with the variation of H\({}_{\alpha}\) in core and edge chords. The two core lines correspond to different measurements, from a photodiode that collects H\({}_{\alpha}\) light after the corresponding filter (empty squares) and from a spectrometer (filled squares), and are shown to give an indication of the consistency of the measurements. According to this figure, the H\({}_{\alpha}\) light emission increases with the \(I_{\rm PG}\) current (intensity of the filter magnetic field) in the core of the driver, but not in the edge region (\(r\approx 0.7r_{\rm p}\)), thus increasing the peaking factor (b) in agreement with the peaking of the electron density measured with probes. At the same time, the Balmer ratio (c) of the edge emission stays approximately constant during the \(I_{\rm PG}\) scan. Simultaneous constant edge H\({}_{\alpha}\) light and Balmer ratio are hard to explain if the density and temperature of the edge region are not approximately constant during the \(I_{\rm PG}\) scan. Therefore, both electron density and temperature are assumed to remain approximately unchanged (other parameters fixed, as is our case) in the outer region as the filter field changes. Figure 5 shows experimental data from plasmas at the lowest \(I_{\rm PG}\), together with the Gaussian functions used to represent the radial profiles. The error-bars in the figure were set at 30% on the density, mostly given by the uncertainty on the effective mass of the positive ions (between 1 and 2 amu in the driver); and 10% on the temperature, to give an estimate of the data dispersion. We note that only the \(I_{\rm PG}=1.2\) kA case has data near the edge radial region. The function adopted for the \(I_{\rm PG}=0\) kA case has been obtained for comparison with the other two cases shown and using the above qualitative information from spectroscopy. In consequence, now we use
\[\left.\begin{array}{c}n_{e}(r,z)=[1.1{\cal G}_{r}(0.05,0){\cal G}_{z}(0.09,z_ {\rm ini}+0.07)+0.12]\times 10^{18}\ {\rm m}^{-3}\\ T_{e}(r,z)=11{\cal G}_{r}(0.18,0){\cal G}_{z}(0.60,z_{\rm ini}+0.07)\ {\rm eV}, \end{array}\right\} \tag{21}\]
which are the functions represented for the \(I_{\rm PG}=0\) case in figure 5. Note that an almost flat \(T_{e}\) is taken in the \(z\) dimension, in agreement with the experiments (figure 3). This is a notable change between discharges without and with filter magnetic field: the latter concentrate the hot electrons near the center of the driver region. Other values are common: \(P_{\rm g}=0.34\) Pa of neutral gas pressure, 1 MHz for the RF and a one-driver nominal power \(P_{\rm RF}=50\) kW.
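For reference, a small sketch of the profile functions of Eqs. 20-21, using the standard Gaussian form \(\exp[-(y-y_{\rm c})^{2}/2\Delta_{y}^{2}]\) assumed above (distances in meters, densities in m\({}^{-3}\), temperatures in eV):

```python
import numpy as np

def G(y, width, center):
    # Gaussian shape function G_y(Delta_y, y_c)
    return np.exp(-(y - center)**2 / (2.0 * width**2))

def profiles(r, z, z_ini, with_filter=True):
    """Electron density [m^-3] and temperature [eV]:
    Eq. 20 (I_PG = 2.6 kA) or Eq. 21 (I_PG = 0)."""
    if with_filter:
        ne = (3.2 * G(r, 0.05, 0.0) * G(z, 0.10, z_ini + 0.07) + 0.2) * 1e18
        Te = 17.0 * G(r, 0.103, 0.0) * G(z, 0.10, z_ini + 0.07)
    else:
        ne = (1.1 * G(r, 0.05, 0.0) * G(z, 0.09, z_ini + 0.07) + 0.12) * 1e18
        Te = 11.0 * G(r, 0.18, 0.0) * G(z, 0.60, z_ini + 0.07)
    return ne, Te
```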
### Electrical parameters
The numerical problem to be solved is based on the knowledge of two electrical parameters: the input power and the inductance. The measured impedance in the drivers of the SPIDER device without plasma is considered to have negligible capacitive reactance, \(Z=R_{\rm d}+i\omega L_{\rm d}\) with resistive part \(R_{\rm d}\sim 2\ \Omega\) and inductive part \(L_{\rm d}\sim 10\ \mu\)H [5]. We have seen (Sec. 2.2.1) that \(L_{\rm d}\) is used to define initial boundary conditions in solving Eq. 1. Recent studies [16] with improved electrical diagnostics indicate that a representative value of the vacuum
inductance of the drivers in SPIDER is \(L_{\rm d}\approx 9.6\)\(\mu\)H. In addition, and in agreement with previous results, the variations of the driver inductance have been found to be on the order of 1% in different operating conditions of gas pressure, RF power, plasma-grid current, etc. This small variation is the main information for our purposes.
Given the fixed RF frequency, the only electrical input in the calculations is the amplitude of the current in the RF winding, \(I_{\rm RF}\). This value cannot be taken directly from the electrical measurements at the output of the RF generators. Aside from the RF power, \(I_{\rm RF}\) is a consequence of the estimate of plasma equivalent resistance and depends, consequently, on the plasma response. Recent studies [16] provide this estimate for shots at nominal power 45 kW and gas pressure \(P_{\rm g}=0.4\) Pa, similar to our conditions, in a range of plasma-grid currents from \(I_{\rm PG}=1\) to \(I_{\rm PG}=2.5\) kA. Based on this reference, and recalling the two types of plasma described in Sec. 3, we have fixed \(I_{\rm RF}/\sqrt{2}=\sqrt{P_{\rm RF}/R_{\rm d}}\) using \(R_{\rm d}=2.1\)\(\Omega\) for our case with magnetic filter field, \(I_{\rm PG}=2.6\) kA. The case without filter magnetic field is obtained by extrapolation of the trend found, \(R_{\rm d}=1.7\)\(\Omega\). Therefore, we set the following values: \(I_{\rm PG}=0\) (no filter field), \(I_{\rm RF}=242\) A; and \(I_{\rm PG}=2.6\) kA (with filter field), \(I_{\rm RF}=218\) A.
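These two amplitudes follow directly from the stated relation; a quick numerical check, using only the quoted equivalent resistances and the 50 kW nominal power:

```python
import math

P_RF = 50e3                                        # nominal one-driver power [W]
for R_eq, label in [(2.1, "I_PG = 2.6 kA"), (1.7, "I_PG = 0 (extrapolated)")]:
    I_RF = math.sqrt(2.0 * P_RF / R_eq)            # from I_RF/sqrt(2) = sqrt(P_RF/R_eq)
    print(f"{label}: I_RF = {I_RF:.1f} A")
# -> I_PG = 2.6 kA: I_RF = 218.2 A;  I_PG = 0: I_RF = 242.5 A
```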
Note that, since we are fixing \(I_{\rm RF}\) according to the estimates of the plasma equivalent resistance for a given input power (50 kW), this work is different to the reference [5] in the sense that \(I_{\rm RF}\) is an input and the absorbed power is an output. In other words, the present 2D electromagnetic calculations provide values of the driver efficiency taking \(I_{\rm RF}\) and the net power as inputs.
## 4 Plasma conductivity
A convenient way to obtain the conductivity of a medium characterized by fluid equations consists of writing the linear momentum balance for the electron fluid and solving for the velocity. Here it is assumed that the currents are due to the reaction of the electron fluid alone. The definition of the current density allows then obtaining an expression for the conductivity. This can be easily done in planar geometry for a harmonic drive when advection, diamagnetic forces and magnetic field effects are neglected. Considering a drag force as proportional to the electron fluid velocity via a collision frequency \(\nu\), one obtains the well known formula (see, for instance, [18]),
\[\sigma=\frac{n_{e}e^{2}}{m_{e}(\nu+\imath\omega)}, \tag{22}\]
where \(m_{e}\), \(e\) and \(n_{e}\) are, respectively, mass, charge magnitude and density of the electron species. The generic meaning of \(\nu\) as quantifier of the drag allows to consider an effective collisionality that takes into account processes of possibly different nature. In [19, 4], cross-section data for the local collisional processes between electrons and neutrals were used to obtain reaction rates and then collisionalities for the elastic, \(\nu_{en}^{p}\), and inelastic by ionization, \(\nu_{en}^{i}\), processes. With the addition of the electron-ion elastic collisions, \(\nu_{ei}\), we would have
\(\nu=\nu_{en}^{p}+\nu_{en}^{\rm i}+\nu_{ei}\) in formula 22. In the present work we have taken the total cross-section for collisions (elastic and inelastic) between the electrons and Hydrogen neutrals published in [20]. The reaction rate that corresponds to this cross-section, \(\sigma_{en}^{\rm tot}\), is obtained considering, as usual, a Maxwellian distribution of energies,
\[k_{en}^{\rm tot}=\sqrt{\frac{8}{\pi m_{e}T_{e}^{3}}}\int_{0}^{\infty}{\cal E}\sigma_{en}^{\rm tot}e^{-{\cal E}/T_{e}}{\rm d}{\cal E}. \tag{23}\]
With the values thus obtained in a range of temperatures (energies), \(0.01\leq T_{e}\leq 100\) eV, we have obtained the fit that will be used for convenience in the calculation of the local collisionalities involving electron-neutral collisions. Adjusting the reaction rates to a fourth order polynomial,
\[\log k_{en}^{\rm tot}=\sum_{j=0}^{4}c_{j}(\log T_{e})^{j}, \tag{24}\]
the set of coefficients is \((c_{0},c_{1},c_{2},c_{3},c_{4})=(-12.9736,0.5714,-0.4728,0.1419,-0.0153)\).
The total local collisionality between electrons and neutrals, \(\nu_{en}^{\rm tot}=n_{\rm g}k_{en}^{\rm tot}\) where \(n_{\rm g}\) is the neutral gas density, is added to the known electron-ion collision frequency to obtain a total local collisionality
\[\nu_{\rm local}=\nu_{en}^{\rm tot}+\nu_{ei}. \tag{25}\]
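Putting the last few expressions together, a minimal sketch of the local-collision conductivity could look as follows. The base of the logarithm in Eq. 24 is not stated explicitly; base-10 is assumed here, which yields rates of order \(10^{-13}\) m\({}^{3}\)/s around 1 eV, and the electron-ion collision frequency \(\nu_{ei}\) is taken as an external input.

```python
import numpy as np

QE, ME = 1.602176634e-19, 9.1093837015e-31     # electron charge [C] and mass [kg]
C_FIT = (-12.9736, 0.5714, -0.4728, 0.1419, -0.0153)   # coefficients of Eq. 24

def k_en_tot(Te_eV):
    """Total electron-neutral reaction rate [m^3/s] from the polynomial fit Eq. 24
    (log assumed base-10)."""
    logT = np.log10(Te_eV)
    return 10.0 ** sum(c * logT**j for j, c in enumerate(C_FIT))

def sigma_local(ne, Te_eV, n_gas, nu_ei, omega):
    """Eq. 22 evaluated with the local collisionality of Eq. 25."""
    nu_local = n_gas * k_en_tot(Te_eV) + nu_ei
    return ne * QE**2 / (ME * (nu_local + 1j * omega))
```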
Figure 6 shows the reaction rates Eq. 23 for the electron-neutral cross-sections \(\nu_{en}^{p}\) and \(\nu_{en}^{\rm i}\). Their sum and its 4th order polynomial fit show that Eq. 24
Figure 6: Reaction rates obtained from the cross-sections associated to several collisional processes of electrons in a Hydrogen gas: elastic collisions, \(\sigma_{en}^{p}\); ionizing collisions, \(\sigma_{en}^{\rm i}\); their sum (symbols) along with its polynomial fit Eq. 24; and a similar fit to the recommended total cross-sections published by Yoon et al. [20].
correctly accounts for the set of numerical integrals. For comparison, we also show our chosen fit to the total electron-neutral cross-sections provided in Ref. [20], see coefficients above, which includes many more collisional processes. As can be seen, reducing the collisions between electrons and Hydrogen neutrals to the sum \(\nu_{en}^{p}+\nu_{en}^{i}\) gives a rather good approximation in the range of temperatures of interest to these studies, \(\sim 10\) eV.
Many works have indicated that collisionless heating can become, depending on the plasma conditions, the main heating mechanism in inductively coupled plasmas. This type of heating, often called "stochastic heating", is produced when the electrons explore zones of considerably different electric field during one RF period. This can only happen if, in turn, other collisional processes have low enough frequencies, as can easily happen at low neutral gas pressures. In terms of conductivity, this kind of heating can be associated to an equivalent collisionality, \(\nu_{\rm st}\), which is obtained by equating the collisionless heating power to an effective collisional heating power. The calculations are involved and, strictly speaking, should consider the non-locality by evaluating the surroundings of each point. In this way, the corresponding conductivity should be expressed via integral expressions (e.g. [21, 22]). As a first approximation to the problem, here we adopt the formulation based on the same expressions for the skin depth and equivalent collisionality used in [23, 19, 4, 5]. These are based on the limits of \(\nu_{\rm st}/\omega\) developed in [24] to obtain \(\nu_{\rm st}\) as a function of the skin depth, \(\delta\), and the ratio \(\alpha\) between the transit time of a thermal electron through the skin depth and the RF period. The parameter \(\alpha=4\delta^{2}\omega^{2}/\pi v_{\rm th}^{2}\) is always \(\ll 1\) in our case, where \(v_{\rm th}\) is the thermal speed. Therefore we iterate the collisionality starting with \(\nu_{0}=\nu_{\rm eff}^{\rm local}=\nu_{en}^{\rm tot}+\nu_{ei}\) to obtain \(\delta_{0}\equiv\delta(\nu_{0})\), from here \(\alpha_{0}\equiv\alpha(\delta_{0})\) and
Figure 7: Collision frequencies obtained for a Hydrogen gas pressure \(P_{\rm g}=0.34\) Pa as a function of the electron temperature (a) and density (b). The local collisionalities (thin lines) are added to obtain an effective collisionality (thick lines) to be compared with the collisionality associated to stochastic heating (dashed lines).
then the successive \(k\)-th iteration values
\[\nu_{k}=\frac{1}{2\pi}\frac{v_{\rm th}}{\delta_{k-1}}.\]
The process converges quite rapidly, in some five iterations depending on parameters. The resulting collisionalities are shown in figure 7 with dashed lines and compared with the local values. It can be clearly appreciated that the collision frequencies associated to stochastic heating are dominant except at the lowest electron temperatures. The figures have been obtained using the gas pressure of the SPIDER experiments from which we have obtained the plasma profiles (Sec. 3). It should be kept in mind that these evaluations are considered as having the appropriate order of magnitude. More detailed calculations might be necessary for an eventual comparison with kinetic codes or with detailed experimental data.
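The fixed-point iteration just described can be written compactly as below; `skin_depth(nu)` is a hypothetical helper returning \(\delta\) for a given effective collisionality, following the expressions of the cited references, and is not specified here.

```python
import numpy as np

def stochastic_collisionality(nu_local, v_th, skin_depth, rel_tol=1e-3, max_iter=20):
    """Iterate nu_k = v_th / (2*pi*delta_{k-1}) starting from the local value."""
    nu = nu_local
    for _ in range(max_iter):
        delta = skin_depth(nu)                      # delta(nu_{k-1}), assumed helper
        nu_new = v_th / (2.0 * np.pi * delta)
        if abs(nu_new - nu) < rel_tol * nu_new:     # typically converges in ~5 iterations
            return nu_new
        nu = nu_new
    return nu
```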
It must be warned that a main-ion density close to the electron density gives ion plasma frequencies quite comparable to the RF frequency in SPIDER discharges; and even higher, depending on plasma parameters. In consequence it should be kept in mind that this work concentrates on the very relevant electron dynamics, which gives rise to the conductivity models exposed above. Further refinements might require considering ion dynamics as well.
## 5 Calculations
Eq. 1 is written in cylindrical coordinates (Appendix A) and approached numerically using the MATLAB® suite for convenience. Several checks, like comparing with known theoretical solutions in simple vacuum cases where the boundary conditions are well defined, have been done prior to production calculations. For example, we have verified the fields of an infinite solenoid, or Eq. 5 for one coil. Incidentally, the results with several formulae for the conductivity have also been compared with a FORTRAN code [6] that uses a completely different numerical scheme, and with a finite-element-method code solving Eq. 34 in the same 2D geometry. In all cases the comparisons are satisfactory.
In the next sub-sections we describe a stepwise addition of ingredients in order to investigate their need to make the calculations approach the experimental information of input power and inductance in presence of the plasma. We recall that the input power is 50 kW and the inductance without plasma is about 9.6 \(\mu\)H, decreasing very slightly in presence of the plasma, on the order of a few percent. According to Eq. 19, the reduced inductance that corresponds to the void driver is \(L_{\downarrow}^{\rm void}=6.2\)\(\mu\)H. As mentioned in 3.2, we fix the amplitude of the RF coil current to 242 A for \(I_{\rm PG}=0\) plasma profiles, and 218 A for the \(I_{\rm PG}=2.6\) kA case.
### Conductivity with local collisions and stochastic heating effects
In order to start with a minimum amount of ingredients, several calculations have been dedicated to investigating to what extent the conductivity formula 22 might explain the experimental information using only the local collisional processes, \(\nu_{\rm local}\) (Eq. 25). The corresponding conductivities are well above a thousand S/m in all the central part of the plasma, and some hundred S/m near the Faraday shield lateral wall (plasma radius). This is due to the comparatively low values of \(\nu_{\rm local}\), see figure 7. Using the \(I_{\rm PG}=0\) profiles and the corresponding operational conditions of \(I_{\rm RF}\), and neutral gas pressure and temperature, the iterative process diverges due to the excessive plasma response. Artificially halving the electron density, the ohmic power absorbed by the plasma is still above the nominal net power and the inductance decreases excessively, by around 50%. This model for the conductivity is clearly inadequate for these plasmas.
Adding the stochastic heating contribution through the equivalent collisionality, \(\nu_{\rm st}\), takes the calculations closer to the expected values according to the experimental information. Figure 7 shows that \(\nu_{\rm st}\gg\nu_{\rm local}\) in practically all relevant combinations of electron density and temperature. Therefore, we now include \(\nu_{\rm st}\) to try variations in the most uncertain parameters in search for a possible match within experimental uncertainty.
Two global parameters that are subject to experimental uncertainty are the neutral gas pressure and temperature. The gas pressure can decrease notably where the electron pressure is high. According to the measurements near the driver axis, we have an electron pressure \(n_{e0}[{\rm m}^{-3}]T_{e0}[{\rm J}]\approx 1.8\) Pa, larger than the gas pressure \(P_{\rm g}=0.34\) Pa. Therefore, a considerable depletion of neutrals is expected in the center of the driver. For simplicity, we take a homogeneous neutral gas pressure but change its value as a whole in the scans. With respect
Figure 8: Scans of the ohmic power and the reduced inductance, Eq. 19, on the values of homogeneous gas temperature (a) and pressure (b) considering only local collisions and stochastic heating in the formulation of the plasma conductivity. The plasma profiles correspond to \(I_{\rm PG}=0\) in figures 3 and 5. The expected values are shown in shaded areas with corresponding colors.
to the temperature, we likewise consider a homogeneous value compatible with the experimental indications. Ongoing studies based on emission spectroscopy indicate that the gas temperatures, under reasonable assumptions, have a thermal component around 1000 K not strongly dependent on filter magnetic field or nominal power [25].
The results of the neutral gas temperature and pressure scans are shown in figure 8. We find that, with the conductivity Eq. 22 based on \(\nu=\nu_{\rm local}+\nu_{\rm st}\), not only are the values of ohmic power and driver inductance far from the experimental ones for all values in the scans, but the trends are also opposite: when one parameter (e.g. the gas pressure) makes the power tend, although very weakly, towards the experimentally acceptable range, then the inductance moves away from it, and vice versa. The uncertainty in these two parameters, neutral gas pressure and temperature, cannot explain the mismatch between the experimental and calculated absorbed power and inductance.
Another possibility within experimental uncertainty pertains to the electron density and temperature profiles. According to emission spectroscopy data, the electron density and temperature in the outer half (in radius) should not be too different from the values used in the calculations of figure 8, respectively around \(3\times 10^{17}\) m\({}^{-3}\) and 9 eV. In the particular case of the electron density, its decrement near the plasma radius should be partly due to plasma compression despite the fact that the power remains at half the design value of 100 kW per driver. Considering the possibility that the electron density near the border falls more than estimated with the radiative model, we have played with the profiles by changing the shape functions, Eq. 21. Figure 9 shows three profiles with decreasing edge values and the corresponding calculated values of power and reduced inductance. The ordinate is shown in logarithmic scale to highlight the differences. The delivered power to the plasma remains high in two of
Figure 9: Density profiles and corresponding values of plasma absorbed power (kW) and inductance (\(\mu\)H) using parameters of \(I_{\rm PG}=0\) (no filter magnetic field) discharges. Reduced inductance without plasma: \(L_{\downarrow}=6.2\)\(\mu\)H.
the cases, and the plasma inductance decreases too much in relation with the experiments. The latter is true also for the most favourable case where we have taken the density to almost negligible values near the plasma edge.
The results shown in figures 8 and 9 indicate that the inclusion of the stochastic formulation [24] gives conductivities still too large, resulting in ohmic powers generally above the nominal input power, \(P_{\mathrm{ohm}}>P_{\mathrm{RF}}\). The inductance falls always too short of the expected values in the sense that it decreases excessively from the void-driver value. In essence, the reason is that the \(\lesssim 50\) kW of absorbed power cannot be achieved unless the 2D distributions of induced electric field and plasma current density are far from overlapping. Figure 10 shows respective contour maps of the amplitude of the induced electric field and current density for the favourable calculation of figure 9, i.e., the one yielding the smallest absorbed power. Since the conductivities grow rapidly with density, the current density peaks quite close to the RF coils where the induced electric field is still large. This provokes a large ohmic power density \(\mathbf{J}\cdot\mathbf{E}\) and large net azimuthal currents, near 200 A in this case. The effect on the boundary conditions is large, hence the decrement of the inductance.
So far, 2D electromagnetic calculations suggest that the plasma current density must peak closer to the axis of the driver. Given the fact that the active currents are outside the plasma domain, this can only happen if the conductivities near the cylindrical side of the Faraday shield are still considerably reduced. Let us assume that the RF magnetic field can have a strong impact on the plasma conductivity due to the reduced mobility of the electrons. The amplitude of this field is, obviously, larger near the RF coil, which renders this contribution a good candidate to explain the experimental results in plasmas without static magnetic field. As mentioned in the Introduction, the problem of the electrical conductivity of RF plasmas is open to research and generally
Figure 10: 2D distribution of the magnitude of the induced electric field, \(|E_{\theta}|\), and the plasma current density, \(J_{\mathrm{p}}\), for the case of \(P_{\mathrm{ohm}}=37\) kW in figure 9.
resolved numerically except in high-symmetry configurations. We have verified that reducing artificially the plasma conductivity in the regions of high \(B=|{\bf B}|\) yields acceptable values of deposited power and inductance reduction. Therefore, we have taken a simple formulation of the \(B\)-field dependent electrical conductivity [26] that plays this role.
### Inclusion of RF magnetic field effects
When the RF magnetic field is included in the conductivity, Eq. 1 becomes non-linear. The azimuthal component of the electric field is linearly related with the same component of the vector potential in our problem. In cylindrical coordinates we have two non-null components of the magnetic field, \(B_{r}=-\partial_{z}A_{\theta}\) and \(B_{z}=(1/r)\partial_{r}(rA_{\theta})\). The effect of the RF magnetic field on the plasma conductivity can be obtained following a procedure similar to the one leading to Eq. 22, except that now the Lorentz force density is fully considered for the electron fluid. An estimate of the effect of the induced magnetic field on the plasma conductivity in cylindrical coordinates is provided in [26]. Considering the limit \(\omega^{2}\ll\nu^{2}\), the expression is
\[\sigma_{\rm tus}=\sigma_{\rm dc}\left[\frac{1}{(1+\Omega_{e}^{2}/\nu^{2})^{1/2 }}-\imath\frac{\omega/\nu}{(1+\Omega_{e}^{2}/\nu^{2})^{3/2}}\right], \tag{26}\]
where \(\Omega_{e}=eB/m_{e}\) and the "direct current" conductivity is \(\sigma_{\rm dc}\equiv e^{2}n_{e}/(m_{e}\nu)\). Note that, in terms of this definition, formula 22 becomes
\[\sigma=\sigma_{\rm dc}\frac{\nu}{\nu+\imath\omega} \tag{27}\]
and does not tend to Eq. 26 when \(\Omega_{e}\to 0\), except in the mentioned limit \(\omega^{2}\ll\nu^{2}\). For our present purpose, we can use the model Eq. 26 even if it does not match Eq. 27 when the effect of the magnetic field is neglected.
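A side-by-side sketch of the two conductivity models, Eq. 27 and Eq. 26, as they would be evaluated point-wise on the grid (constants as in the earlier sketches; this is only an illustration of the formulas quoted above):

```python
import numpy as np

QE, ME = 1.602176634e-19, 9.1093837015e-31     # electron charge [C] and mass [kg]

def sigma_no_B(ne, nu, omega):
    # Eq. 27 (equivalently Eq. 22): sigma_dc * nu / (nu + i*omega)
    return (ne * QE**2 / (ME * nu)) * nu / (nu + 1j * omega)

def sigma_with_B(ne, nu, omega, B):
    # Eq. 26 (limit omega^2 << nu^2), with Omega_e = e*B/m_e
    Omega = QE * B / ME
    sigma_dc = ne * QE**2 / (ME * nu)
    x = 1.0 + (Omega / nu)**2
    return sigma_dc * (1.0 / np.sqrt(x) - 1j * (omega / nu) / x**1.5)
```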
We have mentioned that the problem with a conductivity that depends on the induced electric field is non-linear, but the iterative procedure used to obtain the converged boundary conditions also serves to update the plasma conductivity with the successively calculated electric fields (hence vector potentials and magnetic fields). Figure 11 shows the result on \(I_{\rm PG}=0\) plasmas. In (a) we can see that the electric field penetrates more inside the plasma region than in the case without magnetic field effect (compare with figure 10), but the current density peaks much closer to the driver axis (b). In consequence, (i) the absorbed power falls to \(P_{\rm ohm}=20\) kW, well below the nominal \(P_{\rm RF}=50\) kW; and (ii) the reduced inductance barely decreases, to \(L=6.16\)\(\mu\)H, a decrease of less than \(1\%\) with respect to the corresponding void-driver value. Both results are compatible with the experiments. Figure 11 (c) shows the magnitude of the RF magnetic field, where it can be appreciated that it reaches values \(\approx 10\) mT near the RF winding. This is to be compared with the \(\approx 3\) mT achievable inside the driver with the external static filter magnetic field. From the perspective of the model conductivity, the oscillating RF magnetic field alters the conductivity so as to become the dominant effect in our plasma conditions.
### Inclusion of a static magnetic field
We present here a calculation based on plasma parameters obtained with the application of the filter magnetic field, \(B_{\rm f}\). This is intended to be the main mode of operation of the sources of SPIDER. The electron density and temperature distributions are based on experimental data as before, and they are set in the calculations via Eq. 20, that is, with profiles corresponding to \(I_{\rm PG}=2.6\) kA discharges. This current is close to the maximum available in the plasma grid during the 2020 campaigns, and therefore represents about the maximum static magnetic field attainable in the present experiments. Following previous works [5], we include the static magnetic field using the same formulation for the conductivity in presence of the RF magnetic field, Eq. 26, but now substituting the cyclotron frequency by an "equivalent" value such that \(\Omega_{\rm eq}^{2}=\Omega_{e}^{2}+\Omega_{\rm f}^{2}\), where \(\Omega_{\rm f}=eB_{\rm f}/m_{e}\) is the cyclotron frequency that corresponds to the static filter magnetic field [27]. Note that this is also a simplified formulation where only the modulus, not the vectorial character, of the magnetic field is taken into account. The consideration of an order-two tensorial conductivity would require three-dimensional calculations.
The filter magnetic field is approximately perpendicular to the axis of the drivers [12] and its modulus increases from the back side towards the opening to the expansion region. The field is similar, but not identical, in all drivers due to the unavoidable proximity of the circuitry of the driving current \(I_{\rm PG}\). A representative expression for the filter magnetic field magnitude, depending on its maximum \(B_{0}\), can be taken as
\[B_{\rm st}(r,z)=\frac{B_{0}-B_{0a}}{2}\left(1-\tanh\frac{\delta_{B}-z}{\delta _{B}}\right)+B_{0a}, \tag{28}\]
where the offset \(B_{0a}\) and the width \(\delta_{B}\) are chosen to approximate the experimental data. Here we take \(B_{0a}=1.8\) mT and \(\delta_{B}=0.08\) m for a magnetic field
Figure 11: Amplitude of the azimuthal induced electric field (a), current density (b) and modulus of the RF magnetic field (c) in a calculation with \(I_{\rm PG}=0\) kA plasma profiles, see figures 3 and 5.
\(B_{\rm f}=4\) mT, which gives a maximum of 3.6 mT at the opening to the expansion region.
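A short sketch of this static-field model: the profile of Eq. 28 along \(z\), and the equivalent cyclotron frequency that replaces \(\Omega_{e}\) in Eq. 26 when the filter field is included.

```python
import numpy as np

QE, ME = 1.602176634e-19, 9.1093837015e-31

def B_filter(z, B0, B0a=1.8e-3, delta_B=0.08):
    """Eq. 28: magnitude of the static filter field [T] along z [m]."""
    return 0.5 * (B0 - B0a) * (1.0 - np.tanh((delta_B - z) / delta_B)) + B0a

def Omega_eq(B_rf, B_static):
    """Equivalent cyclotron frequency, Omega_eq^2 = Omega_e^2 + Omega_f^2 [rad/s]."""
    return (QE / ME) * np.sqrt(B_rf**2 + B_static**2)
```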
In figure 12 we show the 2D maps of the conductivity (top) and the imaginary part of the induced electric field (bottom) for two calculations based on the \(I_{\rm PG}=2.6\) kA plasma profiles. Both calculations include the effect of the RF magnetic field on the conductivity, but the static magnetic filter-field is only considered in the left panels, as indicated. Since the models used take into account only the magnitudes of the magnetic field, the effect of the filter magnetic field on the absorbed power is rather weak once the RF field has been considered. Indeed, even though the plasma conductivity increases considerably near the driver axis when the static filter-field is not considered, the induced electric field (only the dominant out-of-phase part is shown) remains practically unchanged.
## 6 Discussion
Section 5 has been devoted to justifying the main elements that yield acceptable estimates of absorbed power and decrement in driver inductance with plasma in conditions of SPIDER discharges. Using experimental data from discharges _without_ filter magnetic field, we have found that there is some essential ingredient in the plasma conductivity that should reduce it mainly where the oscillating RF magnetic field is larger. We have reproduced this behaviour with the simple formulation Eq. 26. We have also tried a formulation for the static
Figure 12: 2D distributions of plasma conductivity (top) and imaginary part of the induced electric field \(E_{\theta}\) (bottom) based on plasma data from SPIDER discharges with \(I_{\rm PG}=2.6\) kA, including the effect of the static filter magnetic field (left) or excluding it (right).
filter magnetic field (figure 12). Here we find a weak additional effect because the conductivity is reduced mainly where the induced electric field is already very low. However, from the experimental viewpoint it is clear that the filter magnetic field makes a notable difference in plasma parameters. Therefore, the static field must be considered in transport calculations. In the calculations that follow we retain all the ingredients tested so far.
The deposited power is sensitive to the electron density and temperature values around mid-radius, where the induced electric field is still intense (here the reduced conductivity associated to the RF magnetic field is responsible for the large field penetration) and the plasma conductivity starts having large values so as to provoke intense current densities. For this reason, we present the final results giving margins according to the limit profiles presented in figure 13, where the radial functions of the reference profiles for each case, without and with magnetic filter field, are shown with black discontinuous lines and the probe data are shown with symbols. Since the uncertainty of data is larger in the radial dimension, we have performed several calculations changing the form functions for the radial functions in Eqs. 20 and 21, as shown in the figure. The purpose is to inform on the sensitivity of the assumed model conductivity to the experimental uncertainty. The values of absorbed power and inductance are labelled in corresponding colours for the extreme profiles considered in this study. The power remains, in all cases, below the nominal \(P_{\mathrm{RF}}=50\) kW per driver. The inductances decrease by less than \(3\%\), also in agreement with the experimental findings [16], except for the \(32\) kW case with \(I_{\mathrm{PG}}=2.6\) kA. The obtained transfer efficiencies range, in the \(I_{\mathrm{PG}}=0\) kA case, from \(P_{\mathrm{ohm}}/P_{\mathrm{RF}}=16\)\(\mathrm{kW}/50\)\(\mathrm{kW}\approx 32\%\) to \(24\)\(\mathrm{kW}/50\)\(\mathrm{kW}\approx 48\%\); while in the \(I_{\mathrm{PG}}=2.6\) kA case, they range approximately from \(45\%\) to \(65\%\). As mentioned, however,
Figure 13: Extreme electron density (dashed lines) and temperature (dotted lines) profiles at \(z=0\) used in calculations without (a) and with (b) filter magnetic field. The corresponding calculated values of absorbed power and reduced inductance are indicated for the low (blue) and high (red) profiles. Experimental data are shown with squares for the density and triangles for the temperature. The reference profiles Eqs. 20-21 are shown with black lines.
this latter case might be overestimated because the inductance is expected to decrease somewhat less. Globally, we find power transfer efficiencies around 50%, apparently higher in the presence of the filter magnetic field.
It is worth making a comment on the power transfer efficiency. We observe that the entire problem scales with \(I_{\rm RF}\) through the boundary conditions (Eq. 5), and the calculated absorbed power changes with \(I_{\rm RF}^{2}\). Based on the equivalent resistances found in [5] we could allow for a change of, say, a 10% with respect to the values we have set in order to obtain \(I_{\rm RF}\). Let us call \(R_{\rm eq}\) our choice of equivalent resistance and let \(R_{\rm eq,1}\) be a 10% different equivalent resistance giving rise to a new current \(I_{\rm RF,1}\). For a fixed nominal power, the ratio \(I_{\rm RF}^{2}/I_{\rm RF,1}^{2}=R_{\rm eq,1}/R_{\rm eq}\), which would give the same ratio among absorbed powers. The range of calculated powers in figure 13 could then be shifted up or down on the order of the same percentage, too little to make a considerable difference with respect to the conclusions about the importance of the RF magnetic field in the conductivity of the plasmas in the SPIDER sources.
Finally, we would like to underline the fact that the calculations here presented, based on experimental data, do not pretend to prove the validity of the conductivity models, but just give a first approximation to the necessary ingredients. Indeed, the model for the stochastic conductivity is based on the evaluation of a skin depth that is not necessarily consistent with the evaluated penetration of the electric field; and the penetration of the induced electric field is a consequence of the smaller conductivity provoked by the RF magnetic field (Eq. 26), but this should be taken as a practical formulation rather than a physical model [28]. The same applies to the model for the static filter magnetic field [27], which considers the modulus while the direction of the field is obviously important. With these considerations in mind, the present models for the 2D electromagnetic calculations in the drivers of the SPIDER device give acceptable values of the ICP deposited power and current density distributions, which makes them suitable for a first formulation to be considered in a 2D transport code. Even if it is acknowledged that the conductivity models for the magnetic field effects can be questioned from the physics perspective (like the 2D restriction of the calculations) and detailed modelling is probably mandatory [29], any future formulation of the electrical conductivity should not give too different numbers from those here found because we are using plasma experimental data as input. In this respect, the present work points to RF magnetic field effects on the electrical conductivity of the plasma as a necessary ingredient in the physics of ICP in SPIDER discharges.
Further improvements in calculations for SPIDER plasma sources (and, correspondingly, for the ITER device) should also benefit from ongoing self-consistent calculations (RF inductive heating plus transport) [6, 30]. There is also ongoing research to extend the present calculations to three dimensions, including non-axisymmetric components like the Faraday shield [31]. Such 3D electromagnetic calculations of the induction process might be used to investigate more detailed models, for instance including anisotropic or non-local models of the electrical conductivity. It would be interesting to check to what extent the present 2D calculations can be considered a practical approximation of
the plasma-RF coupling in other applications.
## 7 Conclusion
This work documents the first electromagnetic simulations of the coupling between the RF field and the plasma inside the drivers of the SPIDER device. Initial boundary conditions are set so as to reproduce the boundary vacuum fields and, consequently, the experimental driver inductance without plasma. An iterative process provides the reduction of the driver inductance in presence of the plasma. Using experimental information about the electron density and temperature inside the drivers, the calculated heating power and the reduced inductance can be compared with their experimental counterparts depending on the formulation provided for the plasma conductivity. In this way, it is found that local collisional processes and, likely, non-local processes associated to stochastic heating, cannot explain the experimental conditions. The effect of the magnetic field is taken from simple analytical formulations that use only its modulus. Despite their simplicity and questionable physics, they happen to make the 2D electromagnetic calculations compatible with the experimental knowledge of input power and inductance reduction with plasma. Therefore, the RF magnetic field is posed as a necessary ingredient to understand the ICP in discharges created without and with static filter magnetic field in the SPIDER device. Aside from physical interpretations, this paper informs on what kind of change in the electrical conductivity is necessary in our plasmas so as to acceptably approach the experimental values. From this perspective, the present 2D electromagnetic calculations confirm the suitability of the conductivity models used in previous studies [5], and can be taken as a practical starting tool in 2D fluid-transport codes for this device [6, 30].
## Acknowledgement
This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work has been carried out within the framework of the ITER-RFX Neutral Beam Testing Facility (NBTF) Agreement and has received funding from the ITER Organization. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
## Appendix A: Induction equation
In general, the vortex source density of the magnetic field is
\[\nabla\times{\bf B}=\mu_{0}{\bf J}+\mu_{0}\partial_{t}(\epsilon{\bf E}),\]
where \({\bf J}={\bf J}_{\rm b}+{\bf J}_{\rm p}\) are the material current densities, respectively in the windings and in the plasma; and \(\epsilon({\bf r},t)\) is the permittivity that we take as a scalar function in order to simplify. Disregarding the electrical circuits that feed the RF winding, we associate \({\bf J}_{\rm b}\) to a filamentary current in the winding. We take the time dependence of this current as an ideal harmonic
\[I_{\rm RF}=\Re\{I_{\rm RF}e^{\imath\omega t}\} \tag{29}\]
where \(I_{\rm RF}\) is real, and the winding as a set of circular loops encircling the driver.
From the curl of the induction law, \(\partial_{t}\nabla\times{\bf B}=-\nabla\times\nabla\times{\bf E}=\nabla^{2}{ \bf E}-\nabla(\nabla\cdot{\bf E})\), one obtains a vector differential equation
\[\nabla^{2}{\bf E}-\nabla\left(\frac{\rho}{\epsilon}\right)=\mu_{0}\partial_{t }{\bf J}+\mu_{0}\partial_{tt}(\epsilon{\bf E}). \tag{30}\]
Here we have substituted the divergence of \({\bf E}\), so \(\rho\) is the charge density. We are interested in spatial scales much larger than the Debye length and assume, in addition, negligible capacitive coupling. In these conditions we can drop the term \(\nabla(\rho/\epsilon)\).
We exploit the cylindrical symmetry of the device to simplify the system of equations. We remind here the cylindrical expression for the Laplacian of a vector,
\[\nabla^{2}{\bf E}=\left(\begin{array}{c}\nabla^{2}E_{r}-\frac{2}{r^{2}} \partial_{\theta}E_{\theta}-\frac{E_{r}}{r^{2}}\\ \nabla^{2}E_{\theta}+\frac{2}{r^{2}}\partial_{\theta}E_{r}-\frac{E_{\theta}}{ r^{2}}\\ \nabla^{2}E_{z}\end{array}\right),\]
and of a scalar function
\[\nabla^{2}E_{j}=\frac{1}{r}\partial_{r}(r\partial_{r}E_{j})+\frac{1}{r^{2}} \partial_{\theta\theta}E_{j}+\partial_{zz}E_{j}.\]
Let us assume beforehand that there is indeed cylindrical symmetry. Then there are no dependencies on \(\theta\) and the three equations involved in Eq. 30 reduce to one equation for the only component of the electric field,
\[\frac{1}{r}\partial_{r}(r\partial_{r}E_{\theta})+\frac{1}{r^{2}}\partial_{ \theta\theta}E_{\theta}+\partial_{zz}E_{\theta}-\frac{E_{\theta}}{r^{2}}-\mu_ {0}\partial_{tt}(\epsilon E_{\theta})=\mu_{0}\partial_{t}J_{\theta} \tag{31}\]
A rapid dimensional analysis shows that, due to the relatively low angular frequency \(\omega\), the displacement currents can be neglected in the plasma region, \(\partial_{tt}\to 0\). Therefore, associating the plasma response \(J_{\rm p\theta}\) to the induced electric field via a scalar conductivity,
\[J_{\theta}=J_{\rm b\theta}+J_{\rm p\theta}=J_{\rm b\theta}+\sigma E_{\theta},\]
the expression 31 reduces to
\[\frac{1}{r}\partial_{r}(r\partial_{r}E_{\theta})+\partial_{zz}E_{\theta}-\frac{E_ {\theta}}{r^{2}}-\mu_{0}\partial_{t}(\sigma E_{\theta})=\mu_{0}\partial_{t}J_{ \mathrm{b}\theta}. \tag{32}\]
Recalling Eq. 29, the solution can take the form
\[E_{\theta}(r,z;t)=\tilde{E}(r,z)e^{\imath\omega t} \tag{33}\]
with a complex spatial part, \(\tilde{E}\in\mathbb{C}\), which is the component that the code must solve for. The imaginary part of \(\tilde{E}\) explains the (position dependent) phase of the azimuthal electric field with respect to the current in the RF coil. Writing the solution as \(\tilde{E}=|\tilde{E}|e^{\imath\varphi}\), we have
\[E_{\theta}(r,z;t)=|\tilde{E}(r,z)|e^{\imath[\omega t+\varphi(r,z)]}\]
and the real field amplitude is \(|\tilde{E}|\).
The differential equation 32, once discretized using a numerical difference scheme, becomes a linear system of equations on the discretized variable \(E_{\theta}\). Thus, the equation
\[\partial_{r}(r\partial_{r}E_{\theta})-\frac{E_{\theta}}{r}+r\partial_{zz}E_{ \theta}-\imath\omega r\mu_{0}\sigma E_{\theta}=\mu_{0}r\partial_{t}J_{ \mathrm{b}\theta} \tag{34}\]
becomes the linear system
\[\mathbf{A}\cdot\mathbf{E}=\mathbf{b},\]
where the coefficients \(A_{i}^{j}\) of \(\mathbf{A}\) depend on the numerical stencil for the Laplacian and the induced currents (\(\propto E_{\theta j}\)), while the elements \(b_{j}\) of \(\mathbf{b}\) are related with the boundary conditions. In our case we solve the homogeneous Eq. 34 because the calculation domain does not include the external currents \(J_{\mathrm{b}\theta}\), but the boundary conditions depend on them and impose fixed values that move from the left-hand-side to \(\mathbf{b}\).
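To make the last step concrete, the following is a minimal sketch of such an assembly for the homogeneous Eq. 34 on a uniform \((r,z)\) grid, with Dirichlet values prescribed on the four sides of the rectangle (the value of \(E_{\theta}\) on the axis being zero by symmetry). It only illustrates how the known boundary terms are moved into \(\mathbf{b}\); it is not the scheme actually used for the calculations of this paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

MU0 = 4e-7 * np.pi

def solve_induction(r, z, sigma, E_bnd, omega):
    """Solve the discretized homogeneous Eq. 34 for complex E_theta(r, z).

    r, z   : 1D node coordinates (uniform spacing assumed)
    sigma  : complex conductivity on the grid, shape (len(r), len(z))
    E_bnd  : complex array with the prescribed values on the outer nodes
             (interior entries are ignored)
    """
    nr, nz = len(r), len(z)
    hr, hz = r[1] - r[0], z[1] - z[0]
    idx = lambda i, j: (i - 1) * (nz - 2) + (j - 1)          # interior numbering
    n_unk = (nr - 2) * (nz - 2)
    A = sp.lil_matrix((n_unk, n_unk), dtype=complex)
    b = np.zeros(n_unk, dtype=complex)

    for i in range(1, nr - 1):
        for j in range(1, nz - 1):
            k = idx(i, j)
            rp, rm = 0.5 * (r[i] + r[i + 1]), 0.5 * (r[i] + r[i - 1])
            cE, cW = rp / hr**2, rm / hr**2                  # radial neighbours
            cN = cS = r[i] / hz**2                           # axial neighbours
            A[k, k] = (-(rp + rm) / hr**2 - 2 * r[i] / hz**2
                       - 1.0 / r[i] - 1j * omega * MU0 * sigma[i, j] * r[i])
            for (ii, jj, c) in [(i + 1, j, cE), (i - 1, j, cW),
                                (i, j + 1, cN), (i, j - 1, cS)]:
                if 1 <= ii <= nr - 2 and 1 <= jj <= nz - 2:
                    A[k, idx(ii, jj)] = c
                else:
                    b[k] -= c * E_bnd[ii, jj]                # known boundary value -> RHS

    E = np.array(E_bnd, dtype=complex)
    E[1:-1, 1:-1] = spla.spsolve(A.tocsr(), b).reshape(nr - 2, nz - 2)
    return E
```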
|
2302.02267 | Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis | This study examines machine learning methods used in crisis management.
Analyzing detected patterns from a crisis involves the collection and
evaluation of historical or near-real-time datasets through automated means.
This paper utilized the meta-review method to analyze scientific literature
that utilized machine learning techniques to evaluate human actions during
crises. Selected studies were condensed into themes and emerging trends using a
systematic literature evaluation of published works accessed from three
scholarly databases. Results show that data from social media was prominent in
the evaluated articles with 27% usage, followed by disaster management, health
(COVID) and crisis informatics, amongst many other themes. Additionally, the
supervised machine learning method, with an application of 69% across the
board, was predominant. The classification technique stood out among other
machine learning tasks with 41% usage. The algorithms that played major roles
were the Support Vector Machine, Neural Networks, Naive Bayes, and Random
Forest, with 23%, 16%, 15%, and 12% contributions, respectively. | Izunna Okpala, Shane Halse, Jess Kropczynski | 2023-02-05T00:14:07Z | http://arxiv.org/abs/2302.02267v1 | # IEEE Copyright Notice
###### Abstract
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Cite as:
O. Izunna, H. Shane, and K. Jess. "Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis," 2022 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 2022.
# Machine Learning Methods for Evaluating Public Crisis: Meta-Analysis
1st Izunna Okpala
_School of Information Technology_
_University of Cincinnati_
[email protected]
2nd Shane Halse
_School of Information Technology_
_University of Cincinnati_
[email protected]
3rd Jess Kropczynski
_School of Information Technology_
_University of Cincinnati_
[email protected]
###### Abstract
This study examines machine learning methods used in crisis management. Analyzing detected patterns from a crisis involves the collection and evaluation of historical or near-real-time datasets through automated means. This paper utilized the meta-review method to analyze scientific literature that utilized machine learning techniques to evaluate human actions during crises. Selected studies were condensed into themes and emerging trends using a systematic literature evaluation of published works accessed from three scholarly databases. Results show that data from social media was prominent in the evaluated articles with 27% usage, followed by disaster management, health (COVID) and crisis informatics, amongst many other themes. Additionally, the supervised machine learning method, with an application of 69% across the board, was predominant. The classification technique stood out among other machine learning tasks with 41% usage. The algorithms that played major roles were the Support Vector Machine, Neural Networks, Naive Bayes, and Random Forest, with 23%, 16%, 15%, and 12% contributions, respectively.
Crisis informatics, Disaster management, Machine Learning, Learning Algorithms, Meta Analysis
## I Introduction
Over the last decade, scientific communities such as IEEE and the Information Systems for Crisis Response and Management (ISCRAM) have contributed many studies that utilize real-time information sources to support situation awareness during large-scale events [1, 2]. The machine learning field has advanced in how it automates processes to filter large volumes of data [3]. This study explores a variety of machine learning solutions utilized in scholarly articles to understand human actions towards crises. It is also informed by studies in major academic databases that addressed disaster management, health, politics, and other forms of crisis, and that utilized data beyond social media to analyze human actions. While social media platforms are the first interactive medium for individuals who have no control over the mainstream media, local sources for tracking crises also exist. People tend to report incidents, or debate an ongoing incident, via a social network that is familiar to them, or through verifiable local reporting agencies and news media [4, 5]. Researchers have taken advantage of the mass surge of social media data [6] and local reports to carry out machine learning procedures such as prediction. The objective of this paper is to examine prevalent machine learning methods utilized by academics for managing crises, how those methods are implemented, and the sources of the data.
As noted earlier, historical and real-time data play an important role in managing crisis or, in our case, evaluating crisis [7]. In order to plan for, mitigate, and avoid future crises, it is recommended to investigate historical or existing solutions. The concept of human action is predicated on some causative factors, i.e., before someone acts, there is a cause [8]. The human environment is the main factor that drives crisis, and how humans manage environmental resources plays a huge role in crisis occurrence. Some of the concepts that aid in determining human actions from a data source are sentiments, perceptions, and/or attitudes [9]. While perception uniquely identifies opinion-based thoughts and impressions [10, 11], and can easily be distinguished from sentiments and attitudes, sentiments emphasize emotions [12, 13], and attitudes lead to actions [7]. This study seeks to demonstrate, with the help of peer-reviewed articles, the machine learning methods that are most prevalent today for evaluating or predicting crisis. The focus is on public or human actions towards crisis, with key concepts like attitudes and/or perceptions.
### _Research Questions_
1. _What are the dominant machine learning methods for managing crisis?_
2. _What are the keywords frequently used in scientific studies addressing crisis?_
## II Background
Statistical methods were among the first tools used to evaluate crises [14]. In 1932, when Patrick [15] completed a multivariate analysis on some organizations, financial stakeholders began developing models to assess the likelihood of a crisis in their organization. Since then, academics have devised a number of quantitative ways to detect and evaluate crises. Some quantitative analyses, like the t-test, have been successful in quantifying ratios [16]. Altman [17] devised a score that was used to categorize observations into good and bad. Multiple Discriminant Analysis (MDA) also played a role in some advanced analyses to compress variance between datasets [18]. Despite their widespread use in both academia and industry, these types of models have proven to be about numbers and quantities, necessitating improvements that span beyond numbers [19]. To address the constraints of these models, pattern-matching approaches have been studied extensively in the field of machine learning [20]. Several of these have demonstrated the ability of machine learning models to deal with unbalanced datasets [21], as well as pictorial and text data [22]. Even the difference between parametric and non-parametric methods for analyzing risks can be detected [23].
In addressing the research questions, the authors explored background literature related to auto-coding, pattern matching, and text analysis. Most articles tackled issues of crisis management using machine learning, data transformation/scaling, and natural language processing. Machine learning has helped IT practitioners perform tasks in a very short amount of time [24]. It appears to be a quick option for identifying disruption events, obtaining authentic feeds, or detecting periodic incidents in real time [25]. Learning such patterns has also been a major game-changer, as the approach maps patterns of interest or similarities in a given dataset [26] while showing the capacity to learn and produce accurate results [27]. The patterns that are key to understanding human actions are cues demonstrating preference. This preference can take the form of emotions, opinions, viewpoints, or specific annotations that help explain why people act the way they do [28]. When evaluating or predicting actions, key concepts to note are perceptions, attitudes, and sentiments. The term "perception" can be misconstrued to mean the same thing as attitudes or even sentiments. In simpler terms, sentiments are concerned with how people feel about an event, i.e., whether it is positive or negative [13], whereas perceptions and attitudes are concerned with people's perspective towards an event [11, 28]. Emotions can aid in understanding perception, but the distinction is that perceptions or attitudes can be formed on the basis of facts and not only emotions [29]. While attitudes are reactionary (they can produce immediate action), perceptions are internal cues suggesting future actions or attitudes [30]. Focusing on perceptions, the formal definition is the process of organizing and interpreting sensory inputs to make sense of events [11].
To detect or predict a crisis, the dataset used for analysis must be actionable. Actionable data is characterized by information that can be acted upon or that provides sufficient insight about the future [31], i.e., the data gives insight into actions that inform valuable decisions. In other words, it is more than just data kept in data warehouses: it has undergone analytical manipulation and is presented in a clear, intelligible, and frequently visually appealing manner [32]. It enables researchers to spot mistakes or potential crises, capitalize on new opportunities, improve future actions, and make faster and more informed decisions [33].
### _Machine Learning for Crisis Detection and Management_
There are multiple reasons why a crisis incident should be analyzed with machine learning. One is to prevent such occurrences in the future; another is to study, in a timely fashion, the patterns by which people engage on such occasions. The COVID-19 pandemic was the major global crisis of 2020-2021 [34], and many academics have attempted to evaluate live or historical data to understand how the surge escalated, including the metrics that supposedly caused the escalation. Other researchers have looked at why some people would desire to get vaccinated [11] and why others would not. Beyond the COVID-19 pandemic, research on crisis informatics tackles all forms of crisis, such as disaster management, 911 or 311 incidents, political movements, natural disasters, and various forms of assault such as rape, to name a few. Given that our study focuses on crisis situations, machine learning has shown promise in the scientific community, as is evident in the number of articles that apply automated means for detecting, predicting, or averting crisis events. Almost all the major supervised and unsupervised algorithms, such as neural networks, support vector machines, Naive Bayes, K-means clustering, k-nearest neighbors, decision trees, and gradient boosting, have been applied in various capacities. Several crisis events can be averted or addressed in a timely manner when trained models with high accuracy are used, especially in cases where human involvement in solving the problem is near impossible or time-consuming.
Averting dangers has been made easier with the help of several data sources and machine learning methods. The Twitter platform is one such medium from which actionable data can be derived; thousands of academics have explored it because it can be used to generate insights during a crisis [35]. Additionally, when dealing with data from varied sources, differences in data structure limit some procedures, but with the proper application of machine learning tuning or data transformation the issue can be mitigated. Other data sources are dedicated to archiving data on certain topics, such as health crises (World Health Organization data [36]), financial crises (World Bank open data [37]), imminent disasters relevant to government (data.gov [38]), or child and maternal mortality (UNICEF [39]).
Given a reliable and actionable data source, machine learning excels at computational tasks that would ordinarily take human intelligence a significant amount of time to handle. Machine learning has evolved to the point where any computing device with memory can be taught to follow specific patterns [40]. The traditional approach to learning, in which humans are trained with specialized material and tested to determine their mastery of a topic, gave rise to the concept of machine learning: a machine's ability to exhibit some type of intelligence and a readiness to learn from experience. The game of checkers is an early example of machine learning through experience [41]. Beyond checkers, machines have been trained to differentiate authentic news from fake news, and spam emails from legitimate emails, using the BERT model [42]. The value of implementing such machine-ready systems makes the prediction of crisis events far more seamless.
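As a purely illustrative sketch of how such a BERT-based classifier could be applied (not taken from [42]; the model name below is a placeholder, not a real checkpoint):

```python
from transformers import pipeline

# "example-org/bert-fake-news" is a hypothetical checkpoint name: substitute any
# BERT model fine-tuned for binary (authentic vs. fake) news classification,
# or a spam/ham classifier for the email case.
classifier = pipeline("text-classification", model="example-org/bert-fake-news")

print(classifier("Officials confirm the bridge closure was caused by flood damage."))
# The returned label and score depend entirely on the chosen checkpoint.
```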
## III Methodology
The methodology employed in this study is the meta-review technique, which identifies the feasible variables in a cluster of articles with a common interest. This approach is often applied to literature published in a particular language; in our case, the review focused on academic articles written in English and published in the ISCRAM, ScienceDirect, and IEEEXplore databases. These databases were selected because they have subsections that address crisis management using machine learning. That is not to say that other databases with an emphasis on crisis do not exist; we simply focused on three for this study. The search and selection procedure illustrated in Figure 1 shows the various building blocks and the flow of data across them. The search terms are essentially the keywords to be used in a query against each database; more detail on the search terms is given in the search technique section. The AND and OR operators were used in the query structure because they help sieve out articles that do not fall within our query parameters. Figure 2 expands the inclusion and exclusion criteria shown in Figure 1 to visualize the different components of the inclusion and exclusion mechanisms, and how the data were truncated or reduced in the process to reach a final result.
### _Search technique_
The search technique was critical in this evaluative review to ensure integrity. Papers were found using an automated search across several electronic databases; Table I shows the three scientific databases explored.
With a structured search pattern, this study aimed to retrieve only relevant articles targeted at answering our research questions, i.e., at the search stage we sieved literature from relevant sources by selecting appropriate keywords. The articles featured keywords such as machine learning, crisis, and disaster. The steps for keyword preparation are as follows:
1. Determine the search terms in relation to the research questions.
2. Ensure that alternative spellings, antonyms, and synonyms of the search term are identified as well.
3. Perform Boolean operations (AND, OR) on the search terms.
4. Identify the dates for the search query.
The following keywords and operators, which reflect our research questions, appear in the paper: (Disaster) OR (Crisis) AND (Machine Learning).
The search was conducted according to each database's preferred query structure. ISCRAM was queried with its Contextual Query Language (CQL) using the code in Table II. IEEEXplore and ScienceDirect were less complicated thanks to their web-based portals for entering the search parameters with the "AND" and "OR" operators, as shown in Table II.
Additionally, our search was restricted by publication year, i.e., between 2010 and 2021 (recent publications), as well as by category, which included peer-reviewed journal and conference papers only.
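As an illustrative sketch only (not part of the protocol used in this study), the same Boolean and date restrictions could be applied programmatically to records exported from the databases; the file name and column names below are assumptions about the export format:

```python
import csv

def matches(text: str) -> bool:
    """(Crisis OR Disaster) AND (Machine Learning), mirroring the Table II queries."""
    text = text.lower()
    return ("crisis" in text or "disaster" in text) and "machine learning" in text

# "exported_records.csv" with "title", "abstract", and "year" columns is assumed.
with open("exported_records.csv", newline="", encoding="utf-8") as f:
    hits = [row for row in csv.DictReader(f)
            if matches(row.get("title", "") + " " + row.get("abstract", ""))
            and 2010 <= int(row.get("year", "0")) <= 2021]

print(len(hits), "candidate articles")
```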
### _Data Selection_
The 55 articles reviewed were carefully selected using the step-by-step approach shown in Figure 1. The years 2010 to 2021, as stated earlier, were used primarily to reflect current trends and progression in machine learning practice with regard to crises. The initial responses from the various databases using the query format shown in Table II were 2,274 articles from ScienceDirect, 16,236 articles from IEEEXplore, and 76 articles from ISCRAM. Given this volume, the number of publications was reduced to emphasize the significance of our research area (crisis). For the ScienceDirect database we specifically chose only journal and conference articles from the computer science field, because this discipline was clearly prominent, with more contributions in the area of machine learning; this resulted in a total of 186 publications. We employed the same strategy for the IEEE search, but the advanced search parameters were different: the initial result of 16,236 from IEEEXplore was reduced to 5,770 articles by limiting the topic categories to disasters and choosing only conference and journal articles. When the computer science discipline filter was applied, we got a total of 476 articles.
TABLE II: How the databases were queried

| Index | Query type | Query |
|---|---|---|
| 1 | IEEEXplore | "Crisis AND Machine learning OR Disaster" |
| 2 | ScienceDirect | "Crisis AND Machine learning OR Disaster" |
| 3 | ISCRAM (CQL) | "all abstract machine learning crisis disaster" |
Fig. 1: Data search and selection Algorithm
TABLE I: Scope of the search.

| ID | Database | URL |
|---|---|---|
| DB1 | IEEEXplore Digital Library | https://ieeexplore.ieee.org/ |
| DB2 | ScienceDirect Library | https://www.sciencedirect.com/ |
| DB3 | ISCRAM Digital Library | http://idl.iscram.org/ |
The ISCRAM result did not require further reduction because we received only 76 articles. Reading through all of the resulting articles would have been nearly impossible and time-consuming. Therefore, rather than random sampling, we used the relevance ranking already available from the various servers to select the first 30 relevant articles from each database; relevance reflects the articles' contributions to the body of knowledge and their citation frequency. We ended up with 90 articles, balancing the contribution of the three databases. Further selection was carried out by manually reading the 90 articles, as shown in Figure 2, to ensure that each article made use of a machine learning method, matched a crisis response, and posed appropriate research questions that addressed crises. To do this, we employed the criteria below:
1. Include: based on the abstract that demonstrates a well-defined methodology
2. Include: based on a conclusion that identifies at least one of the search terms as well as a metric that shows evidence for the study
3. Exclude: based on a methodology that did not exemplify machine learning.
4. Exclude: based on research that has no strong validation or premise for validation.
Table III illustrates the components of each manuscript that were extracted. This not only demonstrates the connection to the research questions but also provides a mechanism to confine data extraction to only the fields that are relevant. The title, year, database, machine learning techniques, and research questions covered are among such fields.
### _Data Extraction_
The researchers manually coded the data in order to answer our two research questions. The labeling was done in four batches: methods, tasks, keywords, and algorithms. According to the selection criteria, all the publications considered for this study focused on crisis response and the application of machine learning technologies.
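For illustration only (the coding in this study was performed manually), the extraction form of Table III and the four coding batches could be represented as a simple record structure such as the following sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewedArticle:
    # Fields DI-1 to DI-5 from Table III
    title: str
    year: int
    database: str            # IEEEXplore, ScienceDirect, or ISCRAM
    techniques: List[str]    # machine learning approaches used in the article
    research_area: str       # e.g., Computer Science
    # Labels assigned during the four manual coding batches
    method: str = ""         # supervised / unsupervised / active learning
    task: str = ""           # classification / clustering / regression / NLP
    keywords: List[str] = field(default_factory=list)
    algorithms: List[str] = field(default_factory=list)
```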
## IV Result
From the analysis, the majority of the articles reviewed supported the classification machine learning task more than the regression or clustering tasks. Figure 2(b) shows that 41% of the reviewed papers made use of the classification task, while regression and clustering accounted for 18% and 16%, respectively. As shown in Figure 2(a), the supervised machine learning method accounted for 69% when mapped together with unsupervised and active learning, at 27% and 3%, respectively. This does not imply that one method is superior to the others; rather, it demonstrates a preference across scientific communities. The preference for supervised learning can be attributed to the availability of crisis-specific training data from **CrisisNLP** [43] and [https://crisisLex.org](https://crisisLex.org) (CrisisLexT26, SoSItalyT4, and BlackLivesMatterU/T1) [44]. Sometimes the format in which crisis data is communicated is not easily suited as a corpus for training a model; in this case, the unsupervised learning approach comes in handy [45]. Some of the studies examined took hybrid approaches in which supervised and unsupervised methods were used, depending on the availability of training data for some subset of their analysis. Active learning does not appear to be widely applied in crisis research. One factor that could have influenced this is that active learning is a form of semi-supervised learning that engages outside sources to label a dataset; as a result, semi-supervised learning was not observed at full scale. Following the distribution timeline of methods, it is clear that the use of supervised machine learning has surged over the years across the three databases, as shown in Figure 3.
The methods in Figure 2(a) were further broken down into classification, clustering, regression, NLP, and sentiment analysis in Figure 2(b). The NLP and sentiment analysis were identified separately in this figure because they can be a subclass of either classification, clustering, or regression tasks. It should be noted that sometimes they exist on their own
Fig. 3: Graph representation of the ML methods and tasks
Fig. 2: Inclusion and Exclusion flow diagram
TABLE III: Data Item collection form.

| Index | Field | Description |
|---|---|---|
| DI-1 | Title | The title of the article |
| DI-2 | Year | The publication year of the article |
| DI-3 | Database | The source of the publication |
| DI-4 | Techniques | The machine learning approach used in the article |
| DI-5 | Research | The area of interest covered by the research (Computer Science) |
(as transformers), not following the algorithmic process in the three classes mentioned above.
The consistent use of classification further strengthens the earlier statement that supervised learning is predominant among researchers. Next in line was clustering, to which k-nearest neighbors (kNN), K-Means, and some tasks using random forest and decision tree algorithms belong, as shown in Figure 5 and Figure 6. Regression tasks such as linear and logistic regression remain relevant in today's research, gaining 4% and 10%, respectively. The analysis also shows that the implementation of machine learning methods for crisis management or evaluation peaked in the years 2020 and 2021. Again, the increase in the usage of machine learning classification techniques can be linked to the availability of training datasets, the ease with which the algorithms can be implemented, and the understandability of training data labels. The complexity of the analyses can also be a contributing factor, since such analysis may take human agents a considerable amount of time. The need for automated data processing, prediction, and analysis is here to stay as an aid to humans.
The distribution of keywords in Figure 4 yielded some noteworthy results. The terms "COVID", "disaster management", and "social media" appeared most frequently in the reviewed publications, and they peaked in the year 2020. There was a clear pattern in the frequent mentions of social media and how it is used to aggregate crisis data. Our analysis also identified "social media" as the key term that received the most attention in 2019, 2020, and 2021, indicating that social media is a useful tool for gathering information about crises in the modern era. We can link the mentions of health crisis and COVID in 2021 to the recent COVID-19 pandemic and the way researchers are deploying intelligent systems to aggregate data across various media. Disaster management was also used as a keyword in several articles but was scarce in 2021. Mentions of text classification, domain adaptation, and topic modeling were noted as well; these relate to sub-methods for analyzing crises. Topic modeling, which was scarcely present but is important in the machine learning community, was notably used in some research. This connects to our earlier statement that an NLP task can act as a transformer in an unsupervised environment (e.g., topic modeling makes use of unsupervised methods).
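As a minimal illustrative sketch of the kind of unsupervised topic modeling referred to above (not reproduced from any of the reviewed studies; the example documents are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented example documents; real studies would use crisis tweets or reports.
docs = [
    "flood warning issued for the coastal district, residents evacuated",
    "hospital reports a surge in covid admissions and vaccine demand",
    "volunteers coordinate relief supplies after the earthquake",
    "wildfire smoke forces school closures across the county",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    # Print the most heavily weighted terms for each discovered topic
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[-4:]])
```

Because no labels are required, this style of analysis suits crisis data whose format is not easily turned into a training corpus.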
Another significant finding from this study is that breaking the methods down to their underlying algorithms, or to the terminology that indicates how they function, further supports our earlier conclusions. The SVM algorithm appears to dominate the reviewed articles, followed by neural networks, which again strengthens our earlier statement that the classification task and the supervised method were predominant. The strength of SVM lies in structural risk minimization, efficient memory management during training, and its ability to handle the high-dimensional spaces found in these datasets. Memory management is an issue for deep learning algorithms, which can be more accurate than SVM depending on the volume of data, but the benefits of SVM seem to appeal more to researchers in the crisis domain. Neural networks were also prominent among the articles studied; among their variants present in crisis research, convolutional neural networks and long short-term memory (LSTM) networks had the most traction. Naive Bayes, random forest, decision trees, and logistic regression also show promise, as identified in Figure 6.
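For illustration, a minimal scikit-learn sketch of the dominant pattern identified here, an SVM text classifier for crisis-related messages, is given below; the toy texts and labels are invented and not drawn from the reviewed datasets:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Invented toy data: 1 = crisis-related report, 0 = unrelated chatter.
texts = [
    "bridge collapsed near downtown, several people injured",
    "wildfire spreading fast, evacuation ordered for the valley",
    "great game last night, what a comeback",
    "new cafe opened on main street this weekend",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a linear SVM, the setup most often seen in the reviewed work.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

print(model.predict(["storm surge flooding reported along the coast"]))
```

The availability of such pre-packaged pipelines is one reason classification dominates the reviewed literature.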
## V Discussion

This study examined the intersection of crisis management and machine learning. Crises can emerge in different forms, e.g., health crises, natural disasters, economic crises, food crises, and political events, among others. These variations introduce complexity in developing automated methods to manage crises as well as to track emerging events to allow for fast decision-making. The methodology was explicit in describing the meta-review process and how we collected and evaluated the different publications that addressed machine learning for crisis management. The treatment of machine learning in the literature review and results sections demonstrates how many ML techniques and algorithms can be used to manage crises. It is evident from our results that all the algorithms had fair representation in terms of the value they added. The machine learning field is continuously evolving as it tries to help in the analysis of large volumes of data, easing the tasks of data scientists in an automated process and changing the way data extraction and interpretation work.
All the reviewed articles produced results based on the structure of the problem they addressed, the type of crisis tackled, the source of the data, and the volume of the data. The percentage distribution shown in Figure 2(a) describes the preferred machine learning tasks. NLP tasks such as sentiment analysis, which may be classified as supervised or unsupervised learning, were highlighted as critical in automating crisis management. The classification, clustering, and regression tasks were highly preferred, with classification topping the list due to the availability of training datasets and the easy implementation of the algorithms through pre-packaged libraries in popular programming languages like Python, R, and Java. Furthermore, there is a general recognition of a reproducibility crisis in science right now: the increasing number of study findings that are not replicated when a different group of researchers performs the same experiment. Because machine learning techniques make analyses easy to perform, this problem has ramifications in a variety of sectors where machine learning is utilized to make discoveries.
### _Limitations_
This study did not review literature from all the databases of scientific studies; instead, we culled articles from three databases that met the inclusion and exclusion criteria. As a result, our reviewed studies reflect research on public actions, crises, and machine learning from three venues. The results of this study may have been influenced by the search strategy employed in the paper, the researcher's biases, the unequal distribution of published journals or conference proceedings, and data extraction misrepresentation.
Both automated and manual search techniques were used in this study. Thousands of records were returned in the first iteration, as seen in Figure 2. The content of the research papers was used to inform the manual search procedure after the initial search. The candidate studies were chosen and analyzed by three researchers. It is possible that relevant studies were missed in the search results; as a result, the scope of this review may be constrained. Consequently, the validity of this study is limited to the 55 key papers included in this evaluative review.
## VI Conclusion
Our findings show that a significant proportion of articles (41%) used classification over regression or clustering, owing to the availability of training data/corpora and pre-packaged machine learning libraries. 69% of the articles made use of the supervised machine learning method (RQ1), showing a preference across scientific communities in dealing with crises. In comparison, 27% of the studies made use of unsupervised learning techniques, while the remaining 4% used active learning methods. To address RQ2, our analysis revealed the machine learning methods and prevalent keywords used in the reviewed articles. It suggests that the SVM, neural network, Naive Bayes, and random forest algorithms, amongst others, are popular among researchers in the crisis management domain (RQ2). Also on RQ2, the keyword crisis informatics garnered great interest in the scientific literature explored. Keywords such as health and disaster management rose through 2020, and social media received the most attention in 2019, 2020, and 2021, implying that social media is beneficial for gathering crisis-related data in modern times.
|
2303.15432 | Visualizing the atomic-scale origin of metallic behavior in Kondo
insulators | A Kondo lattice is often electrically insulating at low temperatures.
However, several recent experiments have detected signatures of bulk
metallicity within this Kondo insulating phase. Here we visualize the
real-space charge landscape within a Kondo lattice with atomic resolution using
a scanning tunneling microscope. We discover nanometer-scale puddles of
metallic conduction electrons centered around uranium-site substitutions in the
heavy-fermion compound URu$_2$Si$_2$, and around samarium-site defects in the
topological Kondo insulator SmB$_6$. These defects disturb the Kondo screening
cloud, leaving behind a fingerprint of the metallic parent state. Our results
suggest that the mysterious 3D quantum oscillations measured in SmB$_6$ could
arise from these Kondo-lattice defects, although we cannot rule out other
explanations. Our imaging technique could enable the development of
atomic-scale charge sensors using heavy-fermion probes. | Harris Pirie, Eric Mascot, Christian E. Matt, Yu Liu, Pengcheng Chen, M. H. Hamidian, Shanta Saha, Xiangfeng Wang, Johnpierre Paglione, Graeme Luke, David Goldhaber-Gordon, Cyrus F. Hirjibehedin, J. C. SΓ©amus Davis, Dirk K. Morr, Jennifer E. Hoffman | 2023-03-27T17:55:20Z | http://arxiv.org/abs/2303.15432v1 | # Visualizing the atomic-scale origin of metallic behavior in Kondo insulators
###### Abstract
A Kondo lattice is often electrically insulating at low temperatures. However, several recent experiments have detected signatures of bulk metallicity within this Kondo insulating phase. Here we visualize the real-space charge landscape within a Kondo lattice with atomic resolution using a scanning tunneling microscope. We discover nanometer-scale puddles of metallic conduction electrons centered around uranium-site substitutions in the heavy-fermion compound \(\mathrm{URu}_{2}\mathrm{Si}_{2}\), and around samarium-site defects in the topological Kondo insulator \(\mathrm{SmB}_{6}\). These defects disturb the Kondo screening cloud, leaving behind a fingerprint of the metallic parent state. Our results suggest that the mysterious 3D quantum oscillations measured in \(\mathrm{SmB}_{6}\) could arise from these Kondo-lattice defects, although we cannot rule out other explanations. Our imaging technique could enable the development of atomic-scale charge sensors using heavy-fermion probes.
doped samples [26; 27; 28]. More recently, the existence of local metallic puddles around Gd dopants in Sm\({}_{1-x}\)Gd\({}_{x}\)B\({}_{6}\) was inferred from electron spin-resonance measurements [29]. Meanwhile, an increased concentration of Sm vacancies in Sm\({}_{1-x}\)B\({}_{6}\) was shown to globally inhibit the development of the hybridization gap [30], eventually leading to bulk conduction [12; 31]. All of these findings suggest that Sm-site defects manifest as Kondo holes in SmB\({}_{6}\), yet their key signature--the accompanying charge oscillations relating to the parent metallic Fermi surface [22]--remain undetected by any microscopic probe.
**3** Directly imaging the metallic puddles around Kondo holes is difficult, because the inherent screening strongly renormalizes the bare charge distribution. However, there are a few promising approaches [32; 33; 34]. The most common is to decorate the tip of a Kelvin probe force microscope with a single atom or molecule [35; 36]. This technique was used to image the charge variations within an adsorbed molecule [37]. However, it becomes inaccurate for small tip-sample separations because of the influence of short-range forces [38; 39], complicating further
Figure 1: **Expected disruption of the screening cloud around Kondo holes.** (**A**) In a uniform Kondo lattice, magnetic moments at each site (gray arrows) are coherently screened by itinerant conduction electrons (blue cloud) to form a spinless ground state of heavy fermions (orange line), characterized by the wavevector \(k_{\rm F}^{\rm b}\). (**B**) If one moment is removed to create a Kondo hole, the conduction electrons previously screening it can redistribute themselves. (**C**) The redistributed screening cloud causes oscillations of the local conduction electron density \(n_{\rm c}({\bf r})\), interaction strength \(\nu({\bf r})\), and magnetic susceptibility \(\chi_{\rm m}({\bf r})\) at the conduction-band wavevector \(k_{\rm F}^{\rm c}\), as shown schematically. (**D**) In a uniform Kondo lattice, the Kondo resonance creates a peak-dip feature in the calculated \({\rm d}I/{\rm d}V\), caused by the quantum interference between tunneling into the conduction band and the \(f\)-electron states with respective amplitudes \(t_{\rm c}\) and \(t_{f}\). The energy position of the peak (black triangles) shifts linearly according to the local conduction-electron density \(n_{\rm c}\). (**E**) The calculated rectification \(R(V)=|I(+V)/I(-V)|\) acquires a strong peak because \({\rm d}I/{\rm d}V\) is asymmetric around the Fermi level \(E_{F}\) (which occurs at \(V=0\)). The \(R(V)\) peak amplitude depends on the \({\rm d}I/{\rm d}V\) peak energy. These changes are almost linear over the small range of local doping expected around a Kondo hole (inset). (**F**) The calculated oscillations in \({\rm d}I({\bf r},V)/{\rm d}V\) at the Fermi level around a Kondo hole match the hybridized Fermi surface (\(k_{\rm F}^{\rm b}\), orange line in inset). (**G**) In contrast, the calculated \(n_{\rm c}({\bf r})\) varies according to the circular wavevector of the unhybridized Fermi surface (\(k_{\rm F}^{\rm b}\), blue line in inset) as it mainly reflects the disturbance to the screening cloud. (**H**) Calculated \(R({\bf r},V)\) is dominated by unhybridized electrons for biases within the hybridization gap. The calculations in (D)-(H) are based on a Kondo-Heisenberg model with nearest-neighbor hopping strength \(t\), Kondo coupling \(J=2t\), antiferromagnetic exchange \(I=0.002t\), and tunneling amplitudes \(t_{f}/t_{\rm c}=-0.025\). In (D) and (E), the hybridization strength is fixed at \(\nu=0.1t\) and the antiferromagnetic correlation strength is fixed at \(\chi=0.0003t\). The Fermi wavelength is \(\lambda_{\rm F}^{\rm c}=8a_{0}\) in (F)-(H), where \(a_{0}\) is the lattice spacing.
improvements to its spatial resolution [40]. Meanwhile, a scanning tunneling microscope (STM) routinely achieves the sub-nanometer spatial resolution, cryogenic temperatures, and sub-meV energy resolution required to access atomic charge distributions, but existing methods to extract the electrostatic potential from the STM vacuum decay length contain significant artifacts [41]. Consequently, simultaneously achieving the high charge precision and high spatial resolution required to measure the charge environment around a Kondo hole is not possible using existing techniques.
### Measuring local charge density in a Kondo lattice
To visualize the conduction-electron density \(n_{c}(\mathbf{r})\) in a Kondo lattice (and hence local charge \(-en_{c}(\mathbf{r})\), where \(-e\) is the electron charge), we first show theoretically that \(n_{c}(\mathbf{r})\) determines the energy position of the Kondo resonance \(\tilde{\varepsilon}_{f}(\mathbf{r})\), which forms near the Fermi level as the magnetic \(f\) moments are screened by conduction electrons. Then, we establish an experimental metric capable of detecting the sub-meV variations in \(\tilde{\varepsilon}_{f}(\mathbf{r})\) around a Kondo hole. Our technique takes advantage of how the many-body Kondo resonance responds to local doping. In the Abrikosov fermion representation for local moments, \(\tilde{\varepsilon}_{f}\) is the Lagrange multiplier that enforces uniform \(f\)-electron density, typically \(n_{f}=1\) at each site. As additional charge carriers \(\Delta n_{c}\) enter a uniform Kondo lattice, the hybridized Fermi surface reshapes to accommodate them, leading to a corresponding change in \(\tilde{\varepsilon}_{f}\) in order to maintain \(n_{f}=1\) (see black triangles in Fig. 1D, and Fig. S2E). The magnitude and direction of the shift in \(\tilde{\varepsilon}_{f}\) depend on the details of the band structure. But the relationship between \(n_{c}\) and \(\tilde{\varepsilon}_{f}\) is linear over a wide range of band parameters and charge doping (see Fig. S2), implying that the charge density at position \(\mathbf{r}\), can usually
Figure 2: **Thorium dopants induce Kondo-hole behavior in URu\({}_{2}\)Si\({}_{2}\).** (**A**) Schematic band structure of URu\({}_{2}\)Si\({}_{2}\) showing the onset of heavy fermion bands (gray solid lines) at temperatures below \(T_{\rm o}=17.5\) K, as itinerant conduction electrons (blue line) hybridize with a renormalized \(5f\) level (gray dashed line), reducing the Fermi wavevector from \(k_{\rm F}^{\rm c}\) to \(k_{\rm F}^{\rm c}\). (**B**) Experimental measurement of an asymmetric Fano lineshape in the tunneling conductance at temperatures below \(T_{\rm o}\) on the U termination (gray curve). This feature shifts towards the Fermi level near a thorium dopant (black triangles), consistent with an expected change in local charge density. (**C**) For a fixed bias, the \(R(V)\) peak amplitude (black triangle) is highly sensitive to the d\(I\)/d\(V\) peak position. The spectra in (B) and (C) are averaged over the 18 well-isolated thorium dopants marked in (D). (**D**) The measured \(R(\mathbf{r},V)\) exhibits clear oscillations that manifest as a ring in (**E**) the 4-fold-symmetrized Fourier transform. (**F**) These oscillations match the high-temperature Fermi wavevector of \(2k_{\rm F}^{\rm c}\approx 0.3\) (\(2\pi/a\)), both above and below \(T_{\rm o}\). (**G**) In contrast, a conventional d\(I\)/d\(V\) measurement couples to the temperature-dependent Fermi surface, which changes dramatically from 18.6 K to 5.9 K. For clarity, the 18.6 K data have been scaled in (F) and offset in (G).
be inferred by measuring \(\tilde{\varepsilon}_{f}(\mathbf{r})\). In fact, the linear dependence of \(\tilde{\varepsilon}_{f}(\mathbf{r})\) on \(n_{c}(\mathbf{r})\) was recently verified experimentally by micron-scale angle-resolved photoemission (ARPES) measurements in Eu-doped SmB\({}_{6}\)[42].
**6** In STM measurements, the Kondo resonance normally appears as a peak-dip feature in the tunneling conductance \(\mathrm{d}I/\mathrm{d}V\)[43] (where \(I\) is the sample-to-tip tunneling current at applied sample bias \(V\)), because of the presence of multiple tunneling channels [44; 45] (see calculation in Fig. 1D). In simple cases, \(\tilde{\varepsilon}_{f}\) can be estimated by fitting \(\mathrm{d}I/\mathrm{d}V\) to a Fano-like model [46; 47]. However, the exact value of \(\tilde{\varepsilon}_{f}\) depends on the model used, so this approach is not immediately suitable for detecting the small, sub-meV energy shifts in \(\tilde{\varepsilon}_{f}(\mathbf{r})\) expected around a Kondo hole. Instead, we track the ratio of forward-to-backward tunneling current, i.e. the local rectification \(R(\mathbf{r},V)=|I(\mathbf{r},+V)/I(\mathbf{r},-V)|\). This ratio is insensitive to STM setup artifacts, and it was previously used to track charge inhomogeneity from the spectral weight transfer at high biases in hole-doped cuprates [48]. Here, we focus on low biases, typically \(V\lesssim 10\) mV, where the small shifts in \(\tilde{\varepsilon}_{f}(\mathbf{r})\) generate large variations in \(R(\mathbf{r},V)\) owing to the energy asymmetry of \(\mathrm{d}I/\mathrm{d}V\) about the Fermi level at \(V=0\) (see calculations in Fig. 1, D and E, and Fig. S2). To demonstrate this effect locally, we self-consistently calculated \(\mathrm{d}I(\mathbf{r},V)/\mathrm{d}V\), \(n_{c}(\mathbf{r})\), and \(R(\mathbf{r},V)\) around a Kondo hole in a metallic Kondo lattice, as shown in Fig. 1, F to H. The calculated \(\mathrm{d}I(\mathbf{r},V)/\mathrm{d}V\) at \(V=0\) tracks the local Fermi-level density of states, so it reveals the hybridized Fermi surface of heavy fermions with a wavevector \(2k_{\mathrm{F}}^{\mathrm{b}}\). In contrast, both \(n_{c}(\mathbf{r})\) and \(R(\mathbf{r},V)\) are dominated by static oscillations at the unhybridized wavevector \(2k_{\mathrm{F}}^{\mathrm{c}}\), associated with the Friedel-like redistribution of the Kondo screening cloud. The correlation between \(n_{c}(\mathbf{r})\) and \(R(\mathbf{r},V)\) establishes \(R(\mathbf{r},V)\) as a qualitative probe of local charge, except at very short distances from a Kondo hole (\(|\mathbf{r}|\sim a\)) likely because \(n_{f}=1\) is not enforced at that site.
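As a purely illustrative sketch (not the analysis code used here), the rectification map can be formed directly from a measured spectroscopic grid; the array layout and placeholder values below are assumptions:

```python
import numpy as np

# Assumed data layout: I_map[ix, iy, iv] is the tunneling current measured at
# position (ix, iy) for the bias values in V (volts), spanning a symmetric range.
nx, ny, nv = 64, 64, 41
V = np.linspace(-0.010, 0.010, nv)
I_map = 1e-12 * (np.random.rand(nx, ny, nv) + 0.5)   # placeholder for measured data

def rectification_map(I_map, V, bias):
    """R(r, V) = |I(r, +V) / I(r, -V)| evaluated at the bias point closest to `bias`."""
    i_plus = np.argmin(np.abs(V - bias))
    i_minus = np.argmin(np.abs(V + bias))
    return np.abs(I_map[..., i_plus] / I_map[..., i_minus])

R = rectification_map(I_map, V, bias=0.005)   # e.g. R(r, V) at 5 mV
```

Because the same setup factors enter the numerator and denominator, this ratio is naturally insensitive to tip-height and setpoint artifacts, as noted above.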
Figure 3: **Kondo holes nucleate metallic puddles in SmB\({}_{6}\).** (**A**) Schematic band structure of SmB\({}_{6}\) showing the hybridization between conduction electrons (blue dashed line) and localized \(4f\) moments (gray dashed line), which leads to an inverted band structure (gray solid line) hosting emergent heavy Dirac surface states with a reduced Fermi wavevector (orange). (**B**) STM topography of the \((2\times 1)\)-reconstructed Sm surface of lightly Fe-doped SmB\({}_{6}\). Both the Fe dopant and Sm vacancy in this image are expected to act as Kondo holes because they each displace a \(4f\) moment. (**C-D**) Near the Fe dopant, the measured \(\mathrm{d}I/\mathrm{d}V\) peak changes energy position (black triangles), leading to large variations in \(R(\mathbf{r},V)\) peak amplitude. The spectra in (C) and (D) have been offset for clarity. (**E**) \(R(\mathbf{r},V)\) in the same area as shown in (B) contains clear oscillations around the two impurities. (**F**) Linecut of \(R(\mathbf{r},V)\) along the white dashed line in (B). (**G**) \(R(\mathbf{r},V)\) oscillations appear as a sharp ring in the 2-fold-symmetrized Fourier transform (taken from a larger \(65\times 80\)-nm\({}^{2}\) area for enhanced \(\mathbf{q}\) resolution), which matches the unhybridized \(5d\) Fermi surface inferred from ARPES experiments [54] (dashed line). The surface reconstruction creates a sharp peak in \(R(\mathbf{r},V)\) at \(Q_{\mathrm{Bragg}}=(0,\pi/a)\).
### Kondo holes in URu\({}_{2}\)Si\({}_{2}\)
7To test our technique, we first studied the Kondo metal URu\({}_{2}\)Si\({}_{2}\) with 1% thorium dopants, which are known to induce Kondo-hole behavior [49; 24]. Previous STM measurements mapped a metal-like Fermi surface in URu\({}_{2}\)Si\({}_{2}\) for temperatures above \(T_{\rm o}=17.5\) K, consisting of a single conduction band with wavevector \(k_{\rm F}^{\rm c}\approx 0.3\ \pi/a\), where \(a\) is the lattice constant ([46], see Fig. 2A). The onset of coherent heavy fermion bands below \(T_{\rm o}\)[50] is accompanied by the appearance of a peak-dip feature in d\(I\)/d\(V\), i.e. the Kondo-Fano resonance (see Fig. 2B). Close to a thorium dopant, this feature shifts upwards in energy, towards the Fermi level. This energy shift--and even the barely perceptible shifts 2 nm away from the dopant--are easily detected in the amplitude of \(R({\bf r},V)\) (see Fig. 2C). For biases within the hybridization gap \(|V|<\Delta/e\approx 5\) mV (where \(\Delta\) is the gap magnitude), \(R({\bf r},V)\) displays widespread spatial oscillations emanating from thorium dopants, as shown in Fig. 2, D to F. Their wavevector of \(0.29\pm 0.01\ (2\pi/a)\) agrees perfectly with the hybridization oscillations previously measured around Kondo holes in this compound [24]. It matches the URu\({}_{2}\)Si\({}_{2}\) parent metallic Fermi surface detected above \(T_{\rm o}\) from our measured quasiparticle interference patterns in d\(I({\bf r},V)\)/d\(V\) at \(V=0\), but it is distinct from the heavy bands that we measured below \(T_{\rm o}\) (see Fig. 2G). As a final check, we independently extracted \(\tilde{\varepsilon}_{f}({\bf r})\) by fitting d\(I({\bf r},V)\)/d\(V\) curves to a Fano model (see Fig. S3). The excellent agreement between \(\tilde{\varepsilon}_{f}({\bf r})\) and \(R({\bf r},V)\) corroborates the existence of charge oscillations at \(2k_{\rm F}^{\rm c}\) in URu\({}_{2}\)Si\({}_{2}\), indicating that some electrons retain their itinerant character around Kondo holes, even below \(T_{\rm o}\).
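For illustration, extracting \(\tilde{\varepsilon}_{f}\) from a single spectrum with a Fano-type fit could look like the sketch below; the functional form is the standard Fano lineshape, and the synthetic data are placeholders rather than the fits of Fig. S3:

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(V, eps_f, gamma, q, a, b):
    """Fano lineshape a*(q + e)^2 / (1 + e^2) + b with e = (V - eps_f) / gamma (V in volts)."""
    e = (V - eps_f) / gamma
    return a * (q + e) ** 2 / (1.0 + e ** 2) + b

# Synthetic stand-in for one measured dI/dV spectrum at a single pixel.
V = np.linspace(-0.02, 0.02, 201)
didv = fano(V, 1.0e-3, 4.0e-3, 1.2, 1.0, 0.5) + 0.02 * np.random.randn(V.size)

popt, _ = curve_fit(fano, V, didv, p0=[0.0, 5.0e-3, 1.0, 1.0, 0.5])
eps_f_fit = popt[0]   # local resonance energy; repeating pixel by pixel yields eps_f(r)
```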
### Metallic puddles in SmB\({}_{6}\)
8In our Kondo insulating SmB\({}_{6}\) samples, any atomic defect that replaces a Sm atom to alter the \(4f\) moment could generate metallic puddles like those seen in URu\({}_{2}\)Si\({}_{2}\). We searched for these puddles in flux-grown samples lightly doped with Fe, which contain two clear Sm-site defects: Sm vacancies and Fe substitutions (see Fig. 3B). We focused on the \((2\times 1)\) Sm termination, as its charge environment most closely represents that of the bulk [51]. As in URu\({}_{2}\)Si\({}_{2}\), we noticed that the d\(I\)/d\(V\) peak attributed to the Kondo resonance changes its energy position near candidate Kondo holes (Fig. 3C), strongly impacting the \(R({\bf r},V)\) peak amplitude (Fig. 3D). Similar shifts in d\(I\)/d\(V\) peak position were previously linked to the buildup of charge around boron clusters on the Sm \((1\times 1)\) termination [52]. For biases within the hybridization gap \(|V|<\Delta/e\approx 10\) mV, \(R({\bf r},V)\) reveals prominent oscillations around Sm-site defects (see Fig. 3, E and F). These oscillations create a sharp ellipse in the Fourier transform of \(R({\bf r},V)\), as shown in Fig. 3G. The wavevectors of the \(R({\bf q},V)\) ellipse are larger than those of the surface state detected by quasiparticle interference imaging [53] and they do not disperse for biases within the hybridization gap, indicating a different origin (see Fig. S4). On the other hand, the size and shape of the
Figure 4: **Kondo holes backscatter heavy Dirac fermions.** (**A**) Topography of an SmB\({}_{6}\) region that contains 15 well-isolated Kondo holes (position indicated by red and green triangles) on several \((2\times 1)\)- or \((1\times 2)\)-reconstructed domains (dotted lines). (**B**) For energies within the Kondo-insulating gap, the Fourier-transformed d\(I\)/d\(V\) along \(q_{y}\) (perpendicular to Sm rows) contains a linearly dispersing signal (black dashed line) corresponding to quasiparticle interference from backscattered heavy Dirac fermions. The Fourier transform from the \((1\times 2)\) domains was rotated by 90\({}^{\circ}\) before being averaged with that from the \((2\times 1)\) domains. (**C**) The intensity of backscattering from topological states, calculated from Fourier-filtering d\(I\)/d\(V\) at the \(y\) component of the backscattering wavevector \(q_{y}=2k_{y}^{\rm ss}\), is strongly peaked around each Kondo hole. This map is computed only for ordered patches of the sample, as marked in (A), and excludes step edges.
ellipse matches the unhybridized \(5d\) band found by extrapolating ARPES data [54] to the Fermi level (i.e. it matches the SmB\({}_{6}\) metallic parent state), after accounting for band folding on the (\(2\times 1\)) surface (see Fig. 3G and Fig. S5). Our observation of this \(5d\) wavevector within the Kondo insulating gap is direct evidence of atomic-scale metallicity around Kondo holes. This metallicity is supported by the large residual d\(I\)/d\(V\) at \(V=0\) mV that we measured around Kondo holes (green curve in Fig. 3C), indicating a sizable Fermi-level density of states even when the metallic surface states are suppressed [55]. We confirmed this discovery by checking for \(R({\bf r},V)\) oscillations around a third type of Kondo hole, Gd dopants, as detailed in Fig. S6.
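A minimal sketch of how a symmetrized Fourier transform of an R map could be computed is given below (assuming a square, evenly sampled grid; this is illustrative, not the exact pipeline behind Fig. 3G):

```python
import numpy as np

def symmetrized_fft(R_map, fold=2):
    """Magnitude of the 2D FFT of a square R(r,V) map, n-fold rotationally
    symmetrized (fold = 2 or 4), with the mean removed to suppress the q = 0 peak."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(R_map - R_map.mean())))
    rotations = [np.rot90(F, k * (4 // fold)) for k in range(fold)]
    return np.mean(rotations, axis=0)

# Placeholder square map; in practice R_map comes from the measured R(r, V) grid.
R_map = np.random.rand(256, 256)
F_sym = symmetrized_fft(R_map, fold=2)
```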
### Magnetic fluctuations at Kondo holes in SmB\({}_{6}\)
**9** Our \(R({\bf r},V)\) maps show the real-space structure of the metallic puddles around Kondo holes in SmB\({}_{6}\). For these puddles to contribute to the measured de Haas-van Alphen oscillations in magnetization, they must have a finite magnetic susceptibility. Several Sm-site defects are already suspected to be locally magnetic based on their impact on bulk susceptibility [26, 27, 56, 7] and their influence on the topologically emergent surface states [53, 55]. In general, topological surface states can provide a test of local magnetism because they are protected against backscattering from non-magnetic defects, but not from magnetic defects that locally break time-reversal symmetry [57]. This additional magnetic backscattering was previously imaged around Fe dopants in two Bi-based topological insulators [58, 59]. Here we visualize the intensity of magnetic fluctuations at SmB\({}_{6}\) by identifying spatial regions where its surface states backscatter. For biases within the hybridization gap, we measured large-area d\(I({\bf r},V)\)/d\(V\) maps that contain clear quasiparticle interference patterns at the backscattering wavevector \({\bf q}\equiv{\bf k}_{\rm f}-{\bf k}_{\rm i}=2{\bf k}^{\rm ss}\) (Fig. 4B), consistent with our previous report [53]. We determined the spatial origin of this signal by Fourier-filtering d\(I({\bf r},V)\)/d\(V\) at the wavevector \(2{\bf k}^{\rm ss}\) to create an image of the local backscattering strength (Fig. 4C). Most of the peaks in this image align with the positions of Sm vacancies or Fe dopants, indicating that these Kondo holes harbor the necessary magnetic fluctuations to backscatter topological states.
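As an illustrative sketch of the Fourier-filtering step (assuming a dI/dV map on a uniform grid and a known backscattering wavevector; not the exact filter used to produce Fig. 4C):

```python
import numpy as np

def backscattering_amplitude(didv_map, q_target, dq=0.02):
    """Local amplitude of dI/dV modulations near wavevector q_target.

    q_target = (qx, qy) in cycles per pixel; Fourier components within dq of
    q_target (and of -q_target) are kept, and the magnitude of the inverse
    transform peaks where scattering at that wavevector is strongest.
    """
    ny, nx = didv_map.shape
    qy = np.fft.fftfreq(ny)[:, None]
    qx = np.fft.fftfreq(nx)[None, :]
    F = np.fft.fft2(didv_map - didv_map.mean())
    mask = np.hypot(qx - q_target[0], qy - q_target[1]) < dq
    mask |= np.hypot(qx + q_target[0], qy + q_target[1]) < dq
    return np.abs(np.fft.ifft2(F * mask))

# Placeholder map; in practice didv_map is measured at a bias inside the gap and
# q_target is set to the y component of the backscattering wavevector 2*k_ss.
amp = backscattering_amplitude(np.random.rand(128, 128), q_target=(0.0, 0.15))
```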
### Discussion and Outlook
**10** The charge puddles around Kondo holes present an alternative yet compelling origin for many of the strange observations of metallic behavior in SmB\({}_{6}\). First, the detection of de Haas-van Alphen (magnetic) oscillations without accompanying Shubnikov-de Haas (resistivity) oscillations [8, 9, 10] is expected for electrically isolated metallic puddles, provided they do not meet the percolation threshold (which could be unreachable [17]). Second, the large Fermi surface size and light effective mass extracted by bulk probes [9, 10, 11] is in excellent agreement with our observation of itinerant \(5d\) electrons. Third, the magnetic length of the high-frequency (large-\(k_{F}\)) quantum oscillations that onset above 35 T [9, 10] is comparable to the \(R({\bf r})\) decay length of \(\gamma=2.6\) nm, such that a Landau orbit could fit inside a metallic puddle. Additionally, many of the metallic properties were detected in floating-zone-grown samples [5, 9, 10, 11], which are known to have higher concentrations of Sm vacancies than samples grown with an aluminum flux [60]. Floating-zone samples also contain a higher concentration of dislocations [31], which may similarly disrupt the Kondo screening cloud and thus further enhance the quantum oscillation amplitude beyond that expected from Sm vacancies alone. In contrast, the quantum oscillations completely disappear in flux-grown samples once embedded aluminum is removed [8].
**11** Atomic-scale charge inhomogeneity has a profound impact on many interacting quantum materials, but it has typically not been possible to measure. In Kondo-lattice systems, \(R({\bf r},V)\) provides a peek at the ground-state charge landscape, which is strongly perturbed by Kondo holes. These Kondo holes nucleate nanometer-scale metallic puddles that could explain many of the strange phenomena detected by bulk probes. More broadly, the sensitivity to local charge within a Kondo lattice may enable atomic-scale charge imaging using STM tips decorated with a Kondo impurity [61] or fabricated from heavy-fermion materials [62].
**12 Acknowledgements** We thank An-Ping Li, Brian Skinner, Christian Wagner, Felix Lupke, Stefan Ulrich, Yun Suk Eo, and Zachary Fisk, for helpful conversations. We thank Anjan Soumyanarayanan, Michael Yee, and Yang He for their help measuring Gd-doped SmB\({}_{6}\). **Funding:** This project was supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through grants GBMF4536, GBMF9071, and GBMF9457. The experiments at Harvard were supported by the US National Science Foundation grant DMR-1410480. The data interpretation received support from AFOSR grant FA9550-21-1-0429. The work of E.M. and D.K.M. was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, under Award DE-FG02-05ER46225. C.E.M. is supported by the Swiss National Science Foundation under fellowship P400P2_183890. Work at the University of Maryland was supported by AFOSR FA9550-22-1-0023. Research at McMaster University was supported by the Natural Sciences and Engineering Research Council. J.C.S.D. acknowledges support from the Science Foundation of Ireland under Award SFI 17/RP/5445, and from the European Research Council
under Award DLV-788932. This project received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement 893097. **Author contributions:** H.P., C.E.M., Y.L., P.C., and M.H.H. carried out the STM experiments. S.S., X.W., J.P., and G.L. synthesized the samples. E.M. and D.K.M. developed the theoretical model. D.G.G., C.F.H., and J.C.S.D. contributed to the understanding of the results. H.P. and J.E.H. analyzed the data and wrote the manuscript with contributions from E.M., C.F.H., J.C.S.D., and D.K.M. **Competing interests:** The authors have no competing interests. **Data and materials availability:** All data and analysis presented in this paper are deposited in Zenodo [63].
|
2305.01911 | PODTherm-GP: A Physics-based Data-Driven Approach for Effective
Architecture-Level Thermal Simulation of Multi-Core CPUs | A thermal simulation methodology derived from the proper orthogonal
decomposition (POD) and the Galerkin projection (GP), hereafter referred to as
PODTherm-GP, is evaluated in terms of its efficiency and accuracy in a
multi-core CPU. The GP projects the heat transfer equation onto a mathematical
space whose basis functions are generated from thermal data enabled by the POD
learning algorithm. The thermal solution data are collected from FEniCS using
the finite element method (FEM) accounting for appropriate parametric
variations. The GP incorporates physical principles of heat transfer in the
methodology to reach high accuracy and efficiency. The dynamic power map for
the CPU in FEM thermal simulation is generated from gem5 and McPACT, together
with the SPLASH-2 benchmarks as the simulation workload. It is shown that
PODTherm-GP offers an accurate thermal prediction of the CPU with a resolution
as fine as the FEM. It is also demonstrated that PODTherm-GP is capable of
predicting the dynamic thermal profile of the chip with a good accuracy beyond
the training conditions. Additionally, the approach offers a reduction in
degrees of freedom by more than 5 orders of magnitude and a speedup of 4
orders, compared to the FEM. | Lin Jiang, Anthony Dowling, Ming-C. Cheng, Yu Liu | 2023-05-03T05:59:23Z | http://arxiv.org/abs/2305.01911v1 | PODTherm-GP: A Physics-based Data-Driven Approach for Effective Architecture-Level Thermal Simulation of Multi-Core CPUs
###### Abstract
A thermal simulation methodology derived from the proper orthogonal decomposition (POD) and the Galerkin projection (GP), hereafter referred to as PODTherm-GP, is evaluated in terms of its efficiency and accuracy in a multi-core CPU. The GP projects the heat transfer equation onto a mathematical space whose basis functions are generated from thermal data enabled by the POD learning algorithm. The thermal solution data are collected from FEniCS using the finite element method (FEM) accounting for appropriate parametric variations. The GP incorporates physical principles of heat transfer in the methodology to reach high accuracy and efficiency. The dynamic power map for the CPU in FEM thermal simulation is generated from gem5 and McPAT, together with the SPLASH-2 benchmark as the simulation workload. It is shown that PODTherm-GP offers an accurate thermal prediction of the CPU with a resolution as fine as the FEM. It is also demonstrated that PODTherm-GP is capable of predicting the dynamic thermal profile of the chip with good accuracy beyond the training conditions. Additionally, the approach offers a reduction in degrees of freedom by more than 5 orders of magnitude and a speedup of 4 orders, compared to the FEM.
Thermal simulation, Proper orthogonal decomposition, Data driven learning method, Multi-core CPUs.
## I Introduction
Thermal issues have been the bottleneck of performance improvements for high-performance microprocessors due to the aggressive scaling of semiconductor technology nodes and the introduction of multi-core architectures over the last several decades, which have significantly increased the power density in these processors [1]. High temperature gradients and hot spots not only impair the performance of processors but also degrade their reliability [2, 3]. Thermal management and thermal-aware design exploration of high-performance processors [4, 5] have been effective approaches to mitigate these thermal issues and thereby improve the performance and reliability of the processors. For instance, as found in [6], the average performance of a heterogeneous multi-core processor is further improved by 8.9% through an adaptive thermal management framework, compared to ARM's DVFS (Dynamic Voltage Frequency Scaling)-based intelligent power allocation. Such thermal management schemes, however, require an effective thermal simulation tool. For real-time applications, such as run-time thermal-aware task scheduling, a highly efficient thermal simulation with high accuracy is desirable.
For thermal simulations of semiconductor chips, many approaches have been developed for different applications. Some of these approaches focus on the accuracy of the thermal simulation to capture hot spots, for instance the direct numerical simulations (DNSs) based on the finite difference, finite volume or finite element method (FDM, FVM or FEM, respectively). These DNS methods are however computationally intensive and in general prohibitive for thermal simulations at the architecture level. Several other approaches have therefore been proposed for situations where efficiency plays an important role, including the thermal circuit model, the Green's function method, machine learning based methods, etc. All these approaches achieve higher efficiency than DNSs by sacrificing accuracy and/or resolution with approximations that impose severe limitations. For instance, efficient thermal circuits are realized at the cost of very low resolution and inaccurate solutions. When using the thermal circuit model or a machine learning based method, maintaining a resolution as fine as that of DNSs requires a computational effort similar to that of the DNSs. Additionally, it is difficult to apply the Green's function method to 3D dynamic thermal simulations. In recent years, the data-driven approach based on proper orthogonal decomposition (POD), together with the guidance of physical principles, has become increasingly attractive for thermal simulation of semiconductor chips due to its ability to achieve high accuracy, resolution and efficiency simultaneously [7, 8, 9]. These thermal simulation approaches are further discussed in Sec. II.
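As a rough illustration of the POD step mentioned above (this is not the implementation used in this work; the file name and the retained-energy threshold are placeholders), the POD modes can be extracted from a matrix of temperature snapshots via a singular value decomposition:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one FEM temperature field
# (e.g. exported from FEniCS), shape (n_nodes, n_snapshots).
T = np.load("thermal_snapshots.npy")

# POD modes are the left singular vectors of the (mean-subtracted) snapshots.
T_mean = T.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(T - T_mean, full_matrices=False)

# Keep enough modes to capture, say, 99.99% of the snapshot "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.9999)) + 1
modes = U[:, :n_modes]   # basis onto which the heat equation is Galerkin-projected

print(f"retained {n_modes} POD modes out of {T.shape[1]} snapshots")
```

The Galerkin projection of the heat-transfer equation onto these modes (the step that injects the physics) is not shown here.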
In this work, an architecture-level thermal simulator has been developed based on the POD-Galerkin (PODTherm-GP) methodology for 3D dynamic thermal simulation of a multi-core CPU. The early concept of this methodology was briefly illustrated in [7]. This POD-Galerkin modeling technique offers an accurate and efficient prediction of the thermal profile in the multi-core processor without a priori assumptions. The POD projects the thermal problem from a physical domain of the multi-core CPU onto a functional space, whose basis functions (or POD modes) are trained by thermal data generated by DNSs of the multi-core CPU. In our study, the thermal data is collected from FEniCS, an open-source computing platform for solving partial differential equations (PDEs) using the FEM [10]. To provide realistic heat |
2304.07091 | The role of object-centric representations, guided attention, and
external memory on generalizing visual relations | Visual reasoning is a long-term goal of vision research. In the last decade,
several works have attempted to apply deep neural networks (DNNs) to the task
of learning visual relations from images, with modest results in terms of the
generalization of the relations learned. In recent years, several innovations
in DNNs have been developed in order to enable learning abstract relations from
images. In this work, we systematically evaluate a series of DNNs that
integrate mechanisms such as slot attention, recurrently guided attention, and
external memory, in the simplest possible visual reasoning task: deciding
whether two objects are the same or different. We found that, although some
models performed better than others in generalizing the same-different relation
to specific types of images, no model was able to generalize this relation
across the board. We conclude that abstract visual reasoning remains largely an
unresolved challenge for DNNs. | Guillermo Puebla, Jeffrey S. Bowers | 2023-04-14T12:22:52Z | http://arxiv.org/abs/2304.07091v1 | The role of object-centric representations, guided attention, and external memory on generalizing visual relations
###### Abstract
Visual reasoning is a long-term goal of vision research. In the last decade, several works have attempted to apply deep neural networks (DNNs) to the task of learning visual relations from images, with modest results in terms of the generalization of the relations learned. In recent years, several innovations in DNNs have been developed in order to enable learning abstract relations from images. In this work, we systematically evaluate a series of DNNs that integrate mechanisms such as slot attention, recurrently guided attention, and external memory, in the simplest possible visual reasoning task: deciding whether two objects are the same or different. We found that, although some models performed better than others in generalizing the same-different relation to specific types of images, no model was able to generalize this relation across the board. We conclude that abstract visual reasoning remains largely an unresolved challenge for DNNs.
visual reasoning; relational generalization; deep neural networks; guided attention; external memory
## Introduction
Detecting relations is one of the fundamental operations of the visual system. This allows us to form a coherent representation of the environment as sets of relations between objects [14]. It is also the basis of robust object recognition, because representing an object as a set of relations between parts frees us from recognizing it solely on the basis of its superficial features [1]. Furthermore, representing relations between entities forms the basis of the kind of reasoning abilities that set us apart from other species [1]. Given this predominant role across different forms of visual processing, several researchers have attempted to apply deep neural networks to visual reasoning, in particular to the same-different task (i.e., classifying an image with two objects as an example of the categories "same" or "different"). This previous research found that, in contrast with earlier machine learning models, convolutional neural networks (CNNs) can learn to classify images with abstract shapes as same or different (e.g., Messina, Amato, Carrara, Gennaro, & Falchi, 2021; Funke et al., 2021). However, Puebla and Bowers (2022) showed that the representations learned by these models are highly specific: when trained on task #1 of the synthetic visual reasoning test (SVRT, Fleuret et al., 2011, see Original condition in Figure 1), CNNs tended to classify correctly images that were superficially similar to the ones they were trained on (e.g., Irregular or Regular conditions in Figure 1) and misclassify images that illustrated the same relation but were more superficially dissimilar (e.g., Lines or Arrows conditions). In the meantime, a number of new DNNs have introduced architectural innovations targeted at achieving visual relational reasoning. In this work, we test relational generalization of the same-different task on these models.
## Models
**ResNet50 (He, Zhang, Ren, & Sun, 2016)** We included this model as a baseline deep CNN.
Figure 1: Example images of all datasets tested.
**Slot Attention (Locatello et al., 2020)** This model segregates objects in an image through a key-value attention mechanism that assigns different parts of the image to competing slots.
**Recurrent vision transformer (RVIT, Messina, Amato, Carrara, Gennaro, & Falchi, 2022)** This model applies a standard vision transformer encoder recurrently, that is, the model takes its own output as input, for 4 processing steps.
**Emergent symbol binding network (ESBN, Webb, Sinha, & Cohen, 2021)** This model consists of a recurrent neural network augmented with a key-value external memory. The model aims to bind its memory values (direct representations of the input) and keys (inferred representations of the input's role in the sequence).
**Guided Attention Model for (visual) Reasoning (GAMR, Vaishnav & Serre, 2023)** This model is composed of three modules. An encoder builds a representation of the input. A recurrent controller guides an attention mechanism to select relevant object representations and write them into an external memory. A graph neural network module computes relations between the objects stored in the external memory.
**Object-Centric Recurrent Attention (OCRA, Adeli, Ahn, & Zelinsky, 2022)** This model consists of a recurrent encoder that controls an attention window that trades off the area it covers against its resolution. At the same time, a recurrent decoder controls a write window that modifies a reconstruction output at each time step. The encoder feeds a two-layer capsule neural network that predicts a class label.
## Methods
We trained 10 runs of each model on task #1 of SVRT until reaching a validation accuracy of approximately 99%. This dataset consists of 28,000 128\(\times\)128 RGB images. We tested all the models on all 14 datasets illustrated in Figure 1 (5,600 images per dataset), with the exception that OCRA was not tested on the Random colors dataset since this model takes only greyscale images as input.
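A minimal sketch of this per-condition evaluation is given below (not the authors' code; it assumes each trained model exposes a standard two-way classification head and that every test condition is available as a dataset yielding (image, label) pairs):

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def test_accuracy(model, dataset, device="cuda", batch_size=128):
    """Mean same/different accuracy of a trained model on one test condition."""
    model.eval()
    correct, total = 0, 0
    for images, labels in DataLoader(dataset, batch_size=batch_size):
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# e.g.: accuracies = {name: test_accuracy(model, ds) for name, ds in test_conditions.items()}
```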
## Results and discussion
As can be seen in Figure 2, all models achieved high accuracy in the test split of task #1 of SVRT except for the ESBN model, which performed at chance in all datasets. Further analysis showed that this model can learn the same-different task only when the objects (presented individually in a sequence of two images) are centered in the image, which severely questions Webb et al. (2021)'s conclusions regarding the relational reasoning capabilities of the model.
Furthermore, Slot Attention, GAMR and OCRA tended to show better generalization on the datasets that were harder for ResNet50. However, no single model achieved high levels of accuracy across all the test datasets, which is what would be expected if a model learned an abstract representation of the relations "same" and "different".
## Acknowledgments
The first author has received funding for this project from the National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
The second author has received funding for this project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 741134).
Figure 2: Mean accuracy on 10 runs of each model per condition. Error bars are standard errors of the mean. |
2308.16249 | Random free-fermion quantum spin chain with multi-spin interactions | We study the effects of quenched disorder in a class of quantum chains with
(p+1)-multispin interactions exhibiting a free fermionic spectrum, paying
special attention to the case p=2. Depending if disorder couples to (i) all the
couplings or just to (ii) some of them, we have two distinct physical
scenarios. In case (i), we find that the transitions of the model are governed
by a universal infinite-randomness critical point surrounded by quantum
Griffiths phases similarly as happens to the random transverse-field Ising
chain. In case (ii), we find that quenched disorder becomes an irrelevant
perturbation: the clean critical behavior is stable and Griffiths phases are
absent. Beyond the perturbative regime, disorder stabilizes a line of
finite-randomness critical points (with nonuniversal critical exponents), that
ends in a multicritical point of infinite-randomness type. In that case,
quantum Griffiths phases also appear surrounding the finite-disorder transition
point. We have characterized the correlation functions and the low-temperature
thermodynamics of these chains. Our results are derived from a strong-disorder
renormalization-group technique and from finite-size scaling analysis of the
spectral gap computed exactly (up to sizes ~10^{7}) via an efficient new
numerical method recently introduced in the literature [Phys. Rev. B 104,
174206 (2021)]. | Francisco C. Alcaraz, JosΓ© A. Hoyos, Rodrigo A. Pimenta | 2023-08-30T18:13:35Z | http://arxiv.org/abs/2308.16249v3 | # A random free-fermion quantum spin chain with multi-spin interactions
###### Abstract
We study the effects of quenched disorder in a class of quantum chains with (\(p+1\))-multi-spin interactions exhibiting a free fermionic spectrum, paying special attention to the case \(p=2\). Depending if disorder couples to (i) all the couplings or just to (ii) some of them, we have two distinct physical scenarios. In case (i), we find that the transitions of the model are governed by a universal infinite-randomness critical point surrounded by quantum Griffiths phases similarly as happens to the random transverse-field Ising chain. In case (ii), we find that quenched disorder becomes an irrelevant perturbation: the clean critical behavior is stable and Griffiths phases are absent. Beyond the perturbative regime, disorder stabilizes a line of finite-randomness critical points (with non-universal critical exponents), that ends in a multi-critical point of infinite-randomness type. In that case, quantum Griffiths phases also appear surrounding the finite-disorder transition point. We have characterized the correlation functions and the low-temperature thermodynamics of these chains. Our results are derived from a strong-disorder renormalization-group technique and from finite-size scaling analysis of the spectral gap computed exactly (up to sizes \(\sim 10^{7}\)) via an efficient new numerical method recently introduced in the literature [Alcaraz _et al._, Phys. Rev. B **104**, 174206 (2021)].
## I Introduction
The importance of studying one-dimensional quantum models is invaluable. Due to the peculiarities of the phase space in \(d=1\), many models can be precisely described and, thus, serve as important testbeds for physical insights [1]. Among those models, there is an important class that goes under the general name of free systems. These are systems (of volume \(V\)) whose exponentially large Hilbert space of dimension \(\sim e^{V}\) can be expressed as a combination of \(\sim V\) quasi-energies of a free particle system. Despite the name, they exhibit non-trivial phenomena, such as phase transitions and zero-energy fractional edge modes, and are, thus, good starting points to understand many complex behaviors of more general systems [2; 3; 4].
Initially, the known free-system models were linked to a Jordan-Wigner transformation which maps interacting spin models into free fermionic particles, i.e., into models which are bilinear in the fermionic operators or fields. Later, it was realized that the existence of this transformation is not a necessary condition [5]. It is possible to construct the eigenspectrum of the spin system from the free-fermion pseudo-energies obtained from the roots of a characteristic polynomial. This polynomial is constructed thanks to the infinite number of conserved charges of the model [6; 7; 8]. Interestingly, these latter free systems, a priori, cannot be written in terms of local bilinears of fermion operators.
The first models in this class are the Z\({}_{N}\)-symmetric free parafermionic models [9; 10; 5; 11] and the three-spin interacting Fendley model [6]. Actually, it has been found that the Fendley model belongs to a large family of spin chains with multi-spin interactions and Z\({}_{N}\)-symmetry [7; 8]. For \(N>2\), the spectrum is non-Hermitian and has a free parafermionic form. Furthermore, it was shown that the free-particle pseudo-energies of some of the above models are also the pseudo-energies of a multi-spin U(1) symmetric XY model [12; 13]. Additional developments related to the Fendley model can be found in Refs. [14; 15; 16; 17].
Recently, it was shown that, in general, the roots of the characteristic polynomial yielding the free-fermionic spectrum can be efficiently obtained numerically [12]. As a consequence, the finite-size gaps of the spin system can be computed straightforwardly for quite large system sizes. Furthermore, the numerical cost for exactly computing the finite-size gap with machine precision (and, hence, the associated dynamical critical exponent) is minimum at criticality, increasing only linearly with the chain size. While this may seem innocuous for translation-invariant systems where analytical results can often be obtained, it is of great applicability to quenched disordered systems where analytical results are scarce and exact numerical results are plagued by numerical instabilities inherent to the extremely slow critical dynamics of these systems [18]. This brings us to the topic of our research. What are the effects of quenched disorder on these generalized free-fermionic systems?
It is well-known that the effects of quenched disorder (i.e., static random inhomogeneities) in strongly interacting systems can lead to interesting new phenomena. For instance, even the small amount of inhomogeneities can change the singular behavior of a critical system [19; 20], change the sharp character of a transition by smearing [21; 22; 23], or even destroy the phase transition itself [24; 25]. For reviews, see, e.g., Refs. [26; 27].
In the (free-fermionic) transverse-field Ising chain, quenched disorder completely changes the critical behavior of the clean (homogeneous) system as expected from the Harris criterion [19]. The conventional critical dynamical scaling of the clean system \(\tau\sim\xi^{z}\) (with \(\tau\) and \(\xi\) being, respectively, time and length scales, and \(z=1\) being the dynamical exponent) is replaced by an activated dynamical scaling \(\ln\tau\sim\xi^{\psi}\) (with universal, i.e., disorder-independent, tunneling exponent \(\psi=1/2\)), which is now recognized as a hallmark of the so-called infinite-randomness quantum criticality [28; 29]. Technically, this means that the disorder-induced statistical fluctuations among the relevant energy scales increases without bounds under the renormalization-group coarse-grain
procedure. Additionally, it was realized that this exotic phenomenon appears in all dimensions [30; 31; 32] and in many other contexts such as in Heisenberg-like quantum chains and ladders [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], in the Hubbard chain [48; 49], in aperiodic quantum chains [50; 51; 52; 53], in open quantum rotor systems [54; 55], in reaction-diffusion models such as the contact process [56; 57; 58; 59; 60; 61; 62], in unbiased random walkers in random media [63], and in Floquet criticality [64; 65]. For a review, see, e.g., Refs. [66; 67]. Despite the overwhelming number of situations in which infinite-randomness criticality is theoretically found, experimental evidence is scarce [68; 69; 70; 71; 72].
Despite all the progress on characterizing the infinite-randomness criticality, we currently do not know what are the necessary conditions ensuring its appearance.1 It is then desirable to further study other aspects which were not considered in the previous studies. Here, we consider interactions involving more than the conventional two-body interactions. We pay special attention to the \(Z_{2}\)-symmetric free-fermionic case in which the interactions involve three consecutive spins [6]. The clean phase diagram has three critical lines separating three gapful phases which are related to each other by triality.2 The universality classes of the associated transitions are that of the transverse-field Ising chain, but that of the multi-critical point (where these three critical lines meet) is yet to be determined. Currently, only its dynamical critical exponent \(z=\frac{3}{2}\)[6] and the specific heat exponent \(\alpha=0\)[8] are known. An inhomogeneous quantum Ising chain sharing the same energy spectrum of this multi-critical point was introduced in Ref. [16]. In this related Ising chain, the order-parameter exponent is \(\beta=\frac{1}{8}\), like in the standard Ising chain.
Footnote 1: Our best educated guess comes from the classification of the Griffiths singularities near the transition point [20; 73]. Due to statistical fluctuations inherent to quenched disordered systems, there may be arbitrarily large regions in space which are locally ordered and virtually disconnected from the bulk (the so-called rare regions). If these rare regions are at their lower critical dimension and \(\min\left\{d,d_{c}^{+}\right\}\nu<2\), where \(d\) is the dimension of the system, \(d_{c}^{+}\) is the upper critical dimension of the problem, and \(\nu\) is the correlation-length critical exponent of the clean theory, then, very likely, infinite-randomness is expected. However, this criterion can only be regarded as a sufficient one, not a necessary one, as infinite-randomness occurs in aperiodic systems which do not have rare regions or Griffiths singularities. Thus, it is desirable to further study systems exhibiting infinite-randomness criticality with the aim to better understand the fundamental ingredients yielding it.
Footnote 2: This is a generalization of the duality as happens in the \(Z_{2}\)-symmetric transverse-field Ising chain. The model is self-dual if under the duality transformation the ferromagnetic and the paramagnetic phase are interchanged.
Similarly to the random transverse-field Ising chain and to the spin-1/2 XX chain with random couplings, we show in this paper that generic quenched disorder stabilizes quantum Griffiths phases surrounding the three transition lines. In addition, the corresponding universality classes of these transitions and that of the multi-critical point are the same and are of infinite-randomness type (with disorder-independent tunneling exponent \(\psi=\frac{1}{2}\)). Our results are based on a generalization of the strong-disorder renormalization-group (SDRG) method [74; 75; 76] and on exact numerical calculations of the spectral gap using the aforementioned method based on the characteristic polynomial that allows us to obtain exact numerical results for very large lattice sizes [12]. After unveiling the structure of the SDRG flow, we generalize our results to the case of \((p+1)\)-multi-spin interactions (\(p=1,2,3,\dots\)) and arrive basically at the same conclusions. All the transitions are in the same universality class of infinite-randomness type with the same tunneling exponent \(\psi=\frac{1}{2}\).
In addition, we have also studied the case in which disorder couples only to one type of coupling constants (the others remaining homogeneous). In that case, some of the clean transition lines remain stable for weak disorder strength while a line of finite-disorder fixed points (with non-universal dynamical critical exponents \(z\)) appear for large disorder strengths, and terminates in an infinite-randomness multi-critical point.
This paper is organized as follows. In Sec. II we define the model studied and review some key results important to our purposes. In Sec. III we overview the expected effects of quenched disorder in an heuristic way. Our arguments are mostly based on the effects caused by rare-regions in the near-critical Griffiths phases. In Sec. IV we review the SDRG method for the standard 2-spin interacting case and generalize it to the 3-spin interacting case. In Sec. V we report our numerical study of the finite-size gap statistics of the model which are in agreement with the SDRG results. We present further discussions and concluding remarks in Sec. VI. Finally, the details of the renormalization-group flow are presented in Appendices A and B.
## II The model and review of key results
We consider the \((p+1)\)-multi-spin interacting quantum chains \((p=1,2,3,\dots)\), whose Hamiltonian, introduced in Refs. [7; 8], is given by
\[\mathcal{H}_{p} = -\sum_{i=1}^{L-p}\lambda_{i}\sigma_{i}^{x}\prod_{j=i+1}^{i+p} \sigma_{j}^{z}-\sum_{i=L-p+1}^{L}\lambda_{i}\sigma_{i}^{x}\prod_{j=i+1}^{L} \sigma_{j}^{z} \tag{1}\] \[= -\sum_{i=1}^{L}\lambda_{i}h_{i},\text{ where }h_{i}=\sigma_{i}^{x}\prod_{j=i+1}^{\min\left(i+p,L\right)}\sigma_{j}^{z}, \tag{2}\]
\(\sigma_{i}^{x,z}\) are Pauli matrices associated to the spin-\(\frac{1}{2}\) degree of freedom at site \(i\), and \(L\) is the total number of \(\{h_{i}\}\) energy-density operators (which is also the total number of spins in the chain). The case \(p=1\) is equivalent (up to global degeneracies) to the inhomogeneous transverse-field Ising quantum chain. The local interaction operator \(h_{i}\) involves \(\min\left\{p+1,L-i+1\right\}\) spins and satisfies the algebra
\[\left\{h_{i},h_{j}\right\}=0,\text{ if }0<|i-j|\leq p,\text{ and }\left[h_{i},h_{j}\right]=0\text{ otherwise}. \tag{3}\]
In other words, \(h_{i}\) and \(h_{j}\) commute if they are farther apart than \(p\) lattice units (\(|i-j|>p\)), and anti-commute otherwise. Evidently, from Eq. (2), \(h_{i}^{2}=\mathds{1}\). Finally, \(\lambda_{i}\) is the local multi-spin energy coupling. In this work, we introduce quenched disorder by considering \(\{\lambda_{i}\}\) as independent random variables. Their precise distribution will be defined later.
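As a quick self-contained check (not part of the original paper), the following Python snippet builds the operators \(h_{i}\) of Eq. (2) for a small chain as dense matrices and verifies the algebra (3) together with \(h_{i}^{2}=\mathds{1}\); the values of \(L\) and \(p\) are arbitrary small choices for illustration.

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

def h_op(i, L, p):
    """h_i = sigma^x_i * prod_{j=i+1}^{min(i+p, L)} sigma^z_j on L spins (sites 1..L)."""
    ops = [id2] * L
    ops[i - 1] = sx
    for j in range(i + 1, min(i + p, L) + 1):
        ops[j - 1] = sz
    return reduce(np.kron, ops)

L, p = 6, 2
h = [h_op(i, L, p) for i in range(1, L + 1)]
for a in range(L):
    assert np.allclose(h[a] @ h[a], np.eye(2 ** L))             # h_i^2 = 1
    for b in range(L):
        if 0 < abs(a - b) <= p:
            assert np.allclose(h[a] @ h[b] + h[b] @ h[a], 0.0)  # anticommute
        elif abs(a - b) > p:
            assert np.allclose(h[a] @ h[b] - h[b] @ h[a], 0.0)  # commute
print("algebra (3) verified for L =", L, "and p =", p)
```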
Interestingly, it was shown [7; 8] that the spectrum of (1) has the free fermionic form
\[E^{\{s_{k}\}}=-\sum_{k=1}^{L}s_{k}\epsilon_{k}, \tag{4}\]
where \(s_{k}=\pm 1\), and the free fermionic pseudo-energies \(\epsilon_{k}=1/\sqrt{x_{k}}\), with \(\{x_{k}\}\) being the roots of the polynomial of degree \(\bar{L}=\left\lfloor\frac{L+p}{p+1}\right\rfloor\) (with \(\left\lfloor x\right\rfloor\) being the integer part of \(x\)):
\[P_{L}(x)=\sum_{\ell=0}^{\bar{L}}C_{L}\left(\ell\right)x^{\ell}, \tag{5}\]
whose coefficients \(C_{L}\left(\ell\right)\) are obtained from the recurrence relation
\[P_{L}(x)=P_{L-1}(x)-x\lambda_{L}^{2}P_{L-p-1}(x), \tag{6}\]
with the initial condition \(P_{j}(x)=1\) for \(j\leq 0\).
It is important to notice that the free fermionic character is guaranteed when open boundary conditions are applied. For other boundary conditions, very likely this is not true [7; 8], and the solution of the model remains an open problem.
The first gap in the energy spectrum is \(\Delta=2/\sqrt{x_{\rm max}}\), where \(x_{\rm max}=\max\left\{x_{k}\right\}\) is the largest root of the polynomial (5). At and near criticality, it was recently shown [12] that \(x_{\rm max}\) can be efficiently computed for very large chains even though \(C_{L}\left(\ell\right)\) grows factorially with \(\ell\). (Indeed, there is no need to compute all \(C_{L}\left(\ell\right)\).) This is accomplished when one uses Laguerre's upper bound (LB) for the roots of a polynomial
\[x_{\rm LB}=-\frac{\alpha_{1}}{\bar{L}}+\frac{\bar{L}-1}{\bar{L}}\sqrt{\alpha_{1}^{2}-2\left(\frac{\bar{L}}{\bar{L}-1}\right)\alpha_{2}}, \tag{7}\]
where
\[\alpha_{1}=\frac{C_{L}(\bar{L}-1)}{C_{L}(\bar{L})},\;{\rm and}\;\alpha_{2}= \frac{C_{L}(\bar{L}-2)}{C_{L}(\bar{L})}, \tag{8}\]
as the initial guess for \(x_{\rm max}\).
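The following Python sketch (ours, not the authors' implementation, and restricted to modest \(L\) for which the coefficients still fit in double precision) builds the coefficients \(C_{L}(\ell)\) from the recurrence (6) and evaluates the estimate \(\Delta_{\rm LB}=2/\sqrt{x_{\rm LB}}\) of Eqs. (7)-(8):

```python
import numpy as np

def char_poly_coeffs(lams, p):
    """Coefficients C_L(l) of P_L(x) from P_L = P_{L-1} - x*lam_L^2*P_{L-p-1}, P_j = 1 for j <= 0."""
    polys = {j: [1.0] for j in range(-p, 1)}
    for n in range(1, len(lams) + 1):
        prev, back = polys[n - 1], polys[n - p - 1]
        shifted = [0.0] + [lams[n - 1] ** 2 * c for c in back]     # x * lam_n^2 * P_{n-p-1}
        size = max(len(prev), len(shifted))
        polys[n] = [(prev[l] if l < len(prev) else 0.0)
                    - (shifted[l] if l < len(shifted) else 0.0) for l in range(size)]
    return polys[len(lams)]

def gap_laguerre_bound(lams, p):
    """Estimate Delta_LB = 2/sqrt(x_LB) of the first gap, Eqs. (7)-(8)."""
    C = char_poly_coeffs(lams, p)
    Lb = len(C) - 1                       # degree bar{L} of P_L
    a1, a2 = C[Lb - 1] / C[Lb], C[Lb - 2] / C[Lb]
    x_lb = -a1 / Lb + (Lb - 1) / Lb * np.sqrt(a1 ** 2 - 2 * Lb / (Lb - 1) * a2)
    return 2.0 / np.sqrt(x_lb)

rng = np.random.default_rng(0)
print(gap_laguerre_bound(np.ones(60), p=2))                 # clean multi-critical chain
print(gap_laguerre_bound(rng.uniform(0.5, 1.5, 60), p=2))   # weakly disordered chain
```

Repeating the first call for increasing \(L\) should reproduce the clean scaling \(\Delta_{\rm LB}\sim L^{-(p+1)/2}\) of Eq. (9).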
In the (critical) homogeneous case \(\lambda_{i}=\lambda\), it was shown [12] that, for any \(p\), the quantity \(\Delta_{\rm LB}\equiv 2/\sqrt{x_{\rm LB}}=(1-\varepsilon)\Delta\) as \(L\to\infty\), with \(0<\varepsilon<1\) being a constant. Thus, \(\Delta_{\rm LB}\) has the same finite-size scaling properties as the finite-size gap \(\Delta\) and can therefore be used to obtain the dynamical critical exponent \(z\), i.e.,
\[\Delta_{\rm LB}\sim L^{-z_{\rm LB}}, \tag{9}\]
with \(z_{\rm LB}=z=\frac{p+1}{2}\)[7; 8]. Numerically, this is convenient since \(\Delta_{\rm LB}\) can be efficiently computed.
For \(p=1\), it was shown [12] that, in the critical quenched disordered case (\(\{\lambda_{i}\}\) being independent and identically distributed random variables), \(\Delta_{\rm LB}=(1-\epsilon_{L})\Delta\) with \(\epsilon_{L}\) vanishing slowly as \(L\to\infty\). This provides a convenient tool to obtain the dynamical scaling relation. In this case,
\[\ln\Delta_{\rm LB}\sim-L^{\psi}, \tag{10}\]
with universal tunneling exponent \(\psi=1/2\). By universal we mean that \(\psi\) does not depend on the particular distribution of the disorder variables.3 This result is expected since the model Hamiltonian (1) for \(p=1\), apart from global degeneracies, has the same spectrum as the transverse-field Ising chain. It is worth noting that the activated dynamical scaling (10) has been confirmed by many analytical and numerical studies [66]. In addition, \(\Delta_{\rm LB}\) can also be used to study the finite-size gap in the near-critical Griffiths phase. In this phase, the system is gapless even though it is short-range correlated (finite spin-spin correlation length) [29]. The finite-size gap \(\Delta\) and the associated LB estimate, \(\Delta_{\rm LB}\), obey the power-law scaling (9) with \(z\) being the off-critical (Griffiths) dynamical exponent which depends on the distance from criticality. As criticality is approached, the off-critical dynamical exponent increases and becomes infinite at the critical point, in accordance with the activated critical dynamical scaling (10).
Footnote 3: Provided that the distribution is not pathological or extremely broad [33; 77; 78; 79], which is the case for most physically relevant distributions.
The model (1) for \(p=2\) and non-disordered couplings was studied in Ref. [6] for couplings of period 3 [which is the natural period given by the algebra (3)], i.e.,
\[\lambda_{3i-2}=\lambda_{A},\;\lambda_{3i-1}=\lambda_{B},\;{\rm and}\;\lambda_{ 3i}=\lambda_{C}. \tag{11}\]
Exploring the triality of the model, the phase diagram was determined (see Fig. 1).
We now want to discuss what we mean by triality in this model. In the bulk limit, the lattice translations \(i\to i+1\) (equivalent to \(A\to B\to C\to A\)) and \(i\to i+2\) (equivalent to \(A\to C\to B\to A\)), and the lattice reflection \((i\to L+1-i)\) do not change the algebra (3) and, therefore, the spectrum. Thus,
\[\mathcal{H}(\lambda_{A},\lambda_{B},\lambda_{C}) = \mathcal{H}(\lambda_{B},\lambda_{C},\lambda_{A})=\mathcal{H}( \lambda_{C},\lambda_{A},\lambda_{B}) \tag{12}\] \[= \mathcal{H}(\lambda_{C},\lambda_{B},\lambda_{A})\,.\]
Figure 1: The phase diagram of the clean Hamiltonian (1) for \(p=2\). Here, \(\lambda_{3i-2}=\lambda_{A}\), \(\lambda_{3i-1}=\lambda_{B}\), and \(\lambda_{3i}=\lambda_{C}\). The red lines are transitions in the Ising universality class (\(z=1\)). The multi-critical point \(\lambda_{A}=\lambda_{B}=\lambda_{C}\) is in a different universality class where \(z=\frac{3}{2}\) and \(\alpha=0\). The three phases are gapful and are similar to each other (see text).
Now, consider the case \(\alpha=\lambda_{A}/\lambda_{C}<1\) fixed. If the transition happens at \(\beta=\lambda_{B}/\lambda_{C}\neq 1\), the relation (12) implies the existence of another transition at \(\beta^{-1}\) for the same \(\alpha\). Since there is only a global \(Z_{2}\) symmetry to be broken, then there is only a single phase transition. Therefore, the transition must happen at \(\beta_{c}=\beta_{c}^{-1}=1\). Successive applications of the relation (12) imply that the self-triality curves (red lines in Fig. 1) are the transition lines.
The three different phases are characterized by the following expectation values: \(h^{A}=|\Sigma_{i}\langle h_{3i-2}\rangle|\), \(h^{B}=|\Sigma_{i}\langle h_{3i-1}\rangle|\), and \(h^{C}=|\Sigma_{i}\langle h_{3i}\rangle|\). For \(\lambda_{A,B}<\lambda_{C}\), the system is in a phase where \(h^{C}>h^{A,B}\). By symmetry or, more precisely, by triality, there are two other phases in which \(h^{A}>h^{B,C}\) and \(h^{B}>h^{A,C}\), which happen when \(\lambda_{C,B}<\lambda_{A}\) and \(\lambda_{A,C}<\lambda_{B}\), respectively. There are three phase transition boundaries: \(\lambda_{A}<\lambda_{B}=\lambda_{C}\), \(\lambda_{C}<\lambda_{A}=\lambda_{B}\) and \(\lambda_{B}<\lambda_{C}=\lambda_{A}\). All of them are in the 2D Ising universality class, and, thus, the dynamical exponent is \(z=1\). The multi-critical point \(\lambda_{A}=\lambda_{B}=\lambda_{C}\) is, on the other hand, in a different universality class where \(z=\frac{3}{2}\) [6] and the specific heat exponent \(\alpha=0\) [8].
The purpose of the present work is to study the effects of quenched disorder on the phase transitions of the model Hamiltonian (1) for \(p\geq 2\).
## III Overview of the effects of quenched disorder
In this work, we study the effects of quenched disorder on the Hamiltonian (1), paying special attention to the first non-trivial case \(p=2\). We inquire how disorder on the coupling constants changes the clean phase diagram Fig. 1 as well as the universality classes of the transitions.
We build our arguments taking as the starting point the physical behavior of the clean system [6] reviewed in Sec. II. For this sake, we assume that the sets of couplings \(\{\lambda_{A,i}\}\equiv\{\lambda_{3i-2}\}\), \(\{\lambda_{B,i}\}\equiv\{\lambda_{3i-1}\}\), and \(\{\lambda_{C,i}\}\equiv\{\lambda_{3i}\}\) are random variables respectively distributed according to the probability distributions \(\mathcal{P}_{A}(\lambda)\), \(\mathcal{P}_{B}(\lambda)\) and \(\mathcal{P}_{C}(\lambda)\).
### The simpler case of a vanishing coupling \(\lambda_{A,i}\)
When one of the couplings is vanishing, say \(\lambda_{A,i}=0\), the effective algebra (3) is that of the model with \(p=1\) [see, also, Eq. (6)], which corresponds to the standard algebra of the transverse-field Ising chain. In that case, the phase diagram is that of the random transverse-field Ising chain which is well known [29; 80]. The transition takes place when the typical values (geometric mean) of the remaining couplings equal each other [81], i.e., the system is critical when \(\overline{\delta}=0\), where \(\overline{\delta}=\overline{\delta}_{i}\), with \(\overline{\cdots}\) denoting the disorder average and
\[\delta_{i}\equiv\ln\lambda_{B,i}-\ln\lambda_{C,i}. \tag{13}\]
For \(\overline{\delta}>0\) (\(\overline{\delta}<0\)), the system is in the \(B\)- (\(C\)-)phase.
#### iii.1.1 Uncorrelated disorder
According to the Harris criterion [19], uncorrelated quenched disorder is a relevant perturbation at \(\overline{\delta}=0\) (since \(d\nu=1<2\), with \(d=1\) being the number of spatial dimensions in which disorder is uncorrelated and \(\nu=1\) being the correlation length exponent of the clean theory) and, thus, the universality class of the transition must change. As shown by Fisher [28; 29], the universality class is of infinite-randomness type with activated dynamical scaling (10).
In addition, surrounding this exotic quantum critical point, there are Griffiths phases whose spectral gap vanishes and the spin-spin correlations are short-ranged. The off-critical dynamical scaling is a power-law (9) with effective dynamical exponent \(z\propto\overline{\delta}^{-1}\) diverging as the system approaches criticality.
#### iii.1.2 Appearance of locally correlated disorder
On the other hand, for \(\lambda_{B,i}=e^{\epsilon}\lambda_{C,i}\) with \(\epsilon\) being a constant, i.e., for perfect correlation between the local random variables on sublattices \(B\) and \(C\), the Harris criterion has to be applied with caution. This is because the local distance from criticality \(\delta_{i}=\epsilon\) is uniform throughout the chain. It turns out that disorder is an irrelevant perturbation [80]. The clean critical behavior is stable up to some critical disorder strength, beyond which it changes to a finite-randomness critical behavior, where the dynamical critical scaling is a conventional power-law (9) but with a non-universal critical dynamical exponent \(z\), i.e., it depends on the disorder strength [82; 18]. In addition, no Griffiths effects exist in this case.
In this work, we do not consider explicitly the case where \(\lambda_{B,i}\) and \(\lambda_{C,i}\) are locally correlated. However, we do consider the case in which both couplings are uniform (\(\lambda_{B,i}=\lambda_{B}\) and \(\lambda_{C,i}=\lambda_{C}\)) and the third one (\(\lambda_{A,i}\)) is random. If the typical value of \(\lambda_{A,i}\) is sufficiently small, we show that quenched disorder is perturbatively irrelevant in the renormalization-group (RG) sense. Thus, disorder can be simply ignored, and the transition at \(\lambda_{B}=\lambda_{C}\) is in the universality class of the clean system. As we increase the values of \(\lambda_{A,i}\)'s beyond the perturbative regime, disorder induces randomness in the renormalized couplings \(\tilde{\lambda}_{B,C}\). Interestingly, the induced disorder has a perfect correlation, namely, \(\tilde{\lambda}_{B,i}=\tilde{\lambda}_{C,i}\). Thus, the long-distance physics of locally correlated random couplings naturally appears in this model. Consequently, a finite-randomness critical point governs the transition for sufficiently large \(\lambda_{A,i}\)'s.
### The boundary phases: the case of small \(\lambda_{A,i}\) couplings
What are the effects of a small coupling \(\lambda_{A,i}\)? In the clean case, due to triality, a small \(\lambda_{A}\) cannot shift the location of the critical point \(\lambda_{B}=\lambda_{C}\) (see Fig. 1) (see Ref. [6] for another argument). In the disordered case, we numerically show (see Sec. V) that this remains true, i.e., the critical point remains at \(\overline{\delta}=\overline{\ln\lambda_{B,i}}-\overline{\ln\lambda_{C,i}}=0\) provided that \(\overline{\ln\lambda_{A,i}}<\overline{\ln\lambda_{B,i}}\). Thus,
the phase transition lines of the phase diagram, in the random system, are equal to those of the clean system with \(\lambda_{A}\), \(\lambda_{B}\), and \(\lambda_{C}\) replaced by their typical values \(\lambda_{A,\mathrm{typ}}\), \(\lambda_{B,\mathrm{typ}}\) and \(\lambda_{C,\mathrm{typ}}\) as sketched in Fig. 2.
How about the universality classes of the transitions? Here, we consider two cases: the two competing (strongest) couplings (a) do not generate a random mass, or (b) do generate a random mass \(\delta_{i}\), see Eq. (13).
In case (a), the two strongest couplings are homogeneous (\(\lambda_{B,i}=\lambda_{B}\) and \(\lambda_{C,i}=\lambda_{C}\)) and \(\lambda_{A,i}\) is typically much smaller than \(\lambda_{B,C}\); the transition is then in the universality class of the clean system (Ising), as shown by the solid red line in Fig. 2. When approaching the multi-critical point, however, the weakly disordered couplings \(\{\lambda_{A,i}\}\) become non-perturbative and a line of finite-randomness fixed points emerges (as previously mentioned). The resulting universality class of the transition [blue dotted line in Fig. 2] has a critical dynamical exponent \(z\) larger than unity which increases without bound as the infinite-randomness multi-critical point is reached.
In case (b), the two competing couplings generate a random mass (\(\overline{\delta^{2}}-\overline{\delta}^{2}\neq 0\)). This happens whenever either one or both couplings are independent random variables. The clean (Ising) universality class is unstable since the Harris criterion is violated. The resulting universality class is the one of the random transverse-field Ising chain with activated dynamical scaling (10), and the associated phase boundaries are the dashed lines in the phase diagram of Fig. 2.
### The multi-critical point: the case of strong \(\lambda_{A,i}\) couplings
How does the clean multi-critical point change in the presence of disorder? We cannot apply the Harris criterion since the clean correlation length exponent \(\nu\) is not known. To answer this question, we develop an appropriate strong-disorder renormalization-group technique (see Sec. IV). Our results strongly indicate that quenched disorder is a relevant perturbation. In addition, we show that the resulting universality class is of infinite-randomness type with activated dynamics (10) and universal tunneling exponent \(\psi=\frac{1}{2}\). We have also confirmed these results by numerically studying the finite-size gap of the system (see Sec. V).
### The presence of quantum Griffiths phases
We now inquire about the off-critical properties. Are they affected by quenched disorder? The nature of the phases does not change since disorder on the coupling constants breaks neither a symmetry of the Hamiltonian nor a symmetry of the order-parameter field, i.e., disorder does not couple directly to the order-parameter field in the associated underlying field theory. Thus, the phase diagram has the same phases as the clean one. However, near the transition lines, the random mass induces Griffiths phases. In those regions of the phase diagram (shaded areas in Fig. 2), the spectral gap vanishes and the correlations remain short-ranged. The finite-size gap scaling is of power-law type (9) with a non-universal (i.e., disorder-dependent) effective dynamical exponent \(z\).
Here, the Griffiths phases can be understood through the lens of the so-called rare regions (RRs): large and rare spatial regions in a phase locally different from the bulk. For definiteness, consider the case in which \(\lambda_{C,i}=\lambda_{C}\) (i.e., uniform) and the \(\lambda_{A,i}\)'s are distributed between \(\lambda_{A,\mathrm{min}}\) and \(\lambda_{A,\mathrm{max}}\), with \(0<\lambda_{A,\mathrm{min}}<\lambda_{A,\mathrm{max}}\). Evidently the typical value \(\lambda_{A,\mathrm{typ}}=\exp(\overline{\ln\lambda_{A}})\) is between \(\lambda_{A,\mathrm{min}}\) and \(\lambda_{A,\mathrm{max}}\). To start, let us analyze the phase transition between the \(A\)- (\(h^{A}>h^{B,C}\)) and \(C\)-phases (\(h^{C}>h^{A,B}\)), where we can disregard the weak coupling \(\lambda_{B,i}\) (say, for simplicity, that \(\max\{\lambda_{B,i}\}<\min\{\lambda_{A,\mathrm{min}},\lambda_{C}\}\)). As previously discussed, the transition takes place when \(\lambda_{C}=\lambda_{A,\mathrm{typ}}\). When \(\lambda_{C}\gg\lambda_{A,\mathrm{max}}\), the system is deep in the homogeneous \(C\)-phase. Its properties are just those of the clean system with the random couplings \(\lambda_{A(B),i}\) replaced by their typical value \(\lambda_{A(B),\mathrm{typ}}\). Importantly, the spectral gap is finite.
When \(\lambda_{A,\mathrm{typ}}<\lambda_{C}<\lambda_{A,\mathrm{max}}\), on the other hand, there are RRs where the local couplings \(\lambda_{A,i}\) are typically greater than
Figure 2: The phase diagram of the Hamiltonian (1) for \(p=2\). In panel (a), \(\{\lambda_{3i-2}\}=\{\lambda_{A,i}\}\) is a set of independent random variables while the remaining couplings \(\lambda_{3i-1}=\lambda_{B}\) and \(\lambda_{3i}=\lambda_{C}\) are homogeneous. In panel (b), at least two of the coupling constants are independent random variables (see text). The solid red line is a transition line in the Ising universality class of the clean system [\(z=1\), see Eq. (9)]. The red dashed lines are transitions in the infinite-randomness universality class [\(\psi=1/2\), see Eq. (10)]. The multi-critical point (in both cases) is also in the same infinite-randomness universality class. The blue dotted line in panel (a) is a transition line where the universality class is of finite-randomness type (\(1<z<\infty\)). The phases have the same nature as in the homogeneous case (see Fig. 1) and the shaded regions delimit the associated Griffiths phases where the spectral gap vanishes.
\(\lambda_{C}\). Being locally in the \(A\) phase, they endow the system with a high \(A\)-phase susceptibility. The spins at the domain walls between the \(A\)- and \(C\)-phases can be arbitrarily weakly coupled and are responsible for the low-lying excitations closing the spectral gap. As neither the bulk nor the RRs are critical, the corresponding correlation length is finite. By duality, an analogous Griffiths phase appears when \(\lambda_{A,\min}<\lambda_{C}<\lambda_{A,\mathrm{typ}}\).
Interestingly, there is a simple quantitative argument for the closing of the spectral gap in the Griffiths phase. Consider a RR of size \(L_{\mathrm{RR}}\). The effective interaction between the domain wall spins is thus of order \(J_{\mathrm{DW}}\sim e^{-L_{\mathrm{RR}}/\xi_{\mathrm{RR}}}\), where \(\xi_{\mathrm{RR}}\) is the corresponding correlation length in that particular RR. For simplicity, we will consider \(\xi_{\mathrm{RR}}=\xi\) to be RR-independent (a more precise treatment can be found in Ref. [20]). The reason for \(J_{\mathrm{DW}}\) being exponentially small is because the RR itself does not harbor Goldstone modes since the symmetry of the Hamiltonian interactions is discrete. The system low-energy density of states \(\rho_{\mathrm{DOS}}\) is dominated by the excitations of the weakly coupled domain walls. Ignoring the even weaker coupling to other domain wall spins belonging to other RRs, then \(\rho_{\mathrm{DOS}}\left(\omega\right)\sim\int dL_{\mathrm{RR}}\,w_{\mathrm{RR}}\left(L_{\mathrm{RR}}\right)\delta\left(\omega-J_{\mathrm{DW}}\right)\). Here, we simply sum over all possible RRs, weighting their contribution by their existence probability \(w_{\mathrm{RR}}\sim e^{-L_{\mathrm{RR}}/\ell}\), with \(\ell\propto-1/\ln p\), and \(p\) being the probability of \(\lambda_{A,j}\) being greater than \(\lambda_{C}\). Notice that the probability of finding a RR decreases exponentially with its volume, and \(\ell\) is a constant that depends on the details of the coupling-constant distributions. Consequently, one finds that \(\rho_{\mathrm{DOS}}\sim\omega^{-1+1/z}\), with dynamical Griffiths exponent \(z=\ell/\xi\). Notice the absence of a gap or a pseudo-gap in \(\rho_{\mathrm{DOS}}\). Actually, there is a divergence in the low-energy density of states when \(z>1\) and \(\omega\to 0\). We will see that \(z\) diverges \(\sim\overline{\delta}^{-1}\) when approaching the transition.
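The scaling \(\rho_{\mathrm{DOS}}\sim\omega^{-1+1/z}\) with \(z=\ell/\xi\) is easy to check with a toy Monte Carlo sampling of the rare-region argument above; the snippet below is only an illustration of that argument (the values of \(\ell\) and \(\xi\) are arbitrary), not a simulation of the actual spin chain.

```python
import numpy as np

ell, xi = 6.0, 2.0    # rare-region size scale and local correlation length (arbitrary)
rng = np.random.default_rng(0)

L_rr = rng.exponential(scale=ell, size=200_000)   # rare-region sizes, weight w_RR ~ exp(-L_RR/ell)
omega = np.exp(-L_rr / xi)                        # domain-wall gaps J_DW ~ exp(-L_RR/xi)

# rho_DOS(omega) ~ omega^(-1+1/z) is equivalent to -ln(omega) being exponentially
# distributed with mean z = ell/xi
print("predicted z =", ell / xi, " sampled z =", np.mean(-np.log(omega)))
```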
By triality, the resulting Griffiths phases are those sketched in Fig. 2(b) if the \(B\)-couplings are also randomly distributed between \(\lambda_{B,\min}\) and \(\lambda_{B,\max}\) (\(0<\lambda_{B,\min}<\lambda_{B,\max}\)).
Finally, near the multi-critical point there are Griffiths phases with RRs locally belonging to, say, either the \(A\)-phase or the \(B\)-phase, while the bulk is in the \(C\)-phase. A similar feature also appeared in the Griffiths phase of the quantum Ashkin-Teller chain [83; 84]. In those cases, the effective dynamical exponent \(z=\max\{z_{A},z_{B}\}\), where \(z_{A(B)}\) is the dynamical exponent provided by the Griffiths singularities of the \(A\)- (\(B\)-)RRs.
### The absence of Griffiths phases
If, on the other hand, the \(B\)-couplings are also homogeneous (\(\lambda_{B,i}=\lambda_{B}\)), the resulting transition between the \(B\)- and \(C\)-phases is the one of the clean transverse-field Ising chain for sufficiently weak \(\lambda_{A,\max}\) (as previously discussed). In addition, there is no associated Griffiths phase since there are no RRs (\(\lambda_{A,\max}<\lambda_{B}=\lambda_{C}\)) as sketched in Fig. 2(a). However, when approaching the multi-critical point, RRs in the \(A\)-phase appear and enhance the low-energy density of states. As a result, the gap closes around the transition. At criticality, those \(A\)-RRs can even provide a larger dynamical exponent \(z\). In that case, the clean critical point is replaced by a line of finite-disorder critical points [dotted boundary line in Fig. 2(a)]. Finally, at the multi-critical point, the approximation of weak \(A\)-couplings completely breaks down and the most general theory contains a random-mass term. Therefore, this critical point is of infinite-randomness type.
## IV The strong-disorder renormalization-group method
In this section, we develop a strong-disorder renormalization-group (SDRG) method suitable for studying the long-distance physics of the Hamiltonian (1) for \(p=1\) and \(2\) and with random coupling constants. It is an energy-based RG method where strongly coupled degrees of freedom are locally decimated out hierarchically. Namely, we search for the most strongly coupled local degrees of freedom and freeze them in their local ground state. The couplings between the remaining degrees of freedom are renormalized perturbatively. This procedure becomes more and more accurate if the local energy scales become more and more disordered (broadly distributed); in that case, the perturbative renormalization becomes more accurate after each RG decimation step. This method was originally devised for conventional spin-1/2 models [74; 75; 76] and later on generalized to many other models. For a review, see Refs. [67; 66].
### Case \(p=1\)
#### iv.1.1 The decimation procedure
To start, let us consider the case \(p=1\) in which the model Hamiltonian (1) simplifies to
\[H=-\sum_{j}\lambda_{j}h_{j}=-\sum_{j}\lambda_{j}\sigma_{j}^{x}\sigma_{j+1}^{z}. \tag{14}\]
For simplicity, we disregard the boundary conditions. Although (14) and the random transverse-field Ising chain are distinct, they share (apart from global degeneracies) the same eigenenergies.
Following the SDRG philosophy, we search for the largest local energy scale \(\Omega=\max\left\{\left|\lambda_{j}\right|\right\}\), say \(\left|\lambda_{2}\right|\). We then project the Hamiltonian onto the low-energy sector of \(H_{0}=-\lambda_{2}h_{2}=-\lambda_{2}\sigma_{2}^{x}\sigma_{3}^{z}\). Denoting \(\sigma_{i}^{z}\left|\uparrow_{i}\right\rangle=\left|\uparrow_{i}\right\rangle\), \(\sigma_{i}^{z}\left|\downarrow_{i}\right\rangle=-\left|\downarrow_{i}\right\rangle\), \(\sigma_{i}^{x}\left|\rightarrow_{i}\right\rangle=\left|\rightarrow_{i}\right\rangle\), and \(\sigma_{i}^{x}\left|\leftarrow_{i}\right\rangle=-\left|\leftarrow_{i}\right\rangle\), then the ground-state sector of \(H_{0}\) is spanned by \(\left|\pm\right\rangle=(\left|\rightarrow_{2},\uparrow_{3}\right\rangle\pm\left|\leftarrow_{2},\downarrow_{3}\right\rangle)/\sqrt{2}\), if \(\lambda_{2}>0\), and \(\left|\pm\right\rangle=(\left|\rightarrow_{2},\downarrow_{3}\right\rangle\pm\left|\leftarrow_{2},\uparrow_{3}\right\rangle)/\sqrt{2}\), otherwise. As a result, the projection can be interpreted as a replacement of the spins \(\sigma_{2}\) and \(\sigma_{3}\) by an effective spin-1/2 degree of freedom \(\tilde{\sigma}\) which is defined by \(\tilde{\sigma}^{z}\left|\pm\right\rangle=\pm\left|\pm\right\rangle\). Notice in addition that \(\left\langle\pm\left|h_{2}\right|\pm\right\rangle=\left\langle\pm\left|\sigma_{2}^{x}\sigma_{3}^{z}\right|\pm\right\rangle=\mathrm{sign}\left(\lambda_{2}\right)\neq 0\).
The effective system Hamiltonian is obtained by treating \(H_{1}=-\lambda_{1}h_{1}-\lambda_{3}h_{3}\) as a perturbation to \(H_{0}\). In second order of perturbation theory, we find that
\[\tilde{H_{1}}=-\tilde{\lambda}\tilde{h}+\mathrm{const},\;\mathrm{with}\;\tilde {\lambda}=\frac{\lambda_{1}\lambda_{3}}{\Omega} \tag{15}\]
and \(\tilde{h}=\sigma_{1}^{\mathrm{x}}\tilde{\sigma}^{\mathrm{z}}\sigma_{4}^{\mathrm{z}}\). For our purposes, the constant term is harmless and can be disregarded.
Notice that \(\tilde{h}\) is now a three-spin interaction. However, since \(\tilde{\sigma}\) appears only in \(\tilde{h}\), it is a local gauge variable whose role is simply to double the degeneracy of the spectrum. The SDRG decimation procedure (15) can be straightforwardly generalized to operators involving an arbitrary number of "internal" degrees of freedom since the algebra (3) is preserved.
Alternatively, the additional degeneracy induced by the effective internal spin \(\tilde{\sigma}\) can be interpreted as if the renormalized chain is, actually, two decoupled new chains. In the first one, \(\tilde{\sigma}\) is fixed in the state \(|\tilde{+}\rangle\) (the original spins 2 and 3 fixed in the ground state \(|+\rangle\) of \(H_{0}\)) and the corresponding renormalized Hamiltonian is simply \(\tilde{H}_{1}=-\tilde{\lambda}\sigma_{1}^{\mathrm{x}}\sigma_{4}^{\mathrm{z}}\), while in the second one \(\tilde{\sigma}\) is fixed at \(|\tilde{-}\rangle\) (the original spins frozen in the state \(|-\rangle\)) and \(\tilde{H}_{1}=+\tilde{\lambda}\sigma_{1}^{\mathrm{x}}\sigma_{4}^{\mathrm{z}}\). These two chains have the same spectrum and the subsequent SDRG decimations are identical (apart from the signs of the renormalized couplings which, for our purposes, are not important).
Thus, \(\tilde{h}\) can be simplified back to a two-spin interaction at the expense of dealing with two "twin" renormalized chains, the only difference between them being the sign of the renormalized coupling constant \(\tilde{\lambda}\).
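As a sanity check of the decimation rule (15) (our own check, with arbitrarily chosen couplings), one can exactly diagonalize the three-operator problem \(H_{0}+H_{1}\) on four spins and compare its lowest excitation gap with the SDRG prediction \(2\tilde{\lambda}=2\lambda_{1}\lambda_{3}/\lambda_{2}\):

```python
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]]); sz = np.array([[1., 0.], [0., -1.]]); id2 = np.eye(2)

def op(paulis, n=4):
    """Tensor product on n spins; paulis = {0-based site index: single-site matrix}."""
    return reduce(np.kron, [paulis.get(i, id2) for i in range(n)])

lam1, lam2, lam3 = 0.11, 1.0, 0.17        # lam2 is the dominant coupling
H = -(lam1 * op({0: sx, 1: sz}) + lam2 * op({1: sx, 2: sz}) + lam3 * op({2: sx, 3: sz}))
E = np.linalg.eigvalsh(H)
gap = E[E > E[0] + 1e-8][0] - E[0]        # lowest excitation above the degenerate ground level

print("exact lowest gap       :", gap)
print("SDRG value 2*l1*l3/l2  :", 2 * lam1 * lam3 / lam2)
```

The two numbers agree up to higher-order corrections in \(\lambda_{1,3}/\lambda_{2}\), as expected from second-order perturbation theory.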
In order to use this simplification and keep track of the degeneracies, we are then required to introduce the quantity \(g_{\Omega}\). It measures the total number of gauge (extra) spin-1/2 degrees of freedom at the energy scale \(\Omega\). Clearly, after each decimation it renormalizes to
\[g_{\Omega}\to g_{\Omega}+1, \tag{16}\]
with the initial condition \(g_{\Omega_{0}}=0\). The total number of effective degrees of freedom in the chain \(N_{\Omega}\) renormalizes to
\[N_{\Omega}\to N_{\Omega}-2, \tag{17}\]
with the initial condition \(N_{\Omega_{0}}=L\). Notice that \(N_{\Omega}+2g_{\Omega}=L\) is a constant throughout the RG flow.
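For illustration (again not the authors' implementation), the decimation rule (15) can be iterated numerically on a chain of random couplings; the sketch below uses a ring of couplings to avoid boundary bookkeeping, and the disorder average of the logarithm of the last surviving scale grows in magnitude roughly as \(L^{1/2}\) at criticality, consistent with Eq. (10).

```python
import numpy as np

def sdrg_final_scale(lams):
    """Iterate lam_tilde = lam_{j-1}*lam_{j+1}/Omega (Eq. (15)) on a ring of couplings."""
    lams = [abs(x) for x in lams]
    while len(lams) > 2:
        j = int(np.argmax(lams))
        lams = lams[j - 1:] + lams[:j - 1]           # rotate so the strongest coupling sits at index 1
        lams = [lams[0] * lams[2] / lams[1]] + lams[3:]
    return min(lams)                                 # roughly the finite-size gap scale

rng = np.random.default_rng(1)
for L in (64, 128, 256, 512):
    logs = [np.log(sdrg_final_scale(rng.uniform(0.0, 1.0, L))) for _ in range(200)]
    print(L, np.mean(logs))   # magnitude grows roughly like L**0.5 (activated scaling)
```

Identically distributed couplings on all sites correspond to the critical point \(\overline{\delta}=0\) of this \(p=1\) chain.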
In Fig. 3(a) we sketch the decimation procedure (15)-(17). Regarding the local energy scales, the decimation procedure (15) is identical to that of the random spin-1/2 XX chain [33, 85] and that of the random transverse-field Ising chain [29]. This is not a surprise since the free-particle spectra of all these models are the same.
Finally, it is instructive to recast the decimation procedure in the Hamiltonian space as shown in Fig. 3(b). The \(j\)th circle represents the local energy operator \(h_{j}\). A line connecting two circles means that the corresponding operators anti-commute with each other. Disconnected operators act on different Hilbert spaces and, thus, trivially commute with each other. In the decimation procedure, \(h_{2}\) and the "neighboring" operators are replaced by \(\tilde{h}\), which anti-commutes with the neighboring operators \(h_{0}\) and \(h_{4}\). The algebra structure is, thus, preserved along the SDRG flow.
#### iv.1.2 The SDRG flow
Since the SDRG decimation rule (15) is the same as that for the spin-1/2 XX chain, the renormalization-group flow of the coupling constants is already known [33, 85, 29]. Let
\[\delta\equiv\frac{\overline{\ln\lambda_{\mathrm{odd}}}-\overline{\ln\lambda_ {\mathrm{even}}}}{\sigma_{\ln\lambda_{\mathrm{odd}}}^{2}+\sigma_{\ln\lambda_{ \mathrm{even}}}^{2}}, \tag{18}\]
with \(\sigma_{x}^{2}=\overline{x^{2}}-\overline{x}^{2}\) being the variance of \(x\). For \(\delta\gg 1\), the SDRG flow is towards a stable fixed point in which only the odd couplings are decimated. This implies that only the even couplings are renormalized and, thus, are much smaller than the odd ones. This corresponds to a phase in which \(|\langle h_{2i-1}\rangle|>|\langle h_{2i}\rangle|\). In the spin-1/2 XX chain, this corresponds to the odd-dimer phase where spin singlets are formed over the odd bonds, i.e., \(|\langle\mathbf{S}_{2i-1}\cdot\mathbf{S}_{2i}\rangle|>|\langle\mathbf{S}_{2i}\cdot\mathbf{S}_{2i+1}\rangle|\). The correspondence of this phase in the transverse-field Ising chain is not so straightforward since the spin-1/2 XX chain maps into two independent random transverse-field Ising chains [33]. In the first one, the odd couplings of our model play the role of the transverse fields of the Ising chain. In the second, these roles are exchanged. Thus, the phase \(|\langle h_{2i-1}\rangle|>|\langle h_{2i}\rangle|\) corresponds to the paramagnetic (ferromagnetic) phase in the first (second) quantum Ising chain.
If \(0<\delta\ll 1\), the system is in the associated Griffiths phase. Typically, \(|\langle h_{\mathrm{odd}}\rangle|>|\langle h_{\mathrm{even}}\rangle|\), but there are some "defects" inside which \(|\langle h_{\mathrm{odd}}\rangle|<|\langle h_{\mathrm{even}}\rangle|\). These defects form the rare regions discussed in Sec III. Surrounding a rare region, there are two (domain-wall) spins weakly coupled. As a result of their weak coupling, the typical and mean values of the finite-size gap vanish \(\sim L^{-z}\) [Eq. (9)], which defines an off-critical dynamical (Griffiths) exponent \(z\). As the critical point is approached, this exponent diverges as \(z\sim|\delta|^{-1}\).
We now further discuss the effects of the RRs in the Griffiths phase through the lens of the SDRG method. Recall that, in the transverse-field Ising chain, the origin of the gapless modes in, say, the paramagnetic phase is due to the RRs which are locally in the ferromagnetic phase and fluctuate between the two ferromagnetic states. As this is a coherent tunneling process involving many spins, the associated relaxation time increases exponentially with the RR's volume. In the XX spin-1/2 chain, these RRs correspond to patches which are locally in the even-dimer phase while the bulk is in the odd-dimer phase. The two domain walls delimiting a RR are, in zeroth order of approximation, simply free (unpaired) spins. To lowest non-vanishing order in perturbation theory, these
Figure 3: Decimation scheme for the Hamiltonian (1) in the \(p=1\) case. In (a), the decimation is sketched in the real space with points and lines representing spin sites and coupling constants, respectively. In (b), the decimation is sketched in the Hamiltonian space with circles representing the local energy operators and the lines connecting anti-commuting operators.
spins (say, at sites 1 and \(\ell\)) actually interact via an effective coupling constant equal to \(\tilde{\lambda}_{\ell}=\tilde{\lambda}_{1}\tilde{\lambda}_{3}\dots\tilde{\lambda}_{\ell-1}/\tilde{\lambda}_{2}\tilde{\lambda}_{4}\dots\tilde{\lambda}_{\ell-2}\) [which could also be obtained by a successive iteration of Eq. (15)]. Thus, the gap of a finite chain is simply the excitation energy of the most weakly coupled spins in these domain walls: \(\min\{|\tilde{\lambda}_{\ell}|\}\). Notice that, as expected, \(\tilde{\lambda}_{\ell}\) vanishes exponentially with the RR size, implying an exponentially large relaxation time. An analogous physical picture appears in our model. For simplicity, consider a compact RR in which the local even couplings \(\tilde{\lambda}_{2},\,\tilde{\lambda}_{4},\dots,\,\tilde{\lambda}_{\ell-2}\) (\(\ell\) even) are greater than the local odd couplings \(\tilde{\lambda}_{1},\tilde{\lambda}_{3},\dots,\,\tilde{\lambda}_{\ell-1}\). In that case, after decimating the even operators \(h_{2i}\) in that RR, an effective operator linking spins 1 and \(\ell\) appears, \(\tilde{h}=-\tilde{\lambda}_{\ell}\sigma_{1}^{x}\sigma_{\ell}^{z}\), with \(\tilde{\lambda}_{\ell}=\tilde{\lambda}_{1}\tilde{\lambda}_{3}\dots\tilde{\lambda}_{\ell-1}/\tilde{\lambda}_{2}\tilde{\lambda}_{4}\dots\tilde{\lambda}_{\ell-2}\) (disregarding an unimportant sign). Thus, \(\langle h_{2}\rangle=\langle h_{4}\rangle=\dots=\langle h_{\ell-2}\rangle=\pm 1\) and a longer-ranged correlation between spins 1 and \(\ell\) develops, \(\langle\sigma_{1}^{x}\sigma_{\ell}^{z}\rangle\neq 0\). Consequently, a low-energy mode arises with excitation energy of order \(\tilde{\lambda}_{\ell}\). Evidently, by duality, there are analogous conventional and Griffiths phases for \(\delta<0\).
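The exponential smallness of this domain-wall coupling can be made concrete with a short numerical sketch (Python); the disorder windows below are illustrative assumptions, not the distributions used elsewhere in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def rare_region_coupling(lam_odd, lam_even):
    """Effective coupling lam_1*lam_3*...*lam_{l-1} / (lam_2*lam_4*...*lam_{l-2})
    between the two domain-wall spins of a compact rare region, evaluated in
    log space for numerical stability."""
    return np.exp(np.log(lam_odd).sum() - np.log(lam_even).sum())

# inside the rare region the even couplings dominate the odd ones, so the
# effective coupling (hence the local excitation energy) shrinks exponentially
# with the rare-region size l
for half_l in (5, 10, 20, 40):
    lam_odd = rng.uniform(0.05, 0.5, half_l)       # l/2 odd couplings (weak)
    lam_even = rng.uniform(0.5, 1.5, half_l - 1)   # l/2 - 1 even couplings (strong)
    print(2 * half_l, rare_region_coupling(lam_odd, lam_even))
```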
At criticality \(\delta=0\), the fixed point is universal in the sense that critical exponents do not depend on the details of the disorder distributions. For instance, the finite-size gap distribution for sufficiently large systems is [86]
\[\mathcal{P}_{\text{SDRG}}\left(\eta\right) = \frac{4}{\sqrt{\pi}}\sum_{k=0}^{\infty}\left(-1\right)^{k}\left(k +\frac{1}{2}\right)e^{-\eta^{2}\left(k+\frac{1}{2}\right)^{2}}, \tag{19}\] \[= \frac{4\pi}{\eta^{3}}\sum_{k=0}^{\infty}\left(-1\right)^{k}\left( k+\frac{1}{2}\right)e^{-\pi^{2}\left(k+\frac{1}{2}\right)^{2}/\eta^{2}},\]
where
\[\eta=\frac{\ln\left(2\Omega_{0}/\Delta\right)}{\sigma_{0}\left(L/2\right)^{\psi}}, \tag{20}\]
\(\sigma_{0}=\sqrt{\frac{1}{2}\sigma_{\ln\lambda_{\mathrm{odd}}}^{2}+\frac{1}{2}\sigma_{\ln\lambda_{\mathrm{even}}}^{2}}\), \(\Omega_{0}\) is the maximum value of \(\lambda\) in the bare system, \(\Delta\) is the finite-size gap, and \(\psi=1/2\) is the universal tunneling exponent. The distribution \(\mathcal{P}_{\text{SDRG}}\) is \(L\) independent for \(L\gg\gamma_{0}^{-1}=\pi/8\sigma_{\ln\lambda}^{2}\), the inverse of the Lyapunov exponent [87], which plays the role of a clean-dirty crossover length [88; 89]. The relation between length and energy scales follows from the scaling variable in Eq. (20), from which the activated dynamical scaling \(\overline{\ln\Delta}\sim-L^{\psi}\) in Eq. (10) follows.
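As a consistency check, the two series representations in Eq. (19) can be evaluated numerically; the following Python sketch (the truncation at 200 terms is an arbitrary choice) shows that they agree over a wide range of the scaling variable \(\eta\), the first form converging fast for large \(\eta\) and the second for small \(\eta\).

```python
import numpy as np

def p_sdrg_direct(eta, kmax=200):
    """First series in Eq. (19); converges quickly for large eta."""
    k = np.arange(kmax)
    return (4.0 / np.sqrt(np.pi)) * np.sum(
        (-1.0) ** k * (k + 0.5) * np.exp(-eta**2 * (k + 0.5) ** 2))

def p_sdrg_dual(eta, kmax=200):
    """Second (dual) series in Eq. (19); converges quickly for small eta."""
    k = np.arange(kmax)
    return (4.0 * np.pi / eta**3) * np.sum(
        (-1.0) ** k * (k + 0.5) * np.exp(-np.pi**2 * (k + 0.5) ** 2 / eta**2))

for eta in (0.5, 1.0, 2.0, 4.0):
    print(f"eta = {eta}: {p_sdrg_direct(eta):.6e} vs {p_sdrg_dual(eta):.6e}")
```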
#### iv.1.3 Thermodynamics
The thermodynamic observables follow straightforwardly. For instance, the low-temperature entropy is simply \(S\sim\frac{1}{L}\left(N_{T}+g_{T}\right)\ln 2\), which counts the number \(N_{T}\) of active (undecimated) spins at the energy scale \(\Omega=T\) plus the number \(g_{T}\) of two-fold ground-state degeneracies left behind by the decimated pairs. The reasoning is the following [76]. At low temperatures \(T\ll\Omega_{0}\), the distribution of effective coupling constants is singular and, thus, the majority of the couplings are much smaller than the maximum energy scale \(\Omega=T\). Hence, the active spins are essentially free. At criticality (\(\delta=0\)), \(N_{\Omega}\approx L/\left(1+\ln\left(\Omega_{0}/\Omega\right)/\sigma_{\ln\lambda}^{2}\right)^{1/\psi}\) [90; 91] and \(g_{\Omega}=\frac{1}{2}\left(L-N_{\Omega}\right)\). Then,
\[S=\frac{1}{2}\left(1+\left(\frac{\sigma_{\ln\lambda}^{2}}{\ln\left(\frac{ \Omega_{0}}{T}\right)}\right)^{1/\psi}\right)\ln 2. \tag{21}\]
Notice the residual zero-temperature entropy coming from the exponentially large ground-state degeneracy.
The specific heat follows from \(C=T\partial S/\partial T\). In the low-temperature limit,
\[C\sim\ln^{-\left(1+1/\psi\right)}\left(\Omega_{0}/T\right), \tag{22}\]
which is similar to that of the random spin-1/2 XX chain.
#### iv.1.4 Ground-state degeneracy and the spectrum of other models
It is interesting to further explore the connection between the \(p=1\) model, the spin-1/2 XX chain, and the transverse-field Ising chain in view of the SDRG decimation procedure.
Within the SDRG framework, one can obtain the whole spectrum of the transverse-field Ising chain in the following way [92]. When performing the decimation procedure, one can either search for the ground state (and then project onto the ground state of the local Hamiltonian) or, alternatively, search for the excited states (and then project onto the excited state of the local Hamiltonian). If one projects onto the excited states, the effective coupling or field picks up a different sign, but the magnitude is the same [see Eq. (15)]. However, for decimation purposes, all remains the same as the sign is irrelevant. Performing the decimation for all possibilities of ground and excited states, one constructs the entire spectrum of the Ising chain: \(2^{L/2}\) states in total. (Recall that the associated Ising chain has half of the sites, but the same number of operators in the Hamiltonian.) Notice that this is equivalent to considering two "twin" renormalized chains as previously outlined. Thus, all ground states of the \(p=1\) model are equivalent to the entire spectrum of the transverse-field Ising chain.
The connection to the XX model is similar. Here, the local Hamiltonian has 3 energy levels: one corresponds to the spin-0 singlet state, another to the zero-magnetization spin-1 triplet state, and the remaining one is doubly degenerate, corresponding to the \(\pm 1\)-magnetization spin-1 states. If one disregards the doubly degenerate \(\pm 1\)-magnetization states, the decimation procedure recovers that of the Ising chain. Thus, the many ground states of the \(p=1\) model correspond to a small fraction of the states in the spectrum of the XX spin-1/2 chain.
We now inquire about the spectrum of the \(p=1\) chain. When projecting the system onto the local excited states, the only difference with respect to projecting onto the local ground state is a sign picked up by the effective energy scales and a flip of one of the spins. Thus, the entire spectrum can be easily related to the states in the ground-state manifold. There will be \(2^{\frac{L}{2}}\) states, each of which is \(2^{\frac{L}{2}}\)-fold degenerate. Precisely, all states can be represented by
\[\otimes_{j=1}^{L/2}\left|\phi_{j}\right\rangle, \tag{23}\]
where \(\left|\phi_{j}\right\rangle\) is either \(\left|\phi_{j_{+}}\right\rangle=(\left|\rightarrow_{j_{1}},\uparrow_{j_{2}}\right\rangle\pm\left|\leftarrow_{j_{1}},\downarrow_{j_{2}}\right\rangle)/\sqrt{2}\) or \(\left|\phi_{j_{-}}\right\rangle=(\left|\rightarrow_{j_{1}},\downarrow_{j_{2}}\right\rangle\pm\left|\leftarrow_{j_{1}},\uparrow_{j_{2}}\right\rangle)/\sqrt{2}\). Here, \(\{j_{1},j_{2}\}\) is the pair of spin sites decimated together in the \(j\)th decimation, which is the same pair for all states. The precise state \(\left|\phi_{j}\right\rangle\) (whether \(\left|\phi_{j_{+}}\right\rangle\) or \(\left|\phi_{j_{-}}\right\rangle\), and with the \(+\) or \(-\) sign) depends on the sign of the coupling constant and on whether the projection was made onto the local ground or excited states. All of these are determined by the history of the decimation procedure.
#### iv.1.5 Spin-spin correlations
In the SDRG approach, any of the \(2^{L/2}\) ground states of \(H_{0}\) is simply a product state as specified in (23). In that case, the spins in the \(j\)th pair are strongly correlated and dominate the average value of the spin-spin correlation. The probability that a pair of length \(r=j_{2}-j_{1}\) is formed along the SDRG flow is proportional to \(r^{-2}\) for sufficiently large \(r\) [33, 91]. Hence, the mean correlation function decays only algebraically,
\[\overline{\left|\left\langle\sigma_{i}^{x}\sigma_{i+r}^{z}\right\rangle \right|}\sim r^{-\eta}, \tag{24}\]
with universal exponent \(\eta=2\). (Here, \(\overline{\cdots}\) denotes the disorder average.) The typical value of the correlation, on the other hand, decays stretched exponentially fast [33],
\[\overline{\ln\left|\left\langle\sigma_{i}^{x}\sigma_{i+r}^{z}\right\rangle\right|}\sim-r^{\psi}. \tag{25}\]
The distribution of the values of the correlation function is also known. For that, we refer the reader to Refs. [89, 18].
The off-critical (\(\delta\neq 0\)) correlations are also known [29, 33, 80]. They decay exponentially fast, \(\sim e^{-r/\xi}\), with a correlation length that diverges as
\[\xi\sim\delta^{-\nu}, \tag{26}\]
where \(\nu=2\) for the mean correlations and \(\nu=1\) for the typical correlations.
Interestingly, all those results also apply to any state, provided that only the magnitude of the correlations is concerned.
### Case \(p=2\)
#### iv.2.1 The usual SDRG method
We now derive the SDRG decimation rules for the Hamiltonian (1) with \(p=2\).
Following the usual SDRG recipe, we treat \(H_{1}=-\lambda_{3}h_{3}-\lambda_{4}h_{4}-\lambda_{6}h_{6}-\lambda_{7}h_{7}\) (where \(h_{j}=\sigma_{j}^{x}\sigma_{j+1}^{z}\sigma_{j+2}^{z}\)) as a perturbation to \(H_{0}=-\lambda_{5}h_{5}\). (We are assuming that the energy cut-off is \(\Omega=\left|\lambda_{5}\right|\).) Up to second order in perturbation theory, the renormalized Hamiltonian is \(\tilde{H}_{1}=P_{0}H_{1}^{2}P_{0}/\left(-2\left|\lambda_{5}\right|\right)\), where \(P_{0}\) is the projector onto the ground-state subspace of \(H_{0}\). The direct terms in the square of \(H_{1}\) are unimportant constants since \(h_{i}^{2}=1\). The cross terms are proportional to the anti-commutators \(\left\{h_{i},h_{j}\right\}\). Thus, many of those vanish because of the algebra (3). The surviving ones are those involving operators which commute with each other. Thus, \(\tilde{H}_{1}\) simplifies to \(\tilde{H}_{1}=-\tilde{\lambda}_{C}\tilde{h}_{C}-\tilde{\lambda}_{AC}\tilde{h}_{AC}-\tilde{\lambda}_{A}\tilde{h}_{A}\) where the operators are
\[\tilde{h}_{C} = P_{0}h_{3}h_{6}P_{0}=\sigma_{3}^{x}\sigma_{4}^{z}\tilde{\sigma} ^{x}\sigma_{8}^{z},\] \[\tilde{h}_{AC} = P_{0}h_{3}h_{7}P_{0}=\sigma_{3}^{x}\sigma_{4}^{z}\tilde{\sigma} ^{x}\sigma_{8}^{x}\sigma_{9}^{z}, \tag{27}\] \[\tilde{h}_{A} = P_{0}h_{4}h_{7}P_{0}=-\sigma_{4}^{x}\tilde{\sigma}^{y}\tilde{ \tau}^{y}\sigma_{8}^{z}\sigma_{9}^{z},\]
and the renormalized couplings are
\[\tilde{\lambda}_{C}=\frac{\lambda_{3}\lambda_{6}}{\Omega},\;\tilde{\lambda}_{ AC}=\frac{\lambda_{3}\lambda_{7}}{\Omega},\;\text{and}\;\tilde{\lambda}_{A}= \frac{\lambda_{4}\lambda_{7}}{\Omega}. \tag{28}\]
Here, the effective degrees of freedom \(\tilde{\sigma}\) and \(\tilde{\tau}\) span the ground-state subspace \(\{\left|\rightarrow\uparrow\uparrow\right\rangle,\;\left|\rightarrow\downarrow\downarrow\right\rangle,\;\left|\leftarrow\downarrow\uparrow\right\rangle,\;\left|\leftarrow\uparrow\downarrow\right\rangle\}\) for \(\lambda_{5}>0\), otherwise the set of states is \(\{\left|\leftarrow\uparrow\uparrow\right\rangle,\;\left|\leftarrow\downarrow\downarrow\right\rangle,\;\left|\rightarrow\downarrow\uparrow\right\rangle,\;\left|\rightarrow\uparrow\downarrow\right\rangle\}\). These states are identified with the four basis states of \(\tilde{\sigma}^{z}\otimes\tilde{\tau}^{z}\).
The decimation procedure (27) and (28) is represented in the Hamiltonian space in Fig. 4(a). Unlike the \(p=1\) case [see Fig. 3(b)], the structure of the anti-commuting algebra (3) of the original Hamiltonian is not preserved and new operators (involving \(\sigma^{y}\)) appear.
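The algebra that selects which cross terms survive the projection can be checked directly with a small Python sketch that builds the three-spin operators \(h_{j}=\sigma_{j}^{x}\sigma_{j+1}^{z}\sigma_{j+2}^{z}\) on a short open chain; the chain length below is chosen merely to accommodate the operators involved in the decimation of \(h_{5}\).

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def h_op(j, n_sites):
    """h_j = sigma^x_j sigma^z_{j+1} sigma^z_{j+2} (1-indexed sites, p = 2)."""
    ops = [id2] * n_sites
    ops[j - 1], ops[j], ops[j + 1] = sx, sz, sz
    return reduce(np.kron, ops)

def anticommute(a, b):
    return np.allclose(a @ b + b @ a, 0.0)

n_sites = 9
h = {j: h_op(j, n_sites) for j in range(1, n_sites - 1)}  # h_1 ... h_7

# h_5 anti-commutes with its four neighbors h_3, h_4, h_6, h_7 and commutes
# with the more distant ones
for j in sorted(h):
    rel = "anti-commutes" if anticommute(h[5], h[j]) else "commutes"
    print(f"h_5 {rel} with h_{j}")

# among the four neighbors, only mutually commuting pairs survive the
# projection, reproducing the structure of Eq. (27)
neighbors = [3, 4, 6, 7]
pairs = [(a, b) for i, a in enumerate(neighbors) for b in neighbors[i + 1:]
         if not anticommute(h[a], h[b])]
print("surviving cross terms:", pairs)   # -> [(3, 6), (3, 7), (4, 7)]
```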
#### iv.2.2 The block SDRG method
The appearance of the new operator \(\tilde{h}_{AC}\) in (27) is worrisome for the practical implementation of the method. Firstly, it prevents us from projecting the renormalized system onto the ground states of the new effective spin operators \(\tilde{\sigma}\) and \(\tilde{\tau}\) as in the \(p=1\) case. Secondly, and more importantly, it requires a generalization of the SDRG procedure to take into account these new operators. While this is possible, it is cumbersome; surprisingly, we found a much simpler route guided by the algebra of the energy-density operators (3). We generalize the "usual" SDRG approach reported above to (for lack of a better terminology) a "block" SDRG approach. In the latter, we consider a larger unperturbed Hamiltonian (and thus, a larger Hilbert space) when performing the decimation procedure (see details in App. A). The size of the block is the
Figure 4: Strong-disorder RG decimation scheme for the Hamiltonian (1) in the \(p=2\) case. The decimation is depicted in the Hamiltonian space analogous to Fig. 3(b), where circles represent the local energy operators and lines connect operators which anti-commute. In (a), the SDRG method is implemented in its simpler form (usual SDRG), where a newly generated operator \(\tilde{h}_{AC}\) disrupts the algebra (3). In (b), the SDRG method is implemented in a slightly more general fashion (block SDRG) which preserves the algebra (3) in the renormalized chain.
maximum number of operators which anti-commute among themselves.
When decimating the largest local energy scale \(\Omega=|\lambda_{5}|\), instead of considering \(H_{0}=-\lambda_{5}h_{5}\) (which is a \(B\)-type operator), we consider a larger block involving the \(A\)- and \(C\)-type "nearest-neighbor" operators, i.e., \(H_{0}=-\lambda_{4}h_{4}-\lambda_{5}h_{5}-\lambda_{6}h_{6}\). This is the largest block which encompasses \(h_{5}\) and still has only two energy levels (as \(h_{5}\)),4 i.e., the eigenenergies of \(H_{0}\) are \(\pm\sqrt{\lambda_{4}^{2}+\lambda_{5}^{2}+\lambda_{6}^{2}}\). Then, we project \(H-H_{0}\) onto the ground-state subspace of \(H_{0}\). The degeneracy of the ground state is \(2^{4}\) and, thus, it can be spanned by four effective spin-1/2 degrees of freedom \(\bar{\sigma}_{a}\), \(\bar{\sigma}_{b}\), \(\bar{\sigma}_{c}\), and \(\bar{\sigma}_{d}\). In practice, we have to project \(\lambda_{2}h_{2}\), \(\lambda_{3}h_{3}\), \(\lambda_{7}h_{7}\) and \(\lambda_{8}h_{8}\). The result is that, in the regime \(|\lambda_{5}|\gg|\lambda_{4,6}|\), the renormalized operators are
Footnote 4: Actually, there are two other blocks: \(-\lambda_{3}h_{3}-\lambda_{4}h_{4}-\lambda_{5}h_{5}\) and \(-\lambda_{5}h_{5}-\lambda_{6}h_{6}-\lambda_{7}h_{7}\). However, only the symmetric one (with respect to \(\lambda_{5}\)) provides the convenient SDRG decimation rules.
\[\tilde{h}_{2}=\sigma_{2}^{x}\sigma_{3}^{z}\bar{\sigma}_{a}^{z}\left(\sin \theta\bar{\sigma}_{b}^{x}\bar{\sigma}_{d}^{z}+\cos\theta\bar{\sigma}_{b}^{z} \right)\bar{\sigma}_{c}^{z}, \tag{29}\]
where
\[\cos\theta=\frac{-\text{sign}\left(\lambda_{5}\right)|\lambda_{4}|}{\sqrt{ \lambda_{4}^{2}+\lambda_{6}^{2}}}\text{ and }\sin\theta=\frac{\lambda_{6}}{\sqrt{\lambda_{4}^{2}+\lambda_{6}^{2}}}, \tag{30}\]
\[\tilde{h}_{8}=-\text{sign}\left(\lambda_{5}\right)\left(\cos\theta\bar{ \sigma}_{d}^{x}+\sin\theta\bar{\sigma}_{b}^{y}\bar{\sigma}_{d}^{y}\right) \sigma_{9}^{z}\sigma_{10}^{z}, \tag{31}\]
\[\tilde{h}_{3}=\sigma_{3}^{x}\bar{\sigma}_{a}^{y}\bar{\sigma}_{b}^{y}\bar{ \sigma}_{c}^{z}\bar{\sigma}_{d}^{z},\text{ and }\tilde{h}_{7}=\bar{\sigma}_{c}^{x}\bar{\sigma}_{d}^{z} \sigma_{9}^{z}. \tag{32}\]
The corresponding renormalized coupling constants are \(\tilde{\lambda}_{2}=\lambda_{2}\), \(\tilde{\lambda}_{8}=\lambda_{8}\),
\[\tilde{\lambda}_{3}=-\text{sign}\left(\lambda_{4}\right)\frac{\lambda_{3} \lambda_{6}}{\Omega},\text{ and }\tilde{\lambda}_{7}=\frac{|\lambda_{4}|\lambda_{7}}{\Omega}. \tag{33}\]
Surprisingly, the renormalized operators are different in character from those in Eq. (27). Interestingly, they preserve the algebra structure (3) of the original system, as depicted in Fig. 4(b), if we identify \(\tilde{h}_{2}\to h_{2}\), \(\tilde{h}_{3}\to\tilde{h}_{C}\), \(\tilde{h}_{7}\to\tilde{h}_{A}\), and \(\tilde{h}_{8}\to h_{8}\). Furthermore, the "hybrid" operator \(\tilde{h}_{AC}\) [generated from the usual SDRG approach, see Fig. 4(a)] is not generated in the block SDRG approach [see Fig. 4(b)]. Instead, there are only "pure" operators except for the \(BC\)-type operator in (29) and the \(AB\)-type operator in (31). This is very convenient because the hybrid parts can be neglected at strong-disorder fixed points (corresponding to situations near and at the dashed transitions in Fig. 2). The reasoning is the following. Since the effective disorder at and near the transition is very large (which we show _a posteriori_), very likely either \(|\lambda_{4}|\gg|\lambda_{6}|\) or \(|\lambda_{4}|\ll|\lambda_{6}|\). In the former case, \(\sin\theta\approx 0\) in Eq. (30), and, thus, the hybrid-type operators can be neglected. The renormalized \(B\)-type operators then simplify to \(\tilde{h}_{2}=-\text{sign}\left(\lambda_{5}\right)\sigma_{2}^{x}\sigma_{3}^{z}\bar{\sigma}_{a}^{z}\bar{\sigma}_{b}^{z}\bar{\sigma}_{c}^{z}\) and \(\tilde{h}_{8}=\bar{\sigma}_{d}^{x}\sigma_{9}^{z}\sigma_{10}^{z}\). Taking \(\sin\theta=0\) and \(|\cos\theta|=1\), and noticing that the new effective spin-1/2 degrees of freedom \(\bar{\sigma}_{a}\) and \(\bar{\sigma}_{b}\) appear only in combinations that commute with each other (\(\bar{\sigma}_{a}^{z}\bar{\sigma}_{b}^{z}\) in \(\tilde{h}_{2}\) and \(\bar{\sigma}_{a}^{y}\bar{\sigma}_{b}^{y}\) in \(\tilde{h}_{3}\)), we can project the resulting renormalized Hamiltonian onto the common eigenstates of \(\bar{\sigma}_{a}^{z}\bar{\sigma}_{b}^{z}\) and \(\bar{\sigma}_{a}^{y}\bar{\sigma}_{b}^{y}\). As a result, we will have four "twin" renormalized systems. They are
\[\tilde{H}_{1}=\pm\lambda_{2}h_{2}\pm\tilde{\lambda}_{C}\tilde{h}_{C}-\tilde{ \lambda}_{A}\tilde{h}_{A}-\lambda_{8}h_{8}, \tag{34}\]
where \(h_{2}=\sigma_{2}^{x}\sigma_{3}^{z}\bar{\sigma}_{c}^{z}\), \(\tilde{h}_{C}=\sigma_{3}^{x}\bar{\sigma}_{c}^{z}\bar{\sigma}_{d}^{z}\), \(\tilde{h}_{A}=\bar{\sigma}_{c}^{x}\bar{\sigma}_{d}^{z}\sigma_{9}^{z}\), \(h_{8}=\bar{\sigma}_{d}^{x}\sigma_{9}^{z}\sigma_{10}^{z}\),
\[\tilde{\lambda}_{C}=\frac{\lambda_{3}\lambda_{6}}{\Omega},\ \text{ and }\ \tilde{\lambda}_{A}=\frac{|\lambda_{4}|\lambda_{7}}{\Omega}. \tag{35}\]
The decimation procedure (34) and (35) is schematically depicted in Fig. 4(b). By symmetry, an analogous decimation procedure is obtained in the case \(|\lambda_{4}|\ll|\lambda_{6}|\), with the exchanges \(\lambda_{2}\leftrightarrow\lambda_{8}\) and \(\tilde{\lambda}_{A}\leftrightarrow\tilde{\lambda}_{C}\), which is obtained after a convenient change in the definition of the effective operators \(\bar{\sigma}_{a}\), \(\bar{\sigma}_{b}\), \(\bar{\sigma}_{c}\), and \(\bar{\sigma}_{d}\).
The decimation procedure (34) and (35) [see Fig. 4(b)] is very convenient. It preserves the algebra structure (3) and the operators of the original Hamiltonian \(\tilde{h}_{i}=\sigma_{i}^{x}\sigma_{i+1}^{z}\sigma_{i+2}^{z}\). This procedure is a straightforward generalization of the decimation procedure of the \(p=1\) case in the following sense. For \(p=1\), a decimation of an \(A\)-type coupling implies the renormalization of the neighboring \(B\)-type couplings and vice-versa [see Eq. (15)]. For \(p=2\), due to triality, the decimation of a \(B\)-type operator implies the renormalization of the neighboring \(A\)- and \(C\)-type couplings [see Eq. (35)].
#### iv.2.3 On the equivalence between the usual and the block SDRG approaches
In the regime \(|\lambda_{5}|\gg|\lambda_{4,6}|\), there should be no difference between the usual and block SDRG approaches as the ground state of the block \(H_{0}=-\lambda_{4}h_{4}-\lambda_{5}h_{5}-\lambda_{6}h_{6}\) is simply that of the local \(H_{0}=-\lambda_{5}h_{5}\) furnished with trivial degeneracies. It is somewhat surprising that seemingly fundamentally different decimation procedures arise from these approaches. Thus, we inquire whether these two decimation procedures are really different.
In Ref. [39], the SDRG method was carried out for the antiferromagnetic Heisenberg model in zigzag and two-leg-ladder geometries. In both cases, it was shown that, in the early stages of the SDRG flow, further-neighbor interactions arise, just like what was found in the usual SDRG decimation [see Fig. 4(a)]. However, in the final stages of the SDRG flow, the renormalized geometry of the system always converged to a chain geometry. (Evidently, one had to disregard exceedingly small couplings that were generated along the flow.) This may be a general feature of quasi-one-dimensional critical or near-critical systems. The low-energy long-wavelength effective theory is that of a chain. Technically, couplings of "long" operators that connect exceedingly distant spins arise only after many renormalizations and, thus, are typically much smaller than those of "shorter" operators.
In sum, the longer-range character of the new hybrid operator \(\tilde{h}_{AC}\) in the usual SDRG approach suggests that it may be an irrelevant "operator" which vanishes in the latter stages of the SDRG flow. In the block SDRG approach, the new hybrid operators \(\tilde{h}_{AB}\) and \(\tilde{h}_{BC}\) could be clearly determined to be irrelevant near and at criticality, where the flow is towards strong disorder, a self-consistent assumption that we have yet to prove. Therefore, it is plausible that \(\tilde{h}_{AC}\) in the usual SDRG approach can be neglected for the same reasons. In that case, both the usual and block SDRG approaches are equivalent near and at criticality. This is a fundamental feature of the renormalization-group philosophy. Details on defining the coarse-grained operators should not matter in the long-wavelength regime.
#### iv.2.4 SDRG flow corresponding to conventional phases
Having derived the SDRG decimation rules, we now analyze the flow of the coupling constants.
Let us start by analyzing the simpler case corresponding to the flow towards the gapped phases. In this case, one type of coupling constant, say the \(C\)-type, is always greater than the others. Precisely, \(\min\left\{\lambda_{3i}\right\}>\max\left\{\lambda_{3i-1},\lambda_{3i-2}\right\}\). In that case, notice that only the bare \(h_{3i}\) operators are decimated. Therefore, the corresponding fixed point is non-critical and, thus, represents a phase: the \(h^{C}>h^{A,B}\) conventional phase in Fig. 2. Notice that both the usual and the block SDRG can be applied here since only the original \(h_{3i}\) operators are decimated. Thus, in the SDRG framework, the spectral gap is simply \(\Delta=2\min\left\{\lambda_{3i}\right\}\).
Evidently, by triality, there are two additional phases in which \(h^{A}>h^{B,C}\) and \(h^{B}>h^{A,C}\), as shown in Fig. 2. Finally, we call attention to the fact that, in the SDRG framework, these phases exist because a decimation of \(h_{i}\) renormalizes the neighboring interactions \(h_{j}\) with \(|j-i|\leq p=2\). Thus, for a generic value of \(p\) in the Hamiltonian (1), we expect, at least, \(p+1\) different phases.
#### iv.2.5 SDRG flow corresponding to conventional Griffiths phases and phase transitions between two phases
First, we analyze the case in which at least two of the three types of couplings are random, i.e., the SDRG flow associated to the transitions and Griffiths phases shown in Fig. 2(b) away from the multi-critical point.
Let us consider the case in which \(\max\left\{\lambda_{3i-2}\right\}<\min\left\{\lambda_{3i-1},\lambda_{3i}\right\}\). In this case, the SDRG flow is dictated only by the competition between \(\left\{\lambda_{3i}\right\}\) and \(\left\{\lambda_{3i-1}\right\}\) as the \(A\)-type couplings will never be decimated. Thus, we can simply drop the \(\lambda_{A}\)'s in the decimation procedure (34) and (35) as the distribution \(P_{A}\) renormalizes to an extremely singular one. In that case, the zigzag-chain geometry of the renormalized system becomes that of a single chain. Therefore, the flow is simply that of the case \(p=1\).
In order to analyze it, we simply generalize the distance from criticality Eq. (18) to
\[\delta_{BC}\equiv\frac{\overline{\ln\lambda_{3i}}-\overline{\ln\lambda_{3i-1} }}{\sigma_{\ln\lambda_{B}}^{2}+\sigma_{\ln\lambda_{C}}^{2}}. \tag{36}\]
From here, all energy-related results from the \(p=1\) case follows straightforwardly. The transition is governed by an infinite-randomness critical point and happens for \(\delta_{BC}=0\). The gap distribution is that of Eq. (19) with the scaling variable (20) redefined as
\[\eta=\frac{\ln\left(2\Omega_{0}/\Delta\right)}{\sigma_{0}\left(L/3\right)^{ \psi}}. \tag{37}\]
This redefinition is due to the fact that the total number of decimations in the \(p=1\) case is \(L/2\) while it is \(L/3\) for the \(p=2\) case. Thus, \(L/2\) in the \(p=1\) case translates to \(L/3\) in the \(p=2\) case.
In addition, there are associated Griffiths phases for \(|\delta_{BC}|\ll 1\). The off-critical dynamical exponent \(z\) diverges as \(z\sim|\delta_{BC}|^{-1}\) as criticality is approached. The low-energy modes are also associated with domain-wall spins surrounding a rare region, just like for \(p=1\).
By triality, there are two other boundary transitions, at \(\delta_{AB}=0\) and \(\delta_{AC}=0\). These boundaries and the Griffiths phases are shown in Fig. 2 as dashed lines and shaded regions, respectively. The SDRG results reported here thus put the heuristic arguments of Sec. III on solid ground.
Now we analyze the case in which the two competing (strongest) couplings are uniform, i.e., \(\lambda_{B,j}=\lambda_{B}\), \(\lambda_{C,i}=\lambda_{C}\), and \(\lambda_{A,i}\) random with \(\lambda_{A,\text{typ}}<\lambda_{B,C}\). This corresponds to the region in the phase diagram of Fig. 2(a) surrounding the clean and finite-randomness transition but sufficiently far from the multi-critical point.
When \(\max\left\{\lambda_{A,i}\right\}<\lambda_{B,C}\), the SDRG method does not need to be applied since the weak \(\lambda_{A}\) coupling is irrelevant. The transition at \(\lambda_{B}=\lambda_{C}\) is in the clean Ising universality class, and the spectral gap vanishes only at criticality.
When \(\lambda_{A,\text{typ}}<\lambda_{B}<\lambda_{C}<\max\left\{\lambda_{A,i}\right\}\), the system is in the Griffiths \(C\)-phase with high \(A\)-susceptibility, as in the generic case discussed above. (Analogously for \(\lambda_{A,\text{typ}}<\lambda_{C}<\lambda_{B}<\max\left\{\lambda_{A,i}\right\}\).)
Finally, we discuss the interesting case when \(\lambda_{A,\text{typ}}<\lambda_{B}=\lambda_{C}<\max\left\{\lambda_{A,i}\right\}\). The system is globally critical between the \(B\)- and \(C\)-phases, but there are rare regions locally in the \(A\)-phase. What are their effects? Applying the block-SDRG decimation procedure, we simply decimate \(A\)-operators in the first stages of the flow. After decimating all \(A\)-operators with \(\lambda_{B}=\lambda_{C}<\lambda_{A,i}<\max\left\{\lambda_{A,i}\right\}\), we can simply ignore the remaining \(A\)-operators since they would be surrounded by locally stronger \(B\)- and \(C\)-operators. The effective chain is, thus, that with \(B\)- and \(C\)-operators only. However, the effective coupling constants are not uniform. Interestingly, notice that the effective couplings obey the condition \(\tilde{\lambda}_{B,i}=\tilde{\lambda}_{C,i}\). This is precisely the condition for the absence of random distances from criticality (random mass) discussed in Sec. III.1.2. When the \(A\)-rare regions are small, the effective couplings \(\tilde{\lambda}_{B,C,i}\) are weakly renormalized and their effective disorder (the variance of their
distribution) is weak. As a result, the clean critical behavior is stable. However, when approaching the multi-critical point we expect the renormalization of \(\tilde{\lambda}_{B,C,j}\) to become more and more relevant. This means that their effective disorder increases to the point where the clean critical behavior is destabilized. The resulting phase transition is thus of finite-randomness type with non-universal critical exponent \(z\). This result should be valid up to the multi-critical point, where \(z\) reaches its maximum value. Interestingly, \(z\) is formally infinite on the other transition lines meeting at the multi-critical point.
We end this section by noticing that the above results generalize straightforwardly to any value of \(p\).
#### iv.2.6 The SDRG flow at the multi-critical point
Finally, we deal with the case in which all couplings compete.
At first glance, one could think of analyzing the flow equations for the distributions \(P_{A}\), \(P_{B}\), and \(P_{C}\) analytically. The flow equation for \(P_{A}\) is (see App. B)
\[-\frac{\partial P_{A}}{\partial\Omega}=P_{A}\left(\Omega\right)P_{A}-\left(P_ {B}\left(\Omega\right)+P_{C}\left(\Omega\right)\right)\left(P_{A}-P_{A}\otimes P _{A}\right), \tag{38}\]
where \(P_{X}=P_{X}\left(\lambda;\Omega\right)\), \(P_{X}\left(\Omega\right)=P_{X}\left(\Omega;\Omega\right)\), and
\[P_{X}\otimes P_{X}=\int d\lambda_{1}d\lambda_{4}P_{X}\left(\lambda_{1};\Omega \right)P_{X}\left(\lambda_{4};\Omega\right)\delta\left(\lambda-\frac{\lambda_{ 1}\lambda_{4}}{\Omega}\right). \tag{39}\]
By triality, the flow equations for \(P_{B}\) and \(P_{C}\) follow from exchanging the labels accordingly. The last term on the r.h.s. of (38) implements the renormalization of the \(A\)-type coupling [\(\tilde{\lambda}_{A}\) in Eq. (35), and we are considering only the magnitude of the coupling constants] when a \(B\)- or \(C\)-type coupling is decimated. The remaining terms ensure that \(P_{A}\) remains normalized when the cutoff energy scale \(\Omega\) is changed. As shown in App. B, the critical point (where \(P_{A}=P_{B}=P_{C}\)) of the flow equation (38) is of infinite-randomness type [see Eq. (35)] and is related to a different universal tunneling exponent \(\psi=\frac{2}{3}\), which is greater than the value \(\psi=\frac{1}{2}\) of the case \(p=1\). A larger tunneling exponent means larger effective disorder, as the typical value of the finite-size gap is even smaller [see Eq. (37)]. This sounds intuitive since two coupling constants are renormalized in each decimation, instead of one renormalization per decimation as happens in the \(p=1\) case.
Although this gives us a prescription to find a new universality class beyond the permutation-symmetric universality class (where \(\psi=1/N\), with \(N\) being an integer [46, 47, 37, 93, 37]), the analytical analysis of this problem is not correct. The flow equation (38) assumes no correlation between the three types of couplings. However, when, say, a \(B\)-type coupling is decimated, the renormalized \(A\)- and \(C\)-type couplings are added as neighbors in the renormalized system [see Fig. 4(b)]. Being neighbors diminishes their chance of being further renormalized (which would increase the effective disorder strength by producing even more singular couplings) when compared with the case described by Eq. (38), where they are inserted in the chain in an uncorrelated fashion.
As treating the correlated case analytically is not simple, we proceed with our study numerically. We implement the block SDRG rules (34) and (35) for a system in which all the coupling constants are independent random variables, identically distributed according to
\[P\left(\lambda;\Omega_{0}\right)=\frac{1}{D_{0}\Omega_{0}}\left(\frac{\Omega_ {0}}{\lambda}\right)^{1-1/D_{0}},\]
where \(\Omega_{0}\) is the initial energy cutoff and \(D_{0}\) parameterizes the disorder strength of the bare system.
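A minimal Python sketch of this numerical procedure is given below. It is only a schematic version of the production code: the decimation rule is the periodic-chain form of Eqs. (34)-(35), the final boundary decimations are treated crudely, and the system sizes and sample numbers are kept small for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_couplings(n, d0, omega0=1.0):
    """Couplings distributed as P(lam) = (omega0/lam)**(1 - 1/d0) / (d0*omega0)."""
    return omega0 * rng.random(n) ** d0

def block_sdrg_gap(lam):
    """Block-SDRG sweep for the p = 2 chain, Eqs. (34)-(35): decimating the
    largest coupling lam[k] removes the five couplings lam[k-2..k+2]
    (periodic chain) and inserts the two effective couplings
    lam[k-2]*lam[k+1]/lam[k] and lam[k-1]*lam[k+2]/lam[k] in their place."""
    lam = list(lam)
    while len(lam) > 6:
        n = len(lam)
        k = int(np.argmax(lam))
        i = [(k + d) % n for d in (-2, -1, 0, 1, 2)]
        new = [lam[i[0]] * lam[i[3]] / lam[i[2]],
               lam[i[1]] * lam[i[4]] / lam[i[2]]]
        # untouched couplings in cyclic order starting right after the removed
        # block, then the two effective ones, preserving the ring ordering
        lam = [lam[(k + 3 + m) % n] for m in range(n - 5)] + new
    return 2.0 * max(lam)  # crude proxy for Delta = 2 * lam_final

# qualitative check of activated scaling: -mean(ln Delta) grows with L
for m in (4, 5, 6):
    L = 3 ** m
    ln_gaps = [np.log(block_sdrg_gap(draw_couplings(L, d0=1.0))) for _ in range(200)]
    print(L, np.mean(ln_gaps))
```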
We first study the moments of the distribution of the renormalized couplings along the SDRG flow. As usual in the SDRG approach, it is convenient to define the logarithmic couplings \(\zeta_{i}=\ln\left(\Omega/\lambda_{i}\right)\) and the logarithmic energy cutoff \(\Gamma=\ln\left(\Omega_{0}/\Omega\right)\). We then study the mean value \(\overline{\zeta}\) and the standard deviation \(\sigma_{\zeta}=\sqrt{\overline{\zeta^{2}}-\overline{\zeta}^{2}}\) as functions of \(\Gamma\). Our results for \(D_{0}=1/2\) and \(D_{0}=1\) and a system size of \(L=3^{15}\) spins are shown in Fig. 5(a). Clearly, \(\overline{\zeta}\approx\sigma_{\zeta}\approx\Gamma+\text{const}\) in the \(\Gamma\to\infty\) limit. This implies an infinite-disorder fixed-point critical distribution. Thus, the SDRG method here employed is justified.
Next, we study the infinite-disorder fixed-point distribution. For comparison, that distribution for \(p=1\) is well known [94, 33, 91]. It is
\[P_{p=1}^{\ast}\left(\lambda;\Omega\right)=\frac{1}{D_{\Gamma}\Omega}\left( \frac{\Omega}{\lambda}\right)^{1-1/D_{\Gamma}}, \tag{40}\]
with \(D_{\Gamma}=\Gamma+D_{0}\) [equivalent to \(\pi^{\ast}\left(\zeta;\Gamma\right)=e^{-\zeta/D_{\Gamma}}/D_{\Gamma}\)]. In Fig. 5(b), we show the distribution of log-couplings at different stages of the SDRG flow for the case \(D_{0}=1\). (We find statistically identical results for \(D_{0}=1/2\), which are not shown for clarity.) After the initial stages of the SDRG flow, the fixed point is reached. Here, the different stages of the SDRG flow are parametrized by the density \(\rho=N_{\Omega}/L\), where \(N_{\Omega}\) is the number of active spins at the cutoff energy scale \(\Omega\). Recall that, after each decimation step of the block SDRG procedure, 3 spins are removed. Clearly, the fixed-point distribution differs from the \(p=1\) case, Eq. (40) (dashed line \(y=e^{-x}\)), but only slightly. We attribute this small difference to the correlations arising among renormalized couplings under the flow.
The relation between energy and length scales along the SDRG flow is shown in Fig. 5(a). The length scale is simply \(\rho^{-1}\sim\Gamma^{1/\psi}\), with universal tunneling exponent \(\psi=1/2\). We also expect the finite-size gap \(\Delta\) to obey the activated dynamical scaling (37). This is confirmed in our numerics, as shown in Fig. 5(c). Both the mean value and the width of the distribution of \(\ln\left(\Delta\right)\) behave similarly, \(\sim L^{\psi}\). Here, the finite-size gap is obtained by decimating the entire chain. The last decimated coupling constant \(\tilde{\lambda}_{\text{final}}\) provides the finite-size gap \(\Delta=2\tilde{\lambda}_{\text{final}}\).
#### iv.2.7 Thermodynamics and correlations
As the fixed-point distribution is of infinite-randomness type, the critical thermal entropy and specific heat behave similarly to the \(p=1\) case discussed in Sec. IV.1.3. The only difference is the residual entropy, which modifies Eq. (21) to
\[S=\frac{2}{3}\left(1+\left(\frac{\sigma_{\ln\lambda}^{2}}{\ln\left(\frac{\Omega_ {0}}{T}\right)}\right)^{1/\psi}\right)\ln 2, \tag{41}\]
the reasoning being that the ground state has degeneracy \(2^{2L/3}\). The specific heat follows straightforwardly and recovers Eq. (22).
In the SDRG framework, the ground-state spin correlation \(C_{i,j,k}=\left\langle\sigma_{i}^{x}\sigma_{j}^{z}\sigma_{k}^{z}\right\rangle\) is \(\pm 1\) if the spins \(i<j<k\) are decimated together, and vanishingly small (we set it to zero) otherwise.
Figure 5: (a) The mean value \(\overline{\zeta}\), the standard deviation \(\sigma_{\zeta}\), and the density of active spins \(\rho\) as functions of the logarithmic cutoff energy \(\Gamma=\ln\Omega_{0}/\Omega\) (\(\Omega=\max\{\lambda_{i}\}\)). (b) Snapshots of the coupling-constant distributions at different densities \(\rho\) along the SDRG flow. (c) The typical value of the finite-size gap as a function of the system size \(L\). We have considered chains of up to \(L=3^{15}\) spins, with bare disorder strengths \(D_{0}=1/2\) and \(1\). Here, \(\psi=\frac{1}{2}\). The error bars are about the symbol sizes.
Figure 6: Average value of the spin correlations (42). (a) The average value of the spin correlation \(\overline{|C(r_{1},r_{2})|}\) [see Eq. (42)] as a function of the internal spin-spin separations \(r_{1}\) and \(r_{2}\). (b) The average values of the diagonal \(\overline{|C(r,r)|}\) (open circles), the off-diagonal \(\overline{|C(r,2r)|}\) (stars), the integrated \(\sum_{r_{2}}\overline{|C(r,r_{2})|}=\sum_{r_{1}}\overline{|C(r_{1},r)|}\) (open squares), and the cluster-size \(\sum_{r_{1},r_{2}}\delta_{r,r_{1}+r_{2}}\overline{|C(r_{1},r_{2})|}\) (closed diamonds) correlations. (c) The average correlation \(\overline{|C(r_{1},r_{2})|}\) as a function of \(r_{1}\) for different values of \(r_{2}\). The system size is \(L=3^{15}\), averaged over \(1500\) disorder realizations, which is sufficient to yield error bars of the size of the symbols. Solid lines are simple guides to the eye.
In order to obtain the average value of \(C_{i,j,k}\), we build a normalized histogram \(\overline{|C(r_{1},r_{2})|}\) in log scale. After each SDRG decimation (decimation of spins \(i\), \(j\), and \(k\)), we add unity to the bin \((r_{1},r_{2})\) if \(\ln{(j-i)}\) is between \(\ln{r_{1}}-\frac{1}{2}\) and \(\ln{r_{1}}+\frac{1}{2}\), and \(\ln{(k-j)}\) is between \(\ln{r_{2}}-\frac{1}{2}\) and \(\ln{r_{2}}+\frac{1}{2}\). This procedure is repeated until the entire chain is decimated, and it is then averaged over many disorder realizations. The histogram \(\overline{|C(r_{1},r_{2})|}\) is then a proxy for the average value of \(C_{i,j,k}\), i.e.,
\[\overline{|C(r_{1},r_{2})|}=\overline{\left|\left\langle\sigma_{i}^{x}\sigma_{i+r_{1}}^{z}\sigma_{i+r_{1}+r_{2}}^{z}\right\rangle\right|}. \tag{42}\]
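A sketch of this log-binned accumulation is given below (Python). The SDRG routine that generates the decimated triples is not shown; the choice of bin centers at \(r=e^{m}\) and the overall per-site, per-sample normalization are assumptions of this illustration.

```python
import numpy as np

def corr_histogram(triples, n_sites, n_samples, m_max=16):
    """Log-binned proxy for the average |C(r1, r2)| of Eq. (42).

    `triples` is an iterable of site triples (i, j, k), i < j < k, collected
    from every decimation of every disorder realization (the SDRG routine
    itself is not shown here).  A count lands in bin (m1, m2) when ln(j - i)
    is within 1/2 of m1 and ln(k - j) is within 1/2 of m2, i.e. bins of unit
    width in ln r centered at r = e**m."""
    hist = np.zeros((m_max, m_max))
    for i, j, k in triples:
        m1 = int(round(np.log(j - i)))
        m2 = int(round(np.log(k - j)))
        if m1 < m_max and m2 < m_max:
            hist[m1, m2] += 1.0
    return hist / (n_sites * n_samples)
```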
We plot in Fig. 6(a) the mean value of the spin correlation \(\overline{C(r_{1},r_{2})}\) as a function of the internal distances \(r_{1}\) and \(r_{2}\). Clearly, the correlation is maximum when \(r_{1}=r_{2}\), as expected from symmetry.
In Fig. 6(b), we plot the diagonal correlation \(\overline{C(r,r)}\), one of the possible off-diagonal correlations \(\overline{C(r,2r)}\), the integrated correlation \(\sum_{r_{2}}\overline{C(r,r_{2})}=\sum_{r_{1}}\overline{C(r_{1},r)}\), and the cluster-size correlation \(\sum_{r_{1}}\overline{|C(r_{1},r-r_{1})|}=\sum_{r_{1},r_{2}}\delta_{r,r_{1}+r_{2}}\overline{|C(r_{1},r_{2})|}\). They all vanish algebraically, \(\sim r^{-\phi}\), with universal exponent \(\phi\approx 1\), except for the off-diagonal correlation, which vanishes as \(\overline{C(r,2r)}\sim r^{-\tilde{\phi}}\) with \(\tilde{\phi}\approx 1.85\).
In Fig. 6(c), we plot the average correlation \(\overline{C(r_{1},r_{2})}\) for various values of \(r_{2}\). Interestingly, our numerical data indicate that \(\overline{C(r_{1},r_{2})}\sim r_{1}^{-(\tilde{\phi}-\phi)}\) for fixed \(r_{2}\) and \(r_{1}\gg r_{2}\).
Analyzing these exponents for other sample sizes, we estimate their error to be of order 10%. Unfortunately, we do not have an analytical derivation for these exponents.
What about the typical value of these correlations? Typically, the triad of spins in (42) is not decimated together and, thus, develops only weak correlations. As argued by Fisher [33] and shown in many numerical works [95, 18, 89] for the case \(p=1\), these weak correlations are of order of the typical value of the coupling constants involved, \(C_{\text{typ}}(r)\sim J_{\text{typ}}(r)\). Very plausibly, this is also the case for any \(p\), with the caveat that more than one coupling constant is involved. Thus, \(C_{\text{typ}}\sim J_{\text{typ}}(r_{1})J_{\text{typ}}(r_{2})\ldots J_{\text{typ}}(r_{p})\). From the dynamical scaling (10) we then conclude that the typical value of the correlations decays stretched exponentially fast with the spin separations. Roughly, we expect
\[C_{\text{typ}}\left(r_{1},r_{2}\right)\sim e^{-A(\ell(r_{1},r_{2})\gamma_{D} )^{\psi}}, \tag{43}\]
where \(\ell(r_{1},r_{2})\) is a function which equals \(\max{\{r_{1},r_{2}\}}\) for \(r_{1}\gg r_{2}\) or \(r_{2}\gg r_{1}\) and \(\approx cr\) when \(r_{1}\approx r_{2}\approx r\) (with \(c\) being a disorder-independent constant of order unity), \(\gamma_{D}\) is a disorder-dependent Lyapunov exponent [89, 18] related to the clean-dirty crossover length [88], and \(A\) is a constant of order unity.
## V Finite-Size Gap Statistics
In this section, we present our numerically exact results on the spectral gap of the model Hamiltonian (1) for \(p=2\). This quantity is computed from the roots of the polynomial as described in Sec. II and compared with the SDRG predictions described in Sec. IV.2.
### Technical details
We refer the reader to Ref. [12] for useful numerical methods for calculating the roots of the polynomial (5)--(6). We emphasize that our method allows us to evaluate the finite-size gap for lattice sizes up to \(\sim 10^{7}\) sites in the neighborhood of the transition lines, with a numerical cost that grows linearly with the system size, competitive with the SDRG numerical cost. For systems that large, the polynomials (5) typically have coefficients spanning 500 orders of magnitude. Therefore, the entire evaluation of these coefficients can only be achieved using high numerical precision (500 digits). However, as shown in Ref. [12], we only need the last 100 coefficients of the polynomial to obtain the finite-size gap with standard quadruple numerical precision (32 digits). When applying this method to our Hamiltonian, we found it useful to keep the last coefficient of (5) always of order unity. This is accomplished by factorizing these coefficients when iterating the recursion relation (6). With that, the entire procedure can be accomplished using only standard routines in FORTRAN with quadruple precision.
### Off-critical finite-size gap I
We start by analyzing the off-critical Griffiths phases sketched in the phase diagram of Fig. 2(b). For such, we study three sets of chains: (set I) \(\lambda_{A}\) is uniformly distributed in the interval \([0,0.2]\); (set II) \(\lambda_{A}\) is uniformly distributed in the interval \([0.2,1.2]\); (set III) \(\lambda_{A}\) is uniformly distributed in the interval \([0.5,1.5]\). In all cases, \(\lambda_{B}\) is distributed uniformly in the interval \([0.5,1.5]\), and \(\lambda_{C}\) is a constant (running from 1.2 down to 0.9) which serves as a tuning parameter. The running \(\lambda_{C}\) passes through the critical point \(\lambda_{C}^{\ast}=\lambda_{B,\text{typ}}=e^{\frac{3\ln 3-2\ln 2}{2}-1}=0.95578\) [see Eq. (36)] but does not include it, since the analysis of the critical system is reported in other sections. Notice that the set III chains are critical for \(\lambda_{C}<\lambda_{C}^{\ast}\). For that reason, we consider only \(\lambda_{C}>\lambda_{C}^{\ast}\) in this section.
In case I, we explore the interplay between the two strongest couplings, \(\lambda_{B}\) and \(\lambda_{C}\), while \(\lambda_{A}\) is a much weaker coupling. The rare regions are of \(B\)-type inside a bulk in the \(C\) phase. In case II, the value of \(\lambda_{A}\) is increased such that some rare regions of \(A\)-type also appear. Finally, in case III, rare regions of \(A\)- and \(B\)-type appear equally. At the final point \(\lambda_{C}=\lambda_{C}^{\ast}\), the multi-critical point is reached.
In Fig. 7(a) we show the relation between \(\overline{\ln\Delta}\) (with \(\Delta\) being the system finite-size gap) and the system size \(L\) for \(\lambda_{C}=1\). From this, we can obtain the off-critical dynamical exponent \(z\) by fitting \(\Delta\sim L^{-z}\) to the data. Repeating this procedure for other values of \(\lambda_{C}\), we determine how \(z\) diverges as the critical point is approached, see Fig. 7(b). It diverges as \(z\approx 1/\left(2\delta\right)\), with \(\delta\) being the distance from criticality as defined in (36). We emphasize that this numerically exact result is in agreement with the SDRG analytical predictions in the \(z\to\infty\) limit.
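The exponent extraction amounts to a linear fit of the averaged log gap against \(\ln L\); a small Python helper (the data arrays and the choice of fitting window are supplied by the user) could read:

```python
import numpy as np

def griffiths_z(L_values, ln_gap_mean):
    """Least-squares estimate of the dynamical exponent z from
    mean(ln Delta) ~ -z ln(L); restricting the inputs to the largest
    system sizes (the caller's responsibility) avoids finite-size transients."""
    slope, _ = np.polyfit(np.log(np.asarray(L_values, dtype=float)),
                          np.asarray(ln_gap_mean, dtype=float), 1)
    return -slope
```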
### Off-critical finite-size gap II
We now study the case in which only one of the couplings is disordered, say, \(\lambda_{A}\) uniformly distributed in the interval \([0,0.2]\). The corresponding phase diagram is shown in Fig. 2(a). Here, we focus on the off-critical region near the transition between the \(B\)- and \(C\)-phases. Notice that there are no Griffiths phases surrounding the transition for sufficiently small \(\lambda_{A,\mathrm{typ}}\). Thus, the gap vanishes only at the transition point \(\lambda_{B}=\lambda_{C}\). Closer to the multi-critical point, Griffiths phases appear.
We plot in Fig. 8 the typical value of the system gap \(\Delta\) as a function of the distance from criticality \(\delta\). Since \(\lambda_{C}\) and \(\lambda_{B}\) are homogeneous, the definition (18) is not useful because of the vanishing denominator. Here, we simply use \(\delta=\ln\left(\lambda_{C}-\lambda_{B}\right)\). Also, we fix \(\lambda_{C}=1>\max\left\{\lambda_{A}\right\}\) and use \(\lambda_{B}\) as a running parameter. Notice that \(\Delta\) diminishes as in the clean system: \(\Delta\sim\delta\). For small \(\delta\), \(\Delta\) saturates due to finite-size effects, meaning that the correlation length is greater than \(L\). We recall that our method is not optimal for gapful systems. This is because the larger the gap, the larger the number of coefficients required in the characteristic polynomial [12]. For that reason, we studied chains of "small" sizes, up to \(10\,267\) sites.
Having shown the absence of Griffiths phases far from the multi-critical point \(\left(\lambda_{C}>\max\left\{\lambda_{A}\right\}\right)\), we now show their existence otherwise. We thus study chains in which \(\lambda_{C}=1\), \(\lambda_{A}\) is uniformly distributed in the interval \([0,e\lambda_{A,\mathrm{typ}}]\), and \(\lambda_{B}\) is the tuning parameter. We report that the finite-size gap vanishes as \(\Delta\sim L^{-z}\) [as in Fig. 7(a)], with the value of the off-critical dynamical exponent \(z\) increasing up to a finite value as the transition is approached, see Fig. 9. Notice that far from the critical point \(\lambda_{B}=1\), the off-critical dynamical exponent is practically insensitive to the distance from criticality \(\delta=1-\lambda_{B}\) as the low-energy behavior is dominated by Rare Regions which are locally in the \(A\) phase.
### Critical finite-size gap statistics I
We now address the infinite-randomness critical lines, the dashed lines in the phase diagram of Fig. 2(a) and (b). In order to illustrate universality, we consider two cases: (chain A) \(\lambda_{A}\) and \(\lambda_{B}\) are, respectively, uniformly distributed in the intervals \([0,0.2]\) and \([0.5,1.5]\), and \(\lambda_{C}\) is homogeneous and equals \(\lambda_{B,\mathrm{typ}}=e^{\frac{3\ln 3-2\ln 2}{2}-1}\approx 0.95578\) (ensuring criticality); and (chain B) \(\lambda_{A}\), \(\lambda_{B}\), and \(\lambda_{C}\) are, respectively, uniformly distributed in the intervals \([0,0.5]\), \([0,1]\), and \([0,1]\).
In Fig. 10 we plot the typical value of the finite-size gap [and the corresponding Laguerre bound \(\Delta_{\mathrm{LB}}\), see Eqs. (7)--(10)] for those critical chains as a function of the system size \(L\). As expected from the activated dynamical scaling Eq. (10), the finite-size gap vanishes stretched exponentially fast
Figure 8: The typical value of the system gap \(\Delta\) as a function of the distance from criticality \(\delta=\ln\left(1-\lambda_{B}\right)\) for chains of different sizes \(L\). The coupling constants are such that \(\lambda_{A}\) is uniformly distributed in the interval \([0,0.2]\), and \(\lambda_{B}\) and \(\lambda_{C}\) are homogeneous, with \(\lambda_{C}=1\) and \(\lambda_{B}\) being the tuning parameter. The data is averaged over \(N_{\mathrm{samples}}=1\,500\) disorder realizations. The error bars are about the symbol sizes. Solid lines are simple guides to the eye. The dashed line is the clean behavior in the thermodynamic limit: \(\Delta\sim\delta^{\phi_{\Delta}}\), with \(\phi_{\Delta}=1\).
Figure 7: Finite-size gap analysis of the Griffiths phases in Fig. 2(b). The coupling constants \(\left\{\lambda_{A,j}\right\}\) are uniformly distributed in the interval \([0,0.2]\) (set I), \([0.2,1.2]\) (set II) and \([0.5,1.5]\) (set III). In all cases, \(\left\{\lambda_{B,j}\right\}\) are uniformly distributed in the interval \([0.5,1.5]\), and \(\lambda_{C}\) is uniform and plays the role of a running parameter throughout the Griffiths phases. (a) The typical finite-size gap \(\overline{\ln\Delta}\) as a function of the system size \(L\) for \(\lambda_{C}=1\). The data is averaged over \(N_{\mathrm{samples}}=6\,000\) disorder realizations and the error bars are about the size of the data points. Linear fits for the data are provided in the legends from which the dynamical exponent \(z\) is obtained. The best fits are restricted to system sizes \(L>10^{6}\). (b) The dynamical exponent \(z\) as a function of the distance from criticality Eq. (36). Lines are guide to the eyes. Inset: same data in the main panel in log-log scale. The dashed line is the SDRG asymptotic behavior (\(\delta\ll 1\)) \(z=1/(2\delta)\).
with universal (i.e., disorder-independent) tunneling exponent \(\psi=1/2\), compatible with the asymptotic behavior (\(L\gg 1\)) of our data. We call attention to the fact that \(\Delta\) and \(\Delta_{\text{LB}}\) become virtually indistinguishable for \(L\gg 1\). This feature was already noticed for the case \(p=1\) [12]. We conjecture that the finite-size values of \(\Delta_{\text{LB}}\) converge to \(\Delta\) as \(L\to\infty\) in the case of infinite-randomness criticality. As argued in Ref. [12], this is because the largest root of the characteristic polynomial separates from the other ones as \(L\) increases. Thus, only the few largest coefficients of the polynomial are necessary to accurately compute it (see Sec. V.1).
### Critical finite-size gap statistics II
Here, we investigate the intriguing critical line in which the two major couplings are uniform, i.e., the horizontal boundary line in the phase diagram Fig. 2(a). Thus, we take \(\lambda_{B}=\lambda_{C}=1\) and \(\lambda_{A}\) uniformly distributed between \(0\) and \(e\lambda_{A,\text{typ}}\). The typical value of the \(A\) couplings, \(\lambda_{A,\text{typ}}\), is used as a tuning parameter and reaches the multi-critical point when \(\lambda_{A,\text{typ}}=1\).
Our results are shown in Fig. 11. Clearly, sufficiently far from the multi-critical point (\(\lambda_{A,\text{typ}}\leq\lambda_{A,\text{typ}}^{s}\sim 0.6\)), the critical point is in the clean Ising universality class with \(z=1\) [solid line in Fig. 2(a)]. Approaching the multi-critical point (\(\lambda_{A,\text{typ}}>\lambda_{A,\text{typ}}^{s}\)), a line of finite-disorder fixed points [dotted line in Fig. 2(a)] sets in, with a varying critical dynamical exponent \(z\) that diverges as the multi-critical point is approached (see inset). For comparison, we show the line \(z=(2\gamma)^{-1}\) predicted by the SDRG method when \(\gamma=1-\lambda_{A,\text{typ}}\to 0\). This seems to be the trend for \(\gamma\gtrsim 0.01\). We cannot rule out that we are plagued by finite-size effects for smaller \(\gamma\).
### Multi-critical finite-size gap statistics
Here, we finally investigate the multi-critical point. We find that in both phase diagrams of Fig. 2, the multi-critical point is of infinite-randomness type with universal tunneling exponent \(\psi=1/2\).
In Fig. 12 we plot the distribution \(\mathcal{P}\) of the finite-size gap \(\Delta\), properly rescaled according to Eq. (37), for various system lengths and disorder parameters. Clearly, the scaling variable \(\eta\) is sufficient to produce a data collapse for
Figure 11: The typical value of the finite-size gap \(\Delta\) as a function of the system size \(L\) for chains with homogeneous coupling constants \(\lambda_{B}=\lambda_{C}=1\) and \(\{\lambda_{A}\}\) uniformly distributed in the interval \([0,e\times\lambda_{A,\text{typ}}]\) for various values of the tuning parameter (from top to bottom) \(\lambda_{A,\text{typ}}=0.6\), \(0.8\), \(0.9\), \(0.95\), \(0.99\), and \(0.999\). The data is averaged over \(N_{\text{samples}}=10^{4}\) disorder realizations and the error bars are about the symbol sizes. The solid lines are best linear fits to \(\overline{\ln\Delta}\sim-z\ln L\) (constrained to \(L>e^{13}\)) from which the critical dynamical exponent is obtained (respectively, \(z=1.00\), \(2.08\), \(4.56\), \(9.76\), \(19.48\), and \(49.82\)) and shown in the inset as a function of the distance from the multi-critical point \(\gamma=1-\lambda_{A,\text{typ}}\).
Figure 10: Finite-size gap \(\Delta\) (and the corresponding Laguerre bound \(\Delta_{\text{LB}}\)) as a function of the system size for chains A and B (see text). The typical value is compatible with activated dynamical scaling \(\overline{\ln\Delta}\sim-L^{\psi}\), with universal (disorder-independent) tunneling exponent \(\psi=\frac{1}{2}\), see dashed line and Eq. (37). The data is averaged over \(N_{\text{samples}}=10^{3}\) (\(4\times 10^{4}\)) disorder realizations for chain A (B), and the error bars are about the symbol sizes.
Figure 9: The typical value of the finite-size gap \(\Delta\) as a function of the system size \(L\). The couplings \(\lambda_{B}\) and \(\lambda_{C}\) are non-random and their values are given in the legends. The coupling \(\lambda_{A}\) is an independent random variable uniformly distributed in the interval \([0,0.9e]\) (thus, typical value \(\lambda_{A,\text{typ}}=0.9\)). The data is averaged over \(N_{\text{samples}}=6\,000\) disorder realizations. The error bars are about the symbol sizes. Solid lines are best fits to Eq. (9) in the region \(L>e^{12}\). The corresponding Griffiths dynamical exponents are given in the legends.
the system sizes used, and this gives us further confidence in the activated dynamical scaling (10) with universal (disorder-independent) tunneling exponent \(\psi=1/2\). We notice, in addition, that the probability distribution \(\mathcal{P}\) is not universal and is weakly dependent on the disorder details.5 Furthermore, \(\mathcal{P}\) is quite different from the SDRG prediction Eq. (19) for the transverse-field Ising chain (\(p=1\)), even though they are governed by the same infinite-disorder fixed point. Finally, we report (not shown for clarity) that this distribution depends on the modularity of the lattice size \(L\), i.e., \(\mathcal{P}(\eta)\) depends on \(L\mod 3\). This is not a surprise since a similar difference also appears in the clean system. There, the finite-size gap amplitudes also depend on the modularity, i.e., \(\Delta\sim aL^{-z}\) with \(a\equiv a\left(L\mod 3\right)\).
Footnote 5: It is highly nontrivial to understand those details even in the simpler case of \(p=1\) as shown in Ref. [18]. It involves non-universal quantities such as the crossover length between the clean and infinite-randomness critical points.
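The rescaling behind this collapse is elementary; a short Python helper (assuming arrays of finite-size gaps collected per system size) reads:

```python
import numpy as np

def collapse_variable(gaps, L, sigma0, omega0=1.0, psi=0.5):
    """Scaling variable eta of Eq. (37) for an array of finite-size gaps
    obtained for chains of length L; histograms of eta for different L
    should collapse if the activated scaling with exponent psi holds."""
    return np.log(2.0 * omega0 / np.asarray(gaps)) / (sigma0 * (L / 3.0) ** psi)

# e.g.: np.histogram(collapse_variable(gaps_L, L, sigma0=1.0), bins=40, density=True)
```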
## VI Conclusions
We have studied the effects of quenched disorder on a family of free fermionic models (1) with \((p+1)\)-multispin interaction, paying special attention to the case \(p=2\) corresponding to the random version of the three-spin interacting Fendley model.
When all the coupling constants are generically disordered, the clean phase transitions (including the multi-critical points) are destabilized. The transition that replaces them is of infinite-randomness character, where the critical dynamical scaling is activated (10) with universal (disorder- and \(p\)-independent) tunneling exponent \(\psi=1/2\). This exponent governs the low-temperature singular thermodynamic behavior. The average values of the correlation functions are also universal and decay algebraically with the spin separations. We do not have an analytical theory for those exponents, and a detailed study is left for future research. The typical value of the correlations, on the other hand, behaves quite differently: it decays stretched exponentially fast with the spin-spin separation. In addition, strong Griffiths singularities surround the transitions. Although the system is non-critical with short-range correlations, the spectral gap vanishes. The singularities are related to the slow dynamics of the domain walls surrounding the so-called rare regions. This phenomenon is very similar to the domain-wall-induced (or rare-region-induced) Griffiths singularities in the dimerized XXZ spin-1/2 chain and in the random transverse-field Ising model.
When only one (or a few) types of coupling constants are disordered, the scenario is more involved. When the disordered coupling is weak, the singular critical behavior of the clean system is stable and there are no surrounding Griffiths phases. Upon increasing the magnitude of the disordered coupling, the universality of the transition changes and becomes of finite-randomness character. The critical scaling is conventional power law (9), but with a non-universal (disorder-dependent) dynamical critical exponent \(z\). Increasing the magnitude of the disordered couplings has another effect: it nucleates rare regions which contribute to off-critical Griffiths singularities. Interestingly, the finite-randomness criticality extends up to the multi-critical point.
Although we have explicitly worked out those results for the cases of \(p=1\) and \(2\), from symmetry grounds we expect them to be valid for all \(p\) in the family model (1) for sufficiently large disorder. Evidently, we cannot exclude other scenarios appearing when \(p\) is very large and disorder is weak.
Finally, the agreement between our numerically exact results obtained for quite large lattice sizes (up to \(L\sim 10^{7}\) sites)
Figure 12: Finite-size gap distribution \(\mathcal{P}\) for different system sizes \(L\). The finite-size gap \(\Delta\) is rescaled according to the scaling variable \(\eta\) in Eq. (37). The coupling constants are either homogeneous equal to \(e^{-1}\) or uniformly distributed in the interval \([0,1]\) as indicated in the legends. The width \(\sigma_{0}\) is, respectively, \(\sqrt{1/3}\), \(\sqrt{2/3}\), and \(1\) in panels (a), (b), and (c). The normalized histograms were built using \(10^{5}\) to \(10^{6}\) disorder realizations. The dashed line is the SDRG prediction (19) for the transverse-field Ising chain (\(p=1\)).
with the "block" SDRG is remarkable. We stress that the SDRG method is not an exact one. It took more than a decade after its conception to realize that it can provide asymptotically exact critical exponents and other universal quantities. This was accomplished by comparing the SDRG with exact diagonalization of a few models, such as the transverse-field Ising chain and the XX spin-1/2 chain for moderate lattice sizes (\(L\sim 10^{3}\)), and the Heisenberg chain (\(L\sim 200\)). Here, we have the rare opportunity to show that this is also true for a different family of models and for quite large chains.
We believe that our method for evaluating the finite-size gaps for large system sizes with minimal numerical cost (\(\sim L\)) will be a useful tool to study the effects of disorder in the phase transition of other systems.
###### Acknowledgements.
We thank Jesko Sirker for useful discussions. This work was supported in part by the Brazilian agencies FAPESP and CNPq. J.A.H. thanks IIT Madras for a visiting position under the IoE program which facilitated the completion of this research work. R.A.P. acknowledges support by the German Research Council (DFG) via the Research Unit FOR 2316.
## Appendix A The block SDRG method
Here, we derive another SDRG decimation procedure. We follow the idea of Ref. [59], where a larger block spin is considered. It was initially devised to enable the SDRG approach to tackle cases where the bare disorder in the system is weak, and it was important for correcting the SDRG flow from nonphysical features introduced by the simpler perturbative approach (as in Sec. IV). Here, this approach has a fundamentally different appeal: it considers the largest spin block in which there are only two energy levels. The projection is, thus, onto a richer ground state, which may capture additional features neglected in the simplest approach.
### Case \(p=1\)
Let us start with the simpler case where \(p=1\). For convenience we rewrite the Hamiltonian as
\[H=\sum_{i}\left\{-\lambda_{A,i}h_{A,i}-\lambda_{B,i}h_{B,i}\right\}, \tag{10}\]
where \(h_{A,i}=\sigma_{2i-1}^{x}\sigma_{2i}^{z}\) and \(h_{B,i}=\sigma_{2i}^{x}\sigma_{2i+1}^{z}\).
Following the SDRG philosophy, we search for the local Hamiltonian which exhibits the largest energy gap between its two energy levels. As the largest spin block still exhibiting only two energy levels is either \(-\lambda_{A,i}h_{A,i}-\lambda_{B,i}h_{B,i}\) or \(-\lambda_{B,i}h_{B,i}-\lambda_{A,i+1}h_{A,i+1}\), we then define the energy cutoff as
\[\Omega=\max_{i}\left\{\sqrt{\lambda_{A,i}^{2}+\lambda_{B,i}^{2}},\sqrt{\lambda_{B,i}^{2}+\lambda_{A,i+1}^{2}}\right\}.\]
where \(h_{A,i}=\sigma_{3i-2}^{x}\sigma_{3i-1}^{z}\sigma_{3i}^{z}\), \(h_{B,i}=\sigma_{3i-1}^{x}\sigma_{3i}^{z}\sigma_{3i+1}^{z}\), and \(h_{C,i}=\sigma_{3i}^{x}\sigma_{3i+1}^{z}\sigma_{3i+2}^{z}\).
Following the block SDRG philosophy, we search for the largest local coupling \(\Omega=\max\left\{|\lambda_{A,i}|,|\lambda_{B,i}|,|\lambda_{C,i}|\right\}\), say, \(\Omega=|\lambda_{B,2}|\), and consider the largest block of operators which still exhibits only two energy levels. Thus, we take as the unperturbed Hamiltonian \(H_{0}=-\lambda_{A,2}h_{A,2}-\lambda_{B,2}h_{B,2}-\lambda_{C,2}h_{C,2}\). The renormalized system is obtained by projecting \(h_{B,1},h_{C,1},h_{A,3}\), and \(h_{B,3}\) onto the ground-state subspace of \(H_{0}\).
The construction of the eigenstates of \(H_{0}\) is straightforward but tedious. There are only two energy levels with energies \(\pm E\), where \(E=\sqrt{\lambda_{A,2}^{2}+\lambda_{B,2}^{2}+\lambda_{C,2}^{2}}\). Each level is \(2^{4}\)-fold degenerate as \(H_{0}\) involves 5 spins-1/2 (spins \(\sigma_{4,5,\ldots,8}\)). The ground-state manifold is the set \(\left\{\left|s_{4},s,s_{7},s_{8}\right\rangle\right\}\), where \(\left|s_{4},s,s_{7},s_{8}\right\rangle=\left|s_{4},s_{7},s_{8}\right\rangle\otimes\left|s_{s_{4},s_{7},s_{8}}\right\rangle\) since \(H_{0}\) is diagonal in the operators \(\sigma_{4}^{x}\), \(\sigma_{7}^{z}\), and \(\sigma_{8}^{z}\). Here, \(\sigma_{4}^{x}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{4}\left|s_{4},s,s_{7},s_{8}\right\rangle\), \(\sigma_{7}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{7}\left|s_{4},s,s_{7},s_{8}\right\rangle\), \(\sigma_{8}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{8}\left|s_{4},s,s_{7},s_{8}\right\rangle\), and the two states \(\left|s_{s_{4},s_{7},s_{8}}\right\rangle\) (\(s=\pm 1\)) encode the states of spins 5 and 6. They are
\[\left|\pm_{s_{4},s_{7},s_{8}}\right\rangle=\frac{\left|a_{\bar{s}}\right\rangle+\left|b_{\bar{s}}\right\rangle}{2\sqrt{1+\left\langle b_{\bar{s}}|a_{\bar{s}}\right\rangle}}\pm\frac{\left|a_{\bar{s}}\right\rangle-\left|b_{\bar{s}}\right\rangle}{2\sqrt{1-\left\langle b_{\bar{s}}|a_{\bar{s}}\right\rangle}},\]
where
\[\left|a_{\bar{s}}\right\rangle = \left|a_{s_{4},s_{7},s_{8}}\right\rangle=N_{-}\left[\lambda_{C,2}s_{7}s_{8}\left|\rightarrow_{5},\uparrow_{6}\right\rangle+\left(E-\lambda_{B,2}s_{7}\right)\left|\rightarrow_{5},\downarrow_{6}\right\rangle-\lambda_{A,2}s_{4}\left|\leftarrow_{5},\downarrow_{6}\right\rangle\right],\]
\[\left|b_{\bar{s}}\right\rangle = \left|b_{s_{4},s_{7},s_{8}}\right\rangle=N_{+}\left[\left(E+\lambda_{B,2}s_{7}\right)\left|\rightarrow_{5},\uparrow_{6}\right\rangle+\lambda_{C,2}s_{7}s_{8}\left|\rightarrow_{5},\downarrow_{6}\right\rangle+\lambda_{A,2}s_{4}\left|\leftarrow_{5},\uparrow_{6}\right\rangle\right],\]
\(N_{\pm}=1/\sqrt{2E\left(E\pm\lambda_{B,2}s_{7}\right)}\) are normalization constants, and \(\left\langle b_{\bar{s}}|a_{\bar{s}}\right\rangle=s_{7}s_{8}\lambda_{C,2}/\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}\).
We define the four spin-1/2 operators \(\tilde{\sigma}_{4}\), \(\tilde{\sigma}\), \(\tilde{\sigma}_{7}\), and \(\tilde{\sigma}_{8}\) which span the ground-state subspace in the following manner: \(\tilde{\sigma}_{4}^{x}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{4}\left|s_{4},s,s_{7},s_{8}\right\rangle\), \(\tilde{\sigma}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=s\left|s_{4},s,s_{7},s_{8}\right\rangle\), \(\tilde{\sigma}_{7}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{7}\left|s_{4},s,s_{7},s_{8}\right\rangle\), and \(\tilde{\sigma}_{8}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=s_{8}\left|s_{4},s,s_{7},s_{8}\right\rangle\).
We are now ready to project \(H_{1}\) onto the ground states of \(H_{0}\). Let us start by projecting the operator \(\sigma_{4}^{z}\). The matrix element is
\[\left\langle t_{4},t,t_{7},t_{8}\right|\sigma_{4}^{z}\left|s_{4},s,s_{7},s_{8}\right\rangle=\delta_{t_{4},-s_{4}}\delta_{t_{7},s_{7}}\delta_{t_{8},s_{8}}\left\langle t_{-s_{4},s_{7},s_{8}}|s_{s_{4},s_{7},s_{8}}\right\rangle\]
\[=\frac{\delta_{t_{4},-s_{4}}\delta_{t_{7},s_{7}}\delta_{t_{8},s_{8}}}{\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\left(-\frac{\left|\lambda_{A,2}\right|\lambda_{B,2}\,s\,s_{7}\,\delta_{t,s}}{E}+s_{7}s_{8}\lambda_{C,2}\,\delta_{t,-s}\right).\]
Thus, the projected operator is
\[\tilde{\sigma}_{4}^{z}\left(-\frac{\left|\lambda_{A,2}\right|\lambda_{B,2}}{E \sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\tilde{\sigma}^{z}\tilde{\sigma}_{ 7}^{z}+\frac{\lambda_{C,2}}{\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\tilde{ \sigma}^{x}\tilde{\sigma}_{7}^{z}\tilde{\sigma}_{8}^{z}\right).\]
Similarly, projecting \(\sigma_{8}^{x}\), \(\sigma_{7}^{x}\sigma_{8}^{z}\), and \(\sigma_{4}^{z}\sigma_{5}^{z}\) we obtain, respectively,
\[\frac{\left|\lambda_{A,2}\right|}{\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\tilde{\sigma}_{8}^{x}-\frac{\lambda_{B,2}\lambda_{C,2}}{E\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\tilde{\sigma}^{y}\tilde{\sigma}_{8}^{y},\]
\[\frac{\left|\lambda_{A,2}\right|}{E}\tilde{\sigma}_{7}^{x}\tilde{\sigma}_{8}^{z},\text{ and }-\text{sign}\left(\lambda_{A,2}\right)\frac{\lambda_{C,2}}{E}\tilde{\sigma}_{4}^{y}\tilde{\sigma}^{y}\tilde{\sigma}_{7}^{z}\tilde{\sigma}_{8}^{z}.\]
Collecting all those terms, we find that \(\tilde{H}_{1}=-\tilde{\lambda}_{B,1}\tilde{h}_{B,1}-\tilde{\lambda}_{C,1} \tilde{h}_{C,1}-\tilde{\lambda}_{A,3}\tilde{h}_{A,3}-\tilde{\lambda}_{B,3} \tilde{h}_{B,3}\), where
\[\tilde{\lambda}_{C,1}=-\text{sign}\left(\lambda_{A,2}\right)\frac{\lambda_{C,1}\lambda_{C,2}}{E},\ \tilde{h}_{C,1}=\sigma_{3}^{x}\tilde{\sigma}_{4}^{y}\tilde{\sigma}^{y}\tilde{\sigma}_{7}^{z}\tilde{\sigma}_{8}^{z},\]
\[\tilde{\lambda}_{A,3}=\frac{\left|\lambda_{A,2}\right|\lambda_{A,3}}{E},\ \tilde{h}_{A,3}=\tilde{\sigma}_{7}^{z}\tilde{\sigma}_{8}^{z}\sigma_{9}^{z},\]
\[\tilde{\lambda}_{B,1}\tilde{h}_{B,1}=\lambda_{B,1}\sigma_{2}^{x}\sigma_{3}^{z}\tilde{\sigma}_{4}^{z}\left(\frac{\lambda_{C,2}E\,\tilde{\sigma}^{x}\tilde{\sigma}_{8}^{z}-\left|\lambda_{A,2}\right|\lambda_{B,2}\,\tilde{\sigma}^{z}}{E\sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\right)\tilde{\sigma}_{7}^{z},\]
\[\tilde{h}_{B,3}=\left(\frac{|\lambda_{A,2}|\,\tilde{\sigma}_{8}^{x}-\text{sign}\left( \lambda_{B,2}\right)\lambda_{C,2}\tilde{\sigma}^{y}\tilde{\sigma}_{8}^{y}}{ \sqrt{\lambda_{A,2}^{2}+\lambda_{C,2}^{2}}}\right)\sigma_{9}^{z}\sigma_{10}^{z}. \tag{10}\]
and the corresponding renormalized couplings are simply \(\tilde{\lambda}_{B,1}=\lambda_{B,1}\) and \(\tilde{\lambda}_{B,3}=\lambda_{B,3}\).
It is important to notice that the commutator (11) vanishes only when \(|\lambda_{B,2}|\gg|\lambda_{A,2}|,|\lambda_{C,2}|\). In any other regime, it is of order unity. This justifies the choice of the block Hamiltonian \(H_{0}\): having localized the largest coupling in the system, the block must take the nearest neighbors into account.
Having devised a decimation procedure that preserves the algebra (3), we now show further simplifications which appear at and near the critical points. At or near the transition lines of Fig. 2(b), one of the couplings is much smaller than the competing ones. Without loss of generality, say that \(|\lambda_{C,2}|\ll|\lambda_{A,2}|\). Near the multicritical point, on the other hand, all couplings are of the same order of magnitude. However, under renormalization, the effective disorder is large. Thus, very likely either \(|\lambda_{A,2}|\gg|\lambda_{C,2}|\) or \(|\lambda_{A,2}|\ll|\lambda_{C,2}|\). Without loss of generality, let us consider that \(|\lambda_{C,2}|\ll|\lambda_{A,2}|\).
Thus, the regime \(|\lambda_{B,2}|\gg|\lambda_{A,2}|\gg|\lambda_{C,2}|\) is quite general near and at the transitions. In that case the \(B\)-type operators and renormalized coupling constants simplify to
\[\tilde{\lambda}_{B,1}=-\text{sign}\left(\lambda_{B,2}\right)\lambda_{B,1},\; \tilde{h}_{B,1}=\sigma_{2}^{x}\sigma_{3}^{z}\sigma_{4}^{z}\tilde{\sigma}^{z} \sigma_{7}^{z},\]
\[\tilde{\lambda}_{B,3}=\lambda_{B,3},\;\text{and}\;\tilde{h}_{B,3}=\sigma_{8}^ {x}\sigma_{9}^{z}\sigma_{10}^{z}.\]
(An analogous simplification is obtained in the case \(|\lambda_{A,2}|\ll|\lambda_{C,2}|\) after a convenient redefinition of \(\tilde{\sigma}\).)
As a final simplification, notice that the new effective degrees of freedom \(\tilde{\sigma}_{4}\) and \(\tilde{\sigma}\) appear only in \(\tilde{h}_{B,1}\) and \(\tilde{h}_{C,1}\) through the combinations \(\tilde{\sigma}_{4}^{z}\tilde{\sigma}^{z}\) and \(\tilde{\sigma}_{4}^{y}\tilde{\sigma}^{y}\), which commute with each other. Thus, we can diagonalize the system in those degrees of freedom. The eigenstates are \((|\uparrow_{4},\uparrow_{\sim}\rangle\pm|\downarrow_{4},\downarrow_{\sim}\rangle)/\sqrt{2}\) and \((|\uparrow_{4},\downarrow_{\sim}\rangle\pm|\downarrow_{4},\uparrow_{\sim}\rangle)/\sqrt{2}\). This means that we can fix these effective degrees of freedom in one of these states and obtain four different effective Hamiltonians, namely,
\[\tilde{H}_{1}=\pm\lambda_{B,1}\tilde{h}_{B,1}\pm\tilde{\lambda}_{C,1}\tilde{ h}_{C,1}-\tilde{\lambda}_{A,3}\tilde{h}_{A,3}-\lambda_{B,3}\tilde{h}_{B,3}, \tag{11}\]
where the renormalized operators are \(\tilde{h}_{B,1}=\sigma_{2}^{x}\sigma_{3}^{z}\tilde{\sigma}_{7}^{z}\), \(\tilde{h}_{C,1}=\sigma_{3}^{x}\tilde{\sigma}_{7}^{z}\tilde{\sigma}_{8}^{z}\), \(\tilde{h}_{A,3}=\tilde{\sigma}_{7}^{y}\tilde{\sigma}_{8}^{z}\sigma_{9}^{z}\), and \(\tilde{h}_{B,3}=\tilde{\sigma}_{8}^{x}\sigma_{9}^{z}\sigma_{10}^{z}\), and the renormalized couplings are
\[\tilde{\lambda}_{C,1}=\frac{\lambda_{C,1}\lambda_{C,2}}{\Omega}\;\text{and}\; \tilde{\lambda}_{A,3}=\frac{|\lambda_{A,2}|\,\lambda_{A,3}}{\Omega}. \tag{12}\]
The decimation procedure (11) and (12) is depicted in Fig. 4(a).
#### a.2.2 On the difference between the usual and block SDRG approaches
At first glance, the block SDRG approach devised here is not qualitatively different from the usual SDRG described in Sec. IV.2. It has the advantage, however, of providing a clear reason for neglecting the new operator \(\tilde{h}_{AC}\) in Eq. (27) near and at the phase transitions. Analyzing the differences between these approaches a bit further, we notice that the usual SDRG method generates a hybrid operator \(\tilde{h}_{AC}\) originating from \(A\)- and \(C\)-type original operators. No such operator is generated in the block SDRG method. Instead, the \(B\)-type operators (10) and (10) are actually \(B\)-type operators plus \(AB\)- and \(CB\)-type operators as well. These hybrid operators, however, can be neglected in the regime of well-separated local energy scales (\(|\lambda_{B,2}|\gg|\lambda_{A,2}|\gg|\lambda_{C,2}|\)). The reason is the following. Consider, for instance, the terms inside the parentheses in Eq. (10): the term originating the \(AB\)-type operators (the first term) can be viewed as a small tilt to the molecular field of the \(B\)-type operator (the second term). In the regime \(|\lambda_{A,2}|\gg|\lambda_{C,2}|\), this small transverse molecular field can be neglected.
This possibility of neglecting a hybrid operator in favor of a "pure" one does not appear in the usual SDRG, perhaps because the local Hilbert space (that of \(H_{0}\)) is not large enough to accommodate more than one possibility of renormalization.
## Appendix B Simplified SDRG flow
In this section, we consider the simplified version of the SDRG decimation procedure for the \(p=2\) case. As stated in Sec. IV.2, the first decimation is such that five operators \(h_{1,2,\dots,5}\) are removed and three new ones \(\tilde{h}_{1,2,3}\) are inserted in the system (see Fig. 4). If, for some reason, one could neglect \(\tilde{h}_{2}\), the algebra structure would not change after decimation. This allows for a simplification of the problem. The new operator \(\tilde{h}_{1}\) (\(\tilde{h}_{3}\)) corresponds to a renormalization of the couplings on the sites \(3i-2\) (\(3i-1\)). We can then write an equation for the transformation of the coupling constant distributions. The transformation for the distribution of the couplings on sites \(3i-2\) when the cutoff \(\Omega\) is reduced to \(\Omega-d\Omega\) is
\[\mathcal{P}_{A}\left(\lambda,\Omega-d\Omega\right)\mathcal{N}=\mathcal{P}_{A} \left(\lambda,\Omega\right)+R_{B}\left[\mathcal{P}_{A}\right]+R_{C}\left[\mathcal{P }_{A}\right], \tag{13}\]
where \(\mathcal{N}\) is a normalization constant (which we define later) and
\[\begin{split} R_{X}\left[\mathcal{P}_{Y}\right]&=\mathcal{P }_{X}\left(\Omega,\Omega\right)d\Omega\int d\lambda_{1}d\lambda_{4}\mathcal{P}_{Y} \left(\lambda_{1},\Omega\right)\mathcal{P}_{Y}\left(\lambda_{4},\Omega\right)\\ &\times\left[-\delta\left(\lambda-\lambda_{1}\right)-\delta\left( \lambda-\lambda_{4}\right)+\delta\left(\lambda-\frac{\lambda_{1}\lambda_{4}}{ \Omega}\right)\right],\end{split} \tag{14}\]
is a functional quantifying the change in \(\mathcal{P}_{Y}\) when a \(X\)-type coupling constant is decimated. Here, \(\mathcal{P}_{X}\left(\Omega,\Omega\right)\mathcal{P}_{Y}\left(\lambda_{1}\right) \mathcal{P}_{Y}\left(\lambda_{4}\right)d\Omega d\lambda_{1}d\lambda_{4}\) is the probability of having the decimation of an \(X\)-type coupling which involves the neighboring \(Y\)-type couplings \(\lambda_{1}\) and \(\lambda_{4}\). We need to sum over all possibilities for the values of these neighboring couplings. The first two deltas correspond to the removal of these \(Y\)-type couplings. The last one corresponds to the addition of the renormalized coupling \(\tilde{\lambda}=\frac{\lambda_{1}\lambda_{4}}{\Omega}\).
The normalization constant is important to keep the distribution \(\mathcal{P}_{A}\) normalized after the cutoff is reduced. Integrating both sides of Eq. (11) from \(\lambda=0\) to \(\Omega-d\Omega\), the l.h.s. is simply \(\mathcal{N}\). Up to linear order in \(d\Omega\), the r.h.s is \(1-\left(\mathcal{P}_{A}\left(\Omega,\Omega\right)+\mathcal{P}_{B}\left(\Omega,\Omega\right)+\mathcal{P}_{C}\left(\Omega,\Omega\right)\right)d\Omega\). This is the expected result if one counts that a decimation of type \(B\) removes a net fraction of \(\mathcal{P}_{B}\left(\Omega,\Omega\right)d\Omega\) couplings of type \(A\). In addition, a decimation of \(A\) type removes a fraction of \(\mathcal{P}_{A}\left(\Omega,\Omega\right)d\Omega\) couplings of type \(A\). The beta function for the distribution \(P_{A}\) simplifies to
\[-\frac{\partial\mathcal{P}_{A}}{\partial\Omega}=\mathcal{P}_{A}\left(\Omega \right)\mathcal{P}_{A}-\mathcal{P}_{B,C}\left(\Omega\right)\left(\mathcal{P} _{A}-\mathcal{P}_{A}\otimes\mathcal{P}_{A}\right), \tag{13}\]
where \(\mathcal{P}_{X}\left(\Omega\right)=\mathcal{P}_{X}\left(\Omega,\Omega\right)\), \(\mathcal{P}_{X}=\mathcal{P}_{X}\left(\lambda,\Omega\right)\), \(\mathcal{P}_{B,C}\left(\Omega\right)=\mathcal{P}_{B}\left(\Omega\right)+ \mathcal{P}_{C}\left(\Omega\right)\), and
\[\mathcal{P}_{A}\otimes\mathcal{P}_{A}=\int d\lambda_{1}d\lambda_{4}\mathcal{P }_{A}\left(\lambda_{1},\Omega\right)\mathcal{P}_{A}\left(\lambda_{4},\Omega \right)\delta\left(\lambda-\frac{\lambda_{1}\lambda_{4}}{\Omega}\right). \tag{14}\]
The equivalent equations for \(\mathcal{P}_{B,C}\) are obtained by exchanging \(A\rightleftharpoons B\) and \(A\rightleftharpoons C\).
At criticality, \(\mathcal{P}_{A}=\mathcal{P}_{B}=\mathcal{P}_{C}\) and, thus,
\[\frac{\partial\mathcal{P}_{A}}{\partial\Omega}=\mathcal{P}_{A}\left(\Omega \right)\left(\mathcal{P}_{A}-2\mathcal{P}_{A}\otimes\mathcal{P}_{A}\right).\]
Using the ansatz
\[\mathcal{P}_{A}=\frac{1}{z\left(\Omega\right)\Omega}\left(\frac{\Omega}{ \lambda}\right)^{1-1/z\left(\Omega\right)}, \tag{15}\]
then
\[\frac{1}{\mathcal{P}_{A}}\frac{\partial\mathcal{P}_{A}}{\partial\Omega}=- \frac{\dot{z}}{z}-\frac{1}{z\Omega}+\frac{\dot{z}}{z^{2}}\ln\frac{\Omega}{ \lambda},\]
\[\mathcal{P}_{A}\otimes\mathcal{P}_{A}=\Omega\int_{\lambda}^{\Omega}\frac{dx}{x}\mathcal{P}_{A}\left(x\right)\mathcal{P}_{A}\left(\frac{\lambda\Omega}{x}\right)=\frac{\mathcal{P}_{A}}{z}\ln\frac{\Omega}{\lambda},\]
\[\frac{\dot{z}}{z}+\frac{1}{z\Omega}-\frac{\dot{z}}{z^{2}}\ln\frac{\Omega}{\lambda}=-\frac{1}{z\Omega}+\frac{2}{z^{2}\Omega}\ln\frac{\Omega}{\lambda}.\]
Thus,
\[\frac{\dot{z}}{z}=-\frac{1}{z\Omega}-\frac{1}{z\Omega},\text{ and }\dot{z}=-\frac{2}{\Omega},\]
which are the same. So the ansatz is acceptable. Then,
\[z\left(\Omega\right)=D+2\Gamma,\]
where \(\Gamma=\ln\left(\Omega_{0}/\Omega\right)\) and \(D=z\left(\Omega_{0}\right)\).
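This flow can also be checked numerically. The sketch below is a minimal illustration (not the computation used for the figures in this paper): it iterates the simplified decimation rule \(\tilde{\lambda}=\lambda_{1}\lambda_{4}/\Omega\) on a periodic chain with the couplings stored in the flat cyclic order \([\lambda_{A,1},\lambda_{B,1},\lambda_{C,1},\lambda_{A,2},\dots]\), assuming positive couplings, neglecting \(\tilde{h}_{2}\), and starting from couplings uniform in \((0,1]\), which is the fixed-point form with \(D=z(\Omega_{0})=1\). For the ansatz (15), \(\langle\ln(\Omega/\lambda)\rangle=z\), so the printed estimate should track \(D+2\Gamma\).

```python
import numpy as np

def decimate(lam):
    """One step of the simplified SDRG rule on a periodic coupling list.

    The largest coupling (the cutoff Omega) is removed together with its four
    nearest neighbours, and the two same-type pairs straddling it are replaced
    by lambda_1*lambda_4/Omega (the subleading operator h~_2 is neglected)."""
    j = int(np.argmax(lam))
    omega = lam[j]
    lam = np.roll(lam, 2 - j)              # decimated coupling now sits at index 2
    new1 = lam[0] * lam[3] / omega         # e.g. lambda_C,1 * lambda_C,2 / Omega
    new2 = lam[1] * lam[4] / omega         # e.g. lambda_A,2 * lambda_A,3 / Omega
    return np.concatenate(([new1, new2], lam[5:])), omega

rng = np.random.default_rng(1)
n_cells = 20_000                           # unit cells, i.e. 3*n_cells couplings
lam = rng.uniform(0.0, 1.0, size=3 * n_cells)   # uniform (0,1]: critical, D = 1
omega0 = lam.max()

while len(lam) > 600:
    lam, omega = decimate(lam)
    if len(lam) % 6_000 == 0:
        gamma = np.log(omega0 / omega)
        z_est = np.mean(np.log(omega / lam))    # <ln(Omega/lambda)> = z for the ansatz
        print(f"Gamma={gamma:6.3f}  z_est={z_est:7.3f}  D+2*Gamma={1 + 2 * gamma:7.3f}")
```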
The relation between the number of coupling constants and the cutoff energy scale is
\[N\left(\Omega-d\Omega\right)=N\left(\Omega\right)-3N\left(\Omega\right) \mathcal{P}_{A}\left(\Omega\right)d\Omega,\]
which simplifies to
\[\frac{d\ln N}{d\Omega}=3\mathcal{P}_{A}\left(\Omega\right)=\frac{3}{z\Omega},\]
from which we obtain
\[\ln\frac{N\left(\Omega\right)}{L}=-\frac{3}{2}\ln\frac{D+2\Gamma}{D}.\]
Thus,
\[\ell\equiv\frac{L}{N}=\left(1+\frac{2\Gamma}{D}\right)^{\frac{1}{\psi}},\]
with tunneling exponent \(\psi=\frac{2}{3}>\frac{1}{2}\).
|
2308.11526 | Learning Representations on Logs for AIOps | AI for IT Operations (AIOps) is a powerful platform that Site Reliability
Engineers (SREs) use to automate and streamline operational workflows with
minimal human intervention. Automated log analysis is a critical task in AIOps
as it provides key insights for SREs to identify and address ongoing faults.
Tasks such as log format detection, log classification, and log parsing are key
components of automated log analysis. Most of these tasks require supervised
learning; however, there are multiple challenges due to limited labelled log
data and the diverse nature of log data. Large Language Models (LLMs) such as
BERT and GPT3 are trained using self-supervision on a vast amount of unlabeled
data. These models provide generalized representations that can be effectively
used for various downstream tasks with limited labelled data. Motivated by the
success of LLMs in specific domains like science and biology, this paper
introduces a LLM for log data which is trained on public and proprietary log
data. The results of our experiments demonstrate that the proposed LLM
outperforms existing models on multiple downstream tasks. In summary, AIOps
powered by LLMs offers an efficient and effective solution for automating log
analysis tasks and enabling SREs to focus on higher-level tasks. Our proposed
LLM, trained on public and proprietary log data, offers superior performance on
multiple downstream tasks, making it a valuable addition to the AIOps platform. | Pranjal Gupta, Harshit Kumar, Debanjana Kar, Karan Bhukar, Pooja Aggarwal, Prateeti Mohapatra | 2023-08-18T20:34:46Z | http://arxiv.org/abs/2308.11526v1 | # Learning Representations on Logs for AIOps
###### Abstract
AI for IT Operations (AIOps) is a powerful platform that Site Reliability Engineers (SREs) use to automate and streamline operational workflows with minimal human intervention. Automated log analysis is a critical task in AIOps as it provides key insights for SREs to identify and address ongoing faults. Tasks such as log format detection, log classification, and log parsing are key components of automated log analysis. Most of these tasks require supervised learning; however, there are multiple challenges due to limited labeled log data and the diverse nature of log data. Large Language Models (LLMs) such as BERT and GPT3 are trained using self-supervision on a vast amount of unlabeled data. These models provide generalized representations that can be effectively used for various downstream tasks with limited labeled data. Motivated by the success of LLMs in specific domains like science and biology, this paper introduces a LLM for log data which is trained on public and proprietary log data. Results of our experiments demonstrate that the proposed LLM outperforms existing models on multiple downstream tasks. In summary, AIOps powered by LLMs offers an efficient and effective solution for automating log analysis tasks and enabling SREs to focus on higher-level tasks. Our proposed LLM, trained on public and proprietary log data, offers superior performance on multiple downstream tasks, making it a valuable addition to the AIOps platform.
AIOps, Log Analysis, Large Language Model
## I Introduction
With the growing prevalence of scalable microservices-based applications, log analysis is an integral part of building robust systems [1, 2]. As the scale of applications expands, the quantity of generated logs increases exponentially, posing a challenge for IT teams to analyze them manually [3]. One critical aspect of building resilient systems is log analysis. By analyzing logs generated by various components of the system, IT teams can detect issues, identify their root cause(s), and take corrective action before they impact system availability. In order to extract relevant information, log parsing is necessary, and log format detection plays a crucial role in this process. Accurately detecting the log format allows Site Reliability Engineers (SREs) to focus on the relevant logs and interpret log data effectively. After logs are parsed and collected, monitoring them is crucial for assessing the system's health. Logs can be categorized into different "golden signals" [4] to facilitate this monitoring process. The combination of log format detection and golden signal classification can reduce the mean time to detect an issue. Additionally, accurate fault category prediction is crucial in reducing the mean time to engage the right expert to handle the fault. A deeper understanding and representation of logs plays a crucial role in providing key insights to SREs in detecting faults, improving system availability, and minimizing downtime.
One popular way of obtaining generalized representations for text is Large Language Models (LLMs). Recent years have seen the emergence of pre-trained deep learning LLMs such as BERT [5], GPT-3 [6], PaLM [7], DALL-E [8], and Stable Diffusion [9]. LLMs have been demonstrated to possess significant capabilities for representation learning and are versatile enough to be utilized in diverse downstream applications, such as response generation, summarization, and text-to-image generation. LLMs are built by training a deep neural network on large amounts of data using one or more self-supervised auxiliary tasks. These models provide generalized representations that are not task-specific but can be fine-tuned for multiple downstream tasks with limited labeled data in a few-shot setting. By leveraging the advanced representation learning capabilities of LLMs, log analysis can become even more efficient and effective, leading to significant improvements in application performance and reliability. Our focus is to utilize a log-specific LLM on three log analysis tasks: log format detection, golden signal classification, and fault category prediction.
Directly applying existing LLMs, which are pre-trained on natural language text, to semi-structured log data can be challenging due to the dissimilarities between the two data types. For instance, the lexicon and vocabulary of log data differ from those of natural language text, because log data contains domain-specific words and phrases that are not commonly used outside of their respective fields. As most log messages have a limited grammatical structure, the representations or embeddings produced by pre-trained LLMs do not yield the best results on downstream tasks. Also, capturing token positional information in logs is crucial, as it helps in log parsing tasks. Displayed in Table I are examples of log lines, their corresponding templatized forms [10], and the labels assigned to them for the downstream tasks. These examples demonstrate the crucial role of comprehending the inherent structure, vocabulary, and attributes of log messages in facilitating efficient log analysis. Similar observations have also been reported by several other prior works [11, 12, 13, 14]: pre-trained LLMs do not perform as expected on domain-specific tasks.
Inspired by the LLMs for different domains, such as Tweet-BERT [15], SciBERT [16], and BioBERT [11], we introduce _BERTOps_, an LLM for AI in operations (AIOps) domain, pre-trained over large-scale public and proprietary log data. The emergence of domain-specific LLMs is due to the fact that existing LLMs are trained on general corpora such as news articles and Wikipedia, and hence do not perform as expected for specific domains. To the best of our knowledge,
_BERTOps_ is the first LLM that can effectively generalize to multiple downstream tasks of log analysis. We finetune the pre-trained _BERTOps_ model under a few-shot setting for three downstream tasks - Log Format Detection (LFD), Golden Signal Classification (GSC), and Fault Category Prediction (FCP). For the aforementioned tasks, we show that BERTOps outperforms classical machine learning models and LLMs. The main contributions of this paper are as follows:
1. We propose an encoder-based LLM (_BERTOps_) for the AIOps domain, and show its application on the three downstream tasks for log analysis - Log Format Detection, Golden Signal Classification, and Fault Category Prediction.
2. We provide labeled data for the aforementioned three downstream tasks, and release it publicly along with the code base. We believe that these tasks along with the datasets and the model will work as a benchmark for further research. To the best of our knowledge, there are currently no existing labeled datasets available for the Golden Signal Classification task and Fault Category Prediction task.
3. Our experiments suggest that the encoder-based _BERTOps_ exhibits few-shot generalization with purely unsupervised training. To demonstrate the effectiveness of an LLM for AIOps, we compare _BERTOps_ with classical machine learning models and pre-trained encoder-based models such as BERT and RoBERTa. An important observation is the significant gain in performance of _BERTOps_ vis-a-vis classical ML models on log analysis tasks.
## II Method
In this section, we introduce _BERTOps_, a large-scale Transformer model for log analysis and understanding. Figure 1 shows the end-to-end architecture of _BERTOps_, including pretraining the model and finetuning it on the downstream tasks. The downstream tasks for log analysis are mostly classification-based; therefore, we build an encoder-based LLM for log data. The architecture of _BERTOps_ follows _BERT-BASE_[5], with \(12\) layers, \(12\) self-attention heads, and a representation layer of size \(768\). Since there is a vocabulary overlap between natural language text and log data, especially for non-domain-specific words and phrases, the pretrained weights from BERT-BASE are used as the initialization weights for _BERTOps_, which is then further pretrained on log data. The intuition is to bootstrap _BERTOps_ with BERT's knowledge of natural language text, so that during pretraining it can focus on learning the vocabulary and representations specific to log data. The transformer encoder of BERTOps is further pretrained on log data using the masked language modeling (MLM) task [5]. During the pretraining phase, a percentage of the tokens in a log sequence are randomly masked, as shown in Figure 1. The objective of the MLM task is to predict the token at the masked position based on the neighbouring context tokens. For example, Figure 1 demonstrates an instance of token masking in which the token "server" is masked; the MLM task utilizes the embedding \(H_{masked}\) to predict the original token, "server", using the cross-entropy loss (\(L_{MLM}\)). We continue pretraining BERTOps until the training loss (\(L_{MLM}\)) saturates, and use the model with the least validation loss as the final model. For example, Figure 2 shows that the validation loss is minimal at epoch \(5\). Note also that the validation loss saturates between epochs \(5\) and \(6\). Although we continued training after epoch \(6\) in the hope that the loss would decrease further, it did not; instead, it rose slightly and returned to the same level in the following epoch, which is a sign that the loss had converged. We therefore used the model checkpoint at epoch \(6\) as the final model for the downstream tasks. Besides using the validation loss for model selection, we also used perplexity scores to monitor progress during pretraining. Perplexity [17] measures how well an LLM predicts a sequence of words: the lower the perplexity score, the better the language model is at predicting the next word. It is calculated as follows:
\[PP(\mathbf{w})=\sqrt[N]{\frac{1}{P(w_{1},w_{2},...,w_{N})}}\]
where \(\mathbf{w}=w_{1},w_{2},...,w_{N}\) is a sequence of words, and \(P(w_{1},w_{2},...,w_{N})\) is the joint probability of the sequence
of words to occur together. As the training continued, the perplexity score came down from \(87.9937\) at epoch \(0\) to \(1.7363\) at epoch \(5.4\), the same checkpoint which had the least validation loss. The high perplexity score at epoch \(0\) validates the hypothesis mentioned in the Introduction (Section I) that, unlike natural language text, log lines do not have a proper grammatical structure. On further training, the perplexity score dropped, which suggests that _BERTOps_ adjusted its weights to build representations in accordance with the syntactic and semantic structure of words appearing in log data.
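For illustration, the snippet below shows how an MLM-style perplexity can be obtained with the HuggingFace transformers library; in practice, the \(N\)-th root of the inverse joint probability above is evaluated as the exponential of the mean masked-token cross-entropy. The checkpoint name is a stand-in (the BERTOps checkpoint itself is not referenced by a public model id here), and the log lines are made-up examples.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

# stand-in checkpoint; in the paper, the model being monitored would be BERTOps itself
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
model.eval()

# 15% of tokens are randomly masked, as in standard BERT-style MLM pretraining
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

log_lines = [
    "connection to server 10.0.0.5 closed by remote host",
    "failed to authenticate user admin from 192.168.1.7 port 22",
]
batch = collator([tokenizer(line, truncation=True) for line in log_lines])

with torch.no_grad():
    loss = model(**batch).loss        # mean cross-entropy over the masked positions only
print("MLM perplexity estimate:", math.exp(loss.item()))
```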
As mentioned earlier, pretraining LLMs requires a huge amount of data. For BERTOps, the pretraining data consists of logs from \(17\) different data sources, out of which \(12\) are collected from an open-source repository [18], and the remaining \(5\) are proprietary data sources. Tables II and III show the number of instances per log format for pretraining BERTOps. Proprietary data sources consist of log formats from custom applications whose structure are different from existing public log sources. The proprietary data sources
Fig. 1: BERTOPs end-to-end architecture pre-training and finetuning on three downstream tasks. The model has been designed to be plug-and-play such that new downstream tasks can be added when required. It computes two losses: pretraining loss \(L_{MLM}\) and task specific finetuning loss \(L_{classification}\).
Fig. 2: Train vs Validation Loss during pretraining of BERTOps
only constitute \(0.937\%\) of the entire pretraining corpus. Therefore, we can infer that pretraining on only the public datasets should not deviate significantly from the results presented in this paper. In order to generate the training and validation sets for our experiments, an \(80:20\) split was performed on each log source. Subsequently, the resulting data was combined to form the training and validation sets, which comprised \(43.5M\) and \(10.9M\) log lines, respectively.
Data preprocessing of the pretraining datasets consists of using regular expressions to split tokens that are in camelCase format or joined by periods or dashes, followed by converting all tokens to lower case. The pretraining of BERTOps involved 8 epochs that spanned a 20-day period. Huggingface's implementation1 was used for pretraining BERTOps on \(4\)\(A100\) GPUs with a batch size of \(256\). To obtain statistics for _Out-of-Vocabulary (OOV)_ words in the pretraining dataset, we calculated the frequency of \(<\)unk\(>\) tokens (which signify OOV). Our analysis revealed that, after utilizing BERT's tokenizer, \(0.0062\%\) and \(0.0061\%\) of tokens in the training and validation data, respectively, correspond to OOV. Once pretraining of the BERTOps model is complete, it is finetuned using a cross-entropy loss \(L_{classification}\) for each task separately. More details on finetuning BERTOps are discussed in subsection III-C.
Footnote 1: [https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_nlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_nlm.py)
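The exact regular expressions used in this preprocessing are not published; the following is an illustrative reconstruction of the steps described above (camelCase splitting, splitting on periods and dashes, lowercasing).

```python
import re

def preprocess_log_line(line: str) -> str:
    """Illustrative version of the described preprocessing; the exact
    expressions used for BERTOps are assumptions."""
    line = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", line)  # camelCase -> camel Case
    line = re.sub(r"[.\-]", " ", line)                   # com.ibm.websphere -> com ibm websphere
    return re.sub(r"\s+", " ", line).strip().lower()

print(preprocess_log_line("com.ibm.websphere.security AuthenticationFailed for userName"))
# -> com ibm websphere security authentication failed for user name
```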
## III Experimental Setup
The usefulness of a pretrained LLM hinges upon how well it performs across multiple downstream tasks, that is, how effectively it can apply or transfer the knowledge gained through pre-training on large corpora to the downstream tasks. This section introduces three downstream tasks for log analysis in the AIOps domain, the process of preparing labeled datasets for these tasks, and the finetuning of the pretrained LLM on these tasks. The three tasks are Log Format Detection, Golden Signal Classification, and Fault Category Prediction.
### _Downstream Tasks_
To evaluate the effectiveness of the _BERTOps_ model and the other baselines, three downstream log analysis tasks are defined.
1. **Log Format Detection (LFD)** Identifying the format of logs is an important first step of any log analysis pipeline [19, 20, 21]; knowledge of a format's unique structure can be leveraged to parse the logs and aid in tasks like structure extraction [22], key entity/metric extraction, anomaly detection [23], etc. With multiple log variations within each log format (see the Templates column in Table IV), learning to distinguish them from a few training samples is challenging. In this task, given logs from varied sources, we train a multi-class classification model that learns to distinguish logs from \(16\) different log formats. Tables II and III provide the number of templates (indicating variations) per format.
2. **Golden Signal Classification (GSC)** The Google SRE handbook [4] outlines basic principles and practices for monitoring applications hosted on the cloud. Golden signals are a set of key performance indicators (KPIs) used for monitoring log and metric events to analyze the health of a system. One of the key benefits of monitoring the golden signals is that it offers a convenient and efficient method for detecting and troubleshooting errors. By monitoring these KPIs, it is possible to detect anomalies and trends that can indicate problems before they become critical. This enables teams to proactively respond to issues and ensure that their systems are running smoothly and efficiently. The four golden signals defined in the SRE handbook are latency, traffic, errors, and saturation. Latency measures the time it takes for a request to be processed, errors count the number of failed requests, saturation measures the degree to which system resources are being used, and traffic measures how much demand is being placed on the system.
For log lines that do not correspond to any of the existing golden signals, we have defined an additional class called "Information". Table V enumerates examples of each golden signal classification label.
3. **Fault Category Prediction (FCP)** Fault categories associated with logs serve as signals to detect anomalous system behaviours and provide clues for failure diagnosis. That is, a Fault category helps to understand why a service fails and may help in isolating the root causes of the failure. In addition, fault categories can also be used to route the ongoing fault/issue to appropriate teams for debugging and remediation. Through this task, we aim to build a fault categorization model that classifies a log line or raw log message into one of the \(7\) fault categories: Memory, Network, Authentication, I/O, Device, Application, and Other [24]. Table VI provides an example for each fault category.
### _Labeled Datasets for Downstream Tasks_
Training a model for a task from scratch requires a huge amount of labeled data, which is hard and expensive to generate. Since we aim to evaluate the effectiveness of the fine-tuned LLM for the downstream tasks with only a few labeled training examples, it is crucial to have an extensive labeled test set to validate its performance. This work aims to provide a labeled training dataset for fine-tuning models under a few-shot setting. Additionally, an annotated test dataset will be made available, which can serve as a suitable benchmark for these tasks, thereby enabling researchers to make comparisons and evaluate the performance of their models. Log data for the downstream tasks is sourced from LogHub [18] and proprietary data sources. While we had access to gold-standard annotations for the LFD task, we manually curated annotated data samples for the other two tasks.
For the LFD task, a dataset was prepared from the \(16\) formats consisting of Android, Apache, BGL, HDFS, HPC, Hadoop, HealthApp, Mac, Openstack, Proxifier, SSH, Syslog-Sendmail, Spark, Thunderbird, Websphere and Zookeeper. Out of the \(16\) log formats, \(4\) (HealthApp, SSH, Sendmail, WebSphere) were treated as held-out, i.e., these \(4\) log formats were not used for pretraining _BERTOps_ (see Tables II and III), but they are used when finetuning the pretrained LLMs for the downstream tasks. The goal of holding out these \(4\) datasets is to test the generalizability of the LLMs, including _BERTOps_, in handling unseen datasets for the downstream tasks.
Preparing labeled datasets for the GS Classification and FC Prediction tasks is a challenging process for two reasons: first, the amount of data is huge, and second, it is practically infeasible for human annotators to label each log line. To mitigate this, we templatized the logs for each format using the Drain template miner [22]. Templatization clusters logs into homogeneous groups that share the same structure. We distributed a few instances of each log template among the annotators. For example, Table III shows that the number of log lines for the MongoDB log format is approximately 150K, which can be grouped into 14 templates. Instead of labeling each of the 150K log lines, the annotators were asked to label only 14 templates. The template labels were then reverse-mapped to the actual log lines associated with each template. This reduced the manual labeling effort manyfold while maintaining good coverage.
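A sketch of this label-propagation step is shown below. It assumes the open-source drain3 implementation of the Drain miner (the paper only states that Drain was used), and the label values are placeholders for the annotations a subject matter expert would supply.

```python
from drain3 import TemplateMiner  # open-source Drain implementation (assumed here)

miner = TemplateMiner()
raw_logs = [
    "connection accepted from 10.0.0.5:27017",
    "connection accepted from 10.0.0.9:27017",
    "user admin authentication failed",
    "user guest authentication failed",
]

# cluster every raw line into a Drain template
cluster_of = [miner.add_log_message(line)["cluster_id"] for line in raw_logs]

# an annotator labels one representative line per template instead of every log line
representative = {}
for line, cid in zip(raw_logs, cluster_of):
    representative.setdefault(cid, line)
template_label = {cid: "Information" for cid in representative}  # placeholder labels

# reverse-map the template labels back to all member log lines
line_labels = [template_label[cid] for cid in cluster_of]
print(list(zip(raw_logs, line_labels)))
```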
For the GS classification task, \(7\) subject matter experts were provided with \(272\) templates for labeling. The overlap of log templates among the annotators was \(66\%\). Similarly, for the FC prediction task, \(394\) templates were provided for labeling, with an overlap of \(43\%\). After labeling, it was observed that the Traffic class had a skewed distribution with very few examples; this skewed distribution ultimately diminished the overall quality of the dataset. As a result, the final labeled dataset does not include the Traffic class. Once the labeling of templates for the GS and FC tasks was completed, the templates on which annotators disagreed were revisited to resolve the differences; conflicts were resolved using a majority vote. The Kappa coefficient [25] is used as the metric for computing inter-annotator agreement. For GSC and FCP, the inter-annotator agreement is \(60.62\) and \(65.80\), respectively. Inter-annotator agreement in the range of 60-65 indicates the complexity associated with these tasks.
To collate the k-shot training dataset for each task, where \(k\in\{10,20,30\}\), we randomly selected \(k\) templates with discernible variations, i.e., we curate the 10-, 20-, and 30-shot training datasets by finding 10, 20, or 30 samples of each label for each task. For example, the 10-shot training set for log format detection has \(16\times 10\) log samples. We used the remaining templates, along with their groups of instances, as the test set. For each of the three downstream tasks, the data statistics of the labeled dataset are presented in Table VII.
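The k-shot splits can be assembled with a routine along the following lines; this is a simplification, since the screening of templates for "discernible variations" was a manual step that is not reproduced here.

```python
import random
from collections import defaultdict

def make_k_shot_split(templates, k, seed=0):
    """templates: list of (text, label) pairs, one entry per log template.
    Returns k randomly chosen templates per label as the k-shot training set
    and the remaining templates as the test pool."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in templates:
        by_label[label].append((text, label))
    train, test = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        train.extend(items[:k])
        test.extend(items[k:])
    return train, test

# e.g. a 10-shot LFD split over 16 formats yields 16 * 10 = 160 training samples
```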
### _Finetuning_
A widely used industry practice is to build an LLM which, upon release, is adopted and deployed for the downstream
tasks by application-focused developers and businesses. The pre-trained _BERTOps_ model prepared in Section II is further finetuned for each task separately. Finetuning a model for a particular task requires labeled data. In real-world scenarios, clients find it difficult to share large payloads of private data to train models for various tasks. Another challenge is the availability of labeled data; data annotation for each task is an expensive and laborious exercise. Few-shot learning mitigates this bottleneck by teaching the models to classify with very few data samples.
For the downstream tasks described in Section III-A, we finetune BERTOps in a few-shot setting to evaluate the effectiveness of the learned transformer representations. As shown in Figure 1, we add a classification layer on top of BERTOps consisting of a linear layer followed by a softmax activation. We use the simpletransformers2 library for the experiments on the downstream tasks. All downstream tasks use the cross-entropy loss as the objective function. The finetuning process updates all model parameters, including the pretrained weights. These experiments are performed on a single \(A100\) GPU server for \(20\) epochs with the AdamW optimizer [26] and a learning rate of \(4e\)-\(5\).
Footnote 2: [https://simpletransformers.ai/](https://simpletransformers.ai/)
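A minimal finetuning sketch with simpletransformers is given below. The model path, label set, and data frame are placeholders ("path/to/bertops" stands for the locally pretrained BERTOps checkpoint, and the real tasks use 16, 5, and 7 labels rather than the toy two-class setup shown); the epoch count and learning rate mirror the values quoted above.

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel, ClassificationArgs

# k-shot training data: one row per log line with an integer label (placeholder examples)
train_df = pd.DataFrame(
    [("onPartition took 20 ms", 0), ("java.lang.OutOfMemoryError: Java heap space", 1)],
    columns=["text", "labels"],
)

args = ClassificationArgs(
    num_train_epochs=20,
    learning_rate=4e-5,
    overwrite_output_dir=True,
)

# "path/to/bertops" is a placeholder for the locally pretrained BERTOps checkpoint
model = ClassificationModel("bert", "path/to/bertops", num_labels=2, args=args, use_cuda=False)
model.train_model(train_df)
preds, _ = model.predict(["connection timed out while waiting for response"])
```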
## IV Experiments
In this section, we present the results of our experimental study, followed by their analysis. The experiments are designed to answer the following questions: (1) Is an AIOps domain-specific LLM required? (2) How well does the proposed _BERTOps_ LLM perform in comparison to the other baseline models on the three downstream tasks, i.e., Log Format Detection, Golden Signal Classification, and Fault Category Prediction? (3) How effective is the few-shot learning approach?
### _Baseline Models_
In this section, we list the baseline models that we use for comparison in our experiments. LLMs such as BERT are known to outperform LSTMs and RNNs [27, 28]; therefore, we do not include the latter in our study. To answer the first question, we compare the performance of _BERTOps_ with state-of-the-art LLMs such as ALBERT-base-v2 [29], ELECTRA [30], XLNET [31], RoBERTa [32], and BERT-base [5]. For most AIOps tasks, the state-of-the-art models are either rule-based or ML-based [33, 34]; hence, to answer the second question, we include two ML models as baselines: Decision Tree (DT) and Stochastic Gradient Descent (SGD). These models were trained using the scikit-learn library3 for each of the three downstream tasks. To answer the third question, we run experiments under the few-shot setting with \(k\in\{10,20,30\}\) (see Sections III-B and III-C for details on finetuning).
Footnote 3: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/)
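The paper does not state the featurization used for the ML baselines; a plausible setup, shown purely as an illustration with made-up data, is a TF-IDF pipeline feeding the scikit-learn classifiers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

texts = ["connection timed out", "authentication failure for user root", "disk is 95% full"]
labels = ["Latency", "Error", "Saturation"]

sgd_baseline = make_pipeline(TfidfVectorizer(), SGDClassifier(random_state=0))
dt_baseline = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))

sgd_baseline.fit(texts, labels)
dt_baseline.fit(texts, labels)
print(sgd_baseline.predict(["request timed out after 30s"]))
```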
### _Results_
Tables VIII-X present the weighted Precision, Recall, and F1-scores obtained with the \(10\)-, \(20\)-, and \(30\)-shot datasets used for finetuning _BERTOps_ on the three downstream tasks. We observe that _BERTOps_ outperforms all pretrained LLMs and ML models on the three datasets, except on Fault Category Prediction when only 10 examples are provided for finetuning.
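For reference, the weighted scores reported here are support-weighted averages of the per-class metrics, as computed, for example, by scikit-learn; a minimal sketch with dummy labels:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = ["Error", "Latency", "Error", "Saturation", "Information"]
y_pred = ["Error", "Error", "Error", "Saturation", "Information"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"weighted P={precision:.3f}  R={recall:.3f}  F1={f1:.3f}")
```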
For the LFD task, the proposed _BERTOps_ LLM learns faster with minimal training data than the other models. Even with as few as \(10\) samples per format provided for finetuning, _BERTOps_ achieves an F1 score of \(97.23\%\). In contrast, when \(30\) examples are provided for finetuning, the best ML model (SGD) and the best LLM (RoBERTa) have weighted F1-scores of \(90.34\%\) and \(98.3\%\), respectively, the latter being very close to _BERTOps_ trained with 10 examples. With the \(30\)-shot dataset, _BERTOps_ shows percentage improvements of \(9.98\%\) and \(1.08\%\) over the best-performing ML model (SGD) and pre-trained LLM (RoBERTa), respectively.
In the GS Classification task, we observe significant improvements in F1 score with LLMs vis-a-vis the ML models. When the number of training samples per class increases from \(20\) to \(30\), the best-performing ML model (SGD) improves by approximately \(4.1\) absolute points (F1 from \(67.55\) to \(71.65\)), whereas the best-performing LLM (RoBERTa) and _BERTOps_ improve by \(7.92\) and \(8.81\) absolute points, respectively. These results indicate that _BERTOps_, being a domain-specific LLM, not only outperforms existing state-of-the-art models but also starts from a better baseline (\(58.95\) vs \(50.91\) for SGD and \(58.95\) vs \(56.03\) for BERT-base). The percentage improvements of _BERTOps_ with respect to the best-performing ML model (SGD) and LLM (RoBERTa) on the \(30\)-shot dataset are \(9.28\%\) and \(1.65\%\), respectively.
In the FC Prediction task, _BERTOps_ does not record a performance gain when finetuned on the smaller dataset of 10 examples per class; however, it outperforms all baseline methods when finetuned with 20 or 30 examples. Also, when finetuning with 10 examples, although ALBERT-base has the highest F1-score, _BERTOps_ outperforms all baseline models in terms of precision. This means _BERTOps_ exhibited higher precision in identifying the correct fault for a log line, which is an important metric in the FC Prediction task. Precision measures the ratio of the number of correct predictions to the total number of predictions; a higher precision means that BERTOps has fewer false positives than the other methods. Recall, on the other hand, measures the ratio of the number of correct predictions to the number of ground-truth instances. A lower recall together with a higher precision indicates that BERTOps failed to identify some instances of the fault categories, but the ones it did identify were largely correct. Also,
it is impressive to observe that BERTOps quickly learned and corrected itself when provided with more training examples. That is, when 30 training examples were provided for finetuning, _BERTOps_ achieves an absolute increase of 23.84 points in recall, the highest improvement among all models, whereas ALBERT-base improves by only 5.42 absolute points. We observe a similar trend in F1-score, where BERTOps and ALBERT-base improve by 21.42 and 3.98 absolute points, respectively.
Also worth noting are the inter-annotator Kappa scores and the F1 scores of the ML models for the GS and FC classification tasks. The inter-annotator agreement was in the range of 60-65, and the F1 scores of the ML models for both tasks are in the range of 50-70. It is generally believed that inter-annotator agreement places an upper bound on the performance of ML models [35], which is also evident in these results. Our experiments also indicate an intriguing result: the classical ML models sometimes outperform the state-of-the-art LLM baselines. Specifically, for the GS Classification task with the 20-shot dataset for finetuning, SGD outperforms most of the LLMs, such as ALBERT-base, ELECTRA, XLNet, and BERT-base. A similar trend is observed for the FC Prediction task with the 20-shot dataset, where SGD outperforms both XLNet and BERT. One possible reason for this lies in the challenges associated with log data, which we highlighted in the Introduction. Note that BERT and the other LLMs are pre-trained on natural language text and hence struggle with log data when only a few finetuning examples are available for the downstream tasks. Because _BERTOps_ is pre-trained on log data, it has an obvious advantage in working with log data for the downstream tasks. Moreover, when more training examples are provided to _BERTOps_, it learns quickly and shows significant jumps in performance compared to other LLMs. The higher performance of _BERTOps_ compared to both the ML models and the existing LLMs, and its ability to learn quickly when more training examples are provided, indicate that an LLM for logs is warranted. The experimental findings suggest that _BERTOps_ is capable of providing a more
accurate and generalizable representation of a log line, which can be effectively utilized for various downstream tasks in AIOps.
## V Discussion
This section presents a detailed analysis of _BERTOps_ for each downstream task at the individual class level. The analysis includes a qualitative evaluation of _BERTOps_, finetuned using a 30-shot dataset. In addition, we provide examples of predictions generated by _BERTOps_ for the three tasks, juxtaposed with BERT's predictions.
For the LFD task, Table XI shows a truncated confusion matrix; to save space, it shows only the 6 out of 16 classes for which predictions are not 100% accurate, i.e., for the other 10 classes not shown in the table, the predictions were entirely correct. Among the four held-out log formats (Section III-B), the performance on HealthApp, SSH and SendMail is 100%, while on WebSphere (WS) it is 99.18%. One reason for this superior performance could be the pretraining of _BERTOps_ on varying log lines from 17 log formats, which enabled it to learn robust and generalized representations. The LFD model is most confused between the following classes: Mac, Spark, and WebSphere. To understand this behavior, Table XIV analyzes 7 log lines with their ground-truth labels, along with the predictions of _BERTOps_ and BERT-base.
While BERT-base misclassified log lines 1-3, _BERTOps_ identified subtle nuances in the log data to make correct predictions, which is critical for accurately identifying log formats. For example, the term _websphere_ appears twice in example log line \(3\); BERT-base failed to recognize it and misclassified the line as Spark, whereas _BERTOps_ predicted the correct label _Websphere_. Note that the term _websphere_ does not appear as an independent token in the log line; it is part of the phrase _com.ibm.websphere.security_. Existing pretrained LLMs such as BERT fail to interpret such tokens, resulting in wrong predictions. We mentioned some of the challenges associated with log data in the Introduction (Section I); one such challenge is the lexicon and grammatical structure of log lines. To address these challenges, we preprocessed the log data for pretraining the BERTOps model (Section III-B), i.e., using regular expressions to split tokens that are in camelCase format or joined by periods. This preprocessing helped BERTOps learn better representations of log lines, which is why it identifies the log format correctly in these examples. These examples demonstrate the ability of _BERTOps_ to capture contextual information in log data, which is crucial for precise log format detection.
Among the five classes in the GS Classification task, _BERTOps_ had the least confusion in identifying the Information class (Table XII). To enhance incident resolution in AIOps, it is crucial to minimize false positives in the informational class, as they can impede the fault diagnosis process (Section III-A). This result indicates that _BERTOps_ is able to reduce the false alarm rate, which is very useful for an SRE.
However, _BERTOps_ performed relatively poorly in identifying log lines of type Availability: 21.13% of them are misclassified as Error. The confusion matrix also shows that _BERTOps_ confuses 12.12% of Latency examples and 18.18% of Saturation examples with the Error class. To understand the reason for this behavior, we examined the examples listed in Table XV, which shows several log lines, their true labels, and the predictions from _BERTOps_ and BERT. The first three examples demonstrate that _BERTOps_ correctly identified golden signals by focusing on domain-specific keywords in the log lines, such as _"time out"_ (first example), _"unavailable state"_ (second example) and _"action error"_ or _"Error: Unknown"_ (third example). However, in the fourth example, _BERTOps_ might have paid more attention to the phrase _"Unexpected error"_ than to the phrase _"BlockInfo not found"_, leading to the incorrect prediction Error, whereas the actual class is Availability. Similarly, in the fifth example, _BERTOps_ might have given more weight to the phrase "service() exception", which resulted in a misclassification: the predicted class is Error whereas the actual class is Saturation. Ideally, it should have focused on _"java.lang.OutOfMemoryError"_,
which is more indicative of the Saturation signal. The sixth example is particularly interesting because it contains phrases related to both the Availability (_Connect not available_) and Latency (_timed out waiting_) classes, making it difficult for the model to assign a single golden signal. While the annotators chose Availability as the correct label, _BERTOps_ predicted Latency, which highlights the ambiguity and complexity of this task. This example underscores the difficulty of precisely assigning golden signal labels to log lines that possess subtle nuances and multiple signals, to the point of confounding even human experts.
For the FC Prediction task (Table XIII), we found that _BERTOps_ was highly accurate in identifying log lines related to memory and I/O faults, and the second-best performance is observed for the authentication (Auth) class. _BERTOps_ also demonstrated similar accuracy in identifying application (App), device, and network faults. However, it was confused between the following pairs of class labels: Auth with Other (\(13.04\%\)), Network with App (\(8.54\%\)) and Other (\(9.76\%\)), and Other with App (\(10.04\%\)) and Network (\(11.09\%\)); it appears that the model struggled to separate the App, Other, and Network classes. Table XVI shows examples of some of these cases with true labels along with predictions from _BERTOps_ and BERT-base. In the first two examples, _BERTOps_ correctly identified the fault category by leveraging domain-specific phrases, associating the phrase _"sending a request"_ with the class Network in the first example and the phrase _"User authentication canceled"_ with the class Auth in the second example. However, in the third example, _BERTOps_ may have interpreted _SRAM_ as a type of memory, leading to a prediction of the memory class, whereas the annotators labeled it as the device class. This example highlights the complexity of this task, which can be difficult even for a human to classify.
The last example related to FC prediction is worth noting: _BERTOps_ excels at identifying the fault category in log lines that exhibit abnormal behavior4, but struggles to classify the fault category when no such behavior is present. For example, the last example in Table XVI shows that _BERTOps_ identified the context (Network) but failed to identify the correct class label Other; this is because the log line is not related to a failure. Based on the above analysis, it can be inferred that while BERTOps performs commendably and even outperforms state-of-the-art baselines, there is still scope for improvement in handling the intricacies of AIOps tasks.
Footnote 4: Abnormal behavior may include errors, warnings, exceptions, and other unexpected behaviors that may indicate a problem or fault in the system
## VI Related Work
Existing LLMs are based on the transformer architecture [36], and they can be broadly categorized into three types: encoder-based, decoder-based, and encoder-decoder-based. Encoder-based LLMs consist of an embedding layer followed by a stack of encoder layers, i.e., they use only the encoder part of the transformer architecture. Examples of encoder-based LLMs are BERT [5], RoBERTa [32], Electra [30], XLNet [31], etc. Decoder-based LLMs consist of an embedding layer followed by a stack of decoder layers, for example, GPT3 [6], PaLM [7], BLOOM [37], etc. These models are autoregressive, i.e., at each step the output generated in the previous step is fed back as input to produce the output for the following step. Encoder-decoder-based LLMs consist of both the encoder and the decoder layers. Examples of encoder-decoder-based LLMs are BART [38], T5 [39], etc. Encoder-based LLMs are suited for Natural Language Understanding (NLU) tasks, such as classification. The decoder-based and encoder-decoder-based LLMs are suited for generation tasks such as summarization, translation, etc. Our proposed LLM for logs is an encoder-based model because the majority of the downstream tasks only need representational information to classify a log line into a relevant class.
Rule-based approaches are commonly applied for the three downstream tasks Log Format Detection, Golden Signal Classification, and Fault Category Detection. Log aggregators like LogDNA [40] and Splunk [41] use manually curated rules for log format detection and log parsing. Nagar et al. [42] used a dictionary-based approach that built rules using automatically created dictionaries for each golden signal category. The dictionaries were built using an in-house IT domain-specific GloVe word embedding model. Zou et al. [24] proposed a fault category detection system using manually defined regular expressions and a custom-built dictionary. However, writing a set of regular expressions can be expensive and difficult to scale and maintain, requiring significant manual effort and engineering. The process of curating new rules for each new log format or previously unseen log can be arduous and time-consuming.
Recently, research works on using transformers for log analysis have emerged. LogBERT, BERT-Log, and NeuralLog are BERT-based models trained specifically for the task of log anomaly detection [43, 44, 45]. These models are not truly large language models, as they are pre-trained specifically for the log anomaly detection task and are not applied to any other downstream tasks. We are the first to propose an LLM for logs, utilizing a transformer architecture and fine-tuning on multiple downstream tasks.
## VII Conclusion and Future Work
This paper proposes a Transformer-based Large Language Model for AIOps (_BERTOps_) which is trained on a large corpus of public and proprietary log data. We finetuned the pre-trained _BERTOps_ model on three downstream log analysis tasks: Log Format Detection, Golden Signal Classification, and Fault Category Prediction. Our experiments show that the proposed _BERTOps_ LLM is competitive with both traditional ML models and state-of-the-art generic LLMs on the three downstream tasks, even with minimal training data, and we observe significant improvements as more training data is provided. For future work, we will apply _BERTOps_ to other AIOps tasks such as log parsing, log anomaly detection, incident prediction, and incident prioritization; this will also require defining additional auxiliary tasks for pretraining. In the AIOps domain, there are other modalities of data such as metrics and request traces. We also plan to add data from different modalities during training to produce a holistic and robust _BERTOps_ for all AIOps downstream tasks.
|
2308.09369 | Single Frame Semantic Segmentation Using Multi-Modal Spherical Images | In recent years, the research community has shown a lot of interest to
panoramic images that offer a 360-degree directional perspective. Multiple data
modalities can be fed, and complimentary characteristics can be utilized for
more robust and rich scene interpretation based on semantic segmentation, to
fully realize the potential. Existing research, however, mostly concentrated on
pinhole RGB-X semantic segmentation. In this study, we propose a
transformer-based cross-modal fusion architecture to bridge the gap between
multi-modal fusion and omnidirectional scene perception. We employ
distortion-aware modules to address extreme object deformations and panorama
distortions that result from equirectangular representation. Additionally, we
conduct cross-modal interactions for feature rectification and information
exchange before merging the features in order to communicate long-range
contexts for bi-modal and tri-modal feature streams. In thorough tests using
combinations of four different modality types in three indoor panoramic-view
datasets, our technique achieved state-of-the-art mIoU performance: 60.60% on
Stanford2D3DS (RGB-HHA), 71.97% Structured3D (RGB-D-N), and 35.92% Matterport3D
(RGB-D). We plan to release all codes and trained models soon. | Suresh Guttikonda, Jason Rambach | 2023-08-18T08:06:18Z | http://arxiv.org/abs/2308.09369v1 | # Single Frame Semantic Segmentation Using Multi-Modal Spherical Images
###### Abstract
In recent years, the research community has shown a lot of interest in panoramic images that offer a \(360^{\circ}\) directional perspective. To fully realize their potential, multiple data modalities can be fed, and complementary characteristics can be utilized for more robust and rich scene interpretation based on semantic segmentation. Existing research, however, mostly concentrated on pinhole RGB-X semantic segmentation. In this study, we propose a transformer-based cross-modal fusion architecture to bridge the gap between multi-modal fusion and omnidirectional scene perception. We employ distortion-aware modules to address extreme object deformations and panorama distortions that result from the equirectangular representation. Additionally, we conduct cross-modal interactions for feature rectification and information exchange before merging the features in order to communicate long-range contexts for bi-modal and tri-modal feature streams. In thorough tests using combinations of four different modality types on three indoor panoramic-view datasets, our technique achieved state-of-the-art mIoU performance: \(60.60\%\) on Stanford2D3DS [2] (RGB-HHA), \(71.97\%\) on Structured3D [44] (RGB-D-N), and \(35.92\%\) on Matterport3D [5] (RGB-D) 1.
Footnote 1: We plan to release all codes and trained models soon.
## 1 Introduction
With the increased availability of affordable commercial 3D sensing devices, researchers have in recent years become increasingly interested in working with omnidirectional images, also often referred to as \(360^{\circ}\), panoramic, or spherical images. In contrast to pinhole cameras, the captured spherical images provide an ultra-wide \(360^{\circ}\times 180^{\circ}\) field-of-view (FoV), allowing more detailed spatial information about the entire scene to be captured from a single frame [14, 43]. Practical applications of such immersive and complete view perception include holistic and dense visual scene understanding [1], augmented- and virtual reality (AR/VR) [26, 37], autonomous driving [11], and robot navigation [6].
Generally, spherical images are represented using equirectangular projection (ERP) [38] or cubemap projection (CP) [31], which introduces additional challenges such as scene discontinuities, large image distortions, and object deformations; a further obstacle is the lack of open-source datasets with diverse real-world scenarios. While extensive research has been conducted on pinhole-based learning methods [4, 22, 34, 35, 34], approaches tailored for processing ultra-wide panoramic images that inherently account for spherical deformations remain an active area of research. Furthermore, the scarcity of labeled data required for training models on panoramic images, in both indoor and outdoor scenarios, has slowed progress in this domain.
While previous panorama segmentation techniques have attained state-of-the-art performance for **RGB**-only images, they do not take advantage of complementary modalities to develop discriminative features in situations where it is difficult to discriminate based on texture information alone. Our work extends the current Trans4PASS+ [41] methodology to multi-modal panoramic semantic segmentation with comprehensive cross-modal interactions for the **RGB-X** modality [22]. For the Stanford2D3DS [2] dataset, we evaluate on \(4\) distinct multi-modal semantic segmentation tasks, including **RGB**, **RGB-Depth**, **RGB-Normal**, and **RGB-HHA**, and we reach a
Figure 1: Overview of our multi-modal panoramic segmentation architecture. The inputs are a combination of **RGB**, **Depth**, and **N**ormals.
state-of-the-art \(60.60\%\) mIoU with **RGB-H**HA semantic segmentation. For situations where HHA2 is not accessible, we propose a tri-modal fusion architecture and achieve top mIoU of \(75.86\%\) on Structured3D [44] (RGB-D-N) and \(39.26\%\) on Matterport3D [5] (RGB-D-N). The performance of our system on the aforementioned indoor panoramic-view datasets is shown in Fig. 2.
Footnote 2: **H**orizontal disparity, **H**eight above ground, and normal **A**ngle to the vertical axis [16]
In summary, we provide the following contributions:
1. We investigate multi-modal panoramic semantic segmentation in four types of sensory data combinations for the first time.
2. We explore the multi-modal fusion paradigm in this study and introduce the tri-modal paradigm with cross-modal interactions for exploring texture, depth, and geometry information in panoramas.
3. On three indoor panoramic datasets that include RGB, Depth, Normal, and HHA sensor data combinations, our technique provides state-of-the-art performance.
## 2 Related Work
**Semantic segmentation** An encoder-decoder paradigm with two stages is typically used in existing semantic segmentation designs [3, 8]. In the earlier stage, a backbone _encoder_ module [15, 17, 36] creates a series of feature maps in order to capture high-level semantic information. Later, a _decoder_ module gradually recovers the spatial detail from the feature maps. Recent research has focused on replacing convolutional backbones with transformer-based ones in light of the success of the vision transformer (ViT) in image classification [12]. Early studies mostly concentrated on the Transformer encoder design [9, 23, 45, 33], while later work avoided sophisticated decoders in favor of a lightweight All-MLP architecture [35], which produced results with improved efficiency, accuracy, and robustness.
**Panoramic segmentation** Early methods for interpreting a picture holistically centered on using perspective image-based models in conjunction with distortion-mitigated wide field-of-view images. Eder _et al_. [13] propose a tangent-image spherical representation based on distortion-mitigated, locally planar image grids tangent to a subdivided icosahedron. Lee _et al_. [21], on the other hand, use a spherical polyhedron to represent comparable omnidirectional views. Recent studies [25], however, use distortion-aware modules in the network architecture to operate directly on the equirectangular representation. Sun _et al_. [30] propose an efficient height compression module for latent feature representation, followed by a discrete transformation for predicting dense features. In the same line of research, Zheng _et al_. [46] combine the complementary horizontal and vertical representations to improve the receptive field and learn the distortion distribution beforehand. In an encoder-decoder framework, Shen _et al_. [28] introduce a new panoramic transformer block to replace the conventional block. Modern panoramic distortion-aware and deformable modules [10] have been added to the state-of-the-art UNet [27] and SegFormer [35] segmentation architectures to improve their performance in the spherical domain [40, 25, 41, 14].
**Multimodal semantic segmentation** Fusion strategies leverage the advantages of several data sources and show notable performance improvements for image-based semantic segmentation [7, 18]. The key contributions for comprehending **RGB-D** scenes concentrated on: 1) creating new layers or operators based on the geometric properties of **RGB-D** data [4, 7, 32], and 2) creating specialized architectures for combining the complementary data streams at various stages [18, 20, 28, 30]. When modalities other than depth maps are employed, these approaches perform less well because they were created exclusively for the **RGB-D** modality [42]. Recent studies have concentrated on establishing unique fusion algorithms for **RGB-X** semantic segmentation that are adaptable across various sensing modality combinations [34, 39, 22]. In the omnidirectional realm, however, the integration of several modalities with cross-modal interactions is still an unresolved issue. The main challenge in this scenario is to recognize the distorted and deformed geometric structures in ultra-wide \(360\)-degree images while taking advantage of a variety of comprehensive complementary information. To jointly use the many sources of information from **RGB**, **D**epth, and **N**ormals equirectangular images, we propose our framework, which makes use of cross-modal interactions and panoramic perception abilities.
## 3 Methodology
Section 3.1 provides a summary of the framework we propose for panoramic multi-modal semantic segmentation.
Figure 2: Our cross-modal panoramic segmentation results with **RGB**, **Depth**, **N**ormals, and **H**HA combinations from Stanford2D3DS (_left_), Structure3D (_middle_) and Matterport3D (_right_) datasets.
Although our framework may be used for bi-modal and tri-modal input scenarios, for simplicity we explain only the _encoder_ and _decoder_ architecture design for cross-modal (**RGB-Depth-N**ormals) panorama segmentation in Sec. 3.2 and Sec. 3.3, respectively. Our design is based on Trans4PASS+ [41] and uses an extension of CMX [22] for ternary modal feature-stream extraction and fusion to learn object deformations and panoramic image distortions. To keep the notation simple, we use \(\mathbf{f}\) to represent multi-modal feature maps, \(\mathbf{f}\in\{\mathbf{f}_{rgb},\mathbf{f}_{depth},\mathbf{f}_{normal}\}\), and drop the stage index \(l\) for inputs and outputs of network modules in the \(l\)-th encoder-decoder stage.
### Framework Overview
Following Xie _et al_. [35], we propose the multi-modal panoramic segmentation architecture depicted in Fig. 1. The \(H\times W\times 3\) input image is first separated into patches. We provide panoramic hierarchical encoder stages to address the severe distortions in panoramas while allowing cross-modal interactions between **RGB-Depth**-Normals patch features, as described in Sec. 3.2. The encoder uses these patches as input to produce multi-level features at resolutions of \(\{1/4,1/8,1/16,1/32\}\) of the original image. Finally, our panoramic decoder (see Sec. 3.3) receives these multi-level features in order to predict the segmentation mask at a \(H\times W\times N_{class}\) resolution, where \(N_{class}\) is the number of object categories.
### Panoramic Hierarchical Encoding
Each stage of our encoding process for extracting hierarchical characteristics is specifically designed and optimized for semantic segmentation. Figure 3 illustrates how our architecture incorporates recently proposed Cross-modal Feature Rectification (CM-FRM) and Feature Fusion (FFM) modules [22] as well as Deformable Patch Embeddings (DPE) module [40] to deal with the severe distortions in **RGB**, **D**epth, and **N**ormals panoramas caused by equirectangular representation.
**Deformable patch embedding** A typical Patch Embeddings (PE) module [12, 35] divides an input image or feature map \(\mathbf{f}\in\mathbb{R}^{H\times W\times C_{in}}\) into a flattened 2D sequence of patches, each of shape \(s\times s\). Within a patch, the position offset with respect to a location \((i,j)\) is defined as \(\mathbf{\Delta}_{(i,j)}\in\left[\frac{-s}{2},\frac{s}{2}\right]\times\left[ \frac{-s}{2},\frac{s}{2}\right]\), where \((i,j)\in[1,s]\). However, these fixed sampling points fail to learn deformation-aware features and do not respect object shape distortions. To learn a data-dependent offset, we deploy the Deformable Patch Embeddings (DPE) module proposed by Zhang _et al_. [40]. We formulate Eq. (1) using the deformable convolution operation \(g(.)\)[10] with a hyperparameter of \(r=4\).
\[\mathbf{\Delta}_{(i,j)}^{DPE}=\begin{bmatrix}min(max(-\frac{H}{r},g(\mathbf{f })_{(i,j)}),\frac{H}{r})\\ min(max(-\frac{W}{r},g(\mathbf{f})_{(i,j)}),\frac{W}{r})\end{bmatrix} \tag{1}\]
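A minimal PyTorch sketch of the offset clamping in Eq. (1) is given below; for brevity, the offset-predicting operation \(g(.)\) is approximated here by a plain convolution rather than the deformable convolution of [10], so the module is illustrative only:

```python
import torch
import torch.nn as nn

class ClampedOffsetPredictor(nn.Module):
    """Sketch of the offset branch in a deformable patch embedding: a small
    conv (standing in for g(.)) predicts per-location (dy, dx) offsets, which
    are clamped to [-H/r, H/r] x [-W/r, W/r] as in Eq. (1)."""
    def __init__(self, in_ch: int, r: int = 4):
        super().__init__()
        self.r = r
        self.offset_conv = nn.Conv2d(in_ch, 2, kernel_size=3, padding=1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        _, _, H, W = f.shape                      # channels-first feature map
        dy, dx = self.offset_conv(f).chunk(2, dim=1)
        dy = dy.clamp(-H / self.r, H / self.r)
        dx = dx.clamp(-W / self.r, W / self.r)
        return torch.cat([dy, dx], dim=1)         # offsets fed to a deformable sampler

offsets = ClampedOffsetPredictor(in_ch=64)(torch.randn(1, 64, 128, 256))
```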
**Cross-modal feature rectification** Noisy measurements are frequently present in the data from the various complementary sensor modalities. By utilizing features from a different modality, the noisy information can be filtered and calibrated. To this end, Liu _et al_. [22] present a novel Cross-Modal Feature Rectification Module (CM-FRM) to perform feature rectification between parallel streams at each stage, throughout the feature extraction process. In our work, we expand this calibration scheme using ternary features from the **RGB**, **D**epth, and **N**ormals panorama streams, as seen in Fig. 4. Our two-stage CM-FRM processes the input features channel- and spatial-wise to address noise and uncertainties in the **RGB-Depth**-**N**ormals modalities, providing a comprehensive calibration for improved multi-modal feature extraction and interaction. While the spatial-wise rectification stage focuses on local calibration, the channel-wise rectification stage is
Figure 4: _Cross-modal feature rectification module_ to calibrate **RGB**, **D**epth, and **N**ormals features.
Figure 3: _Panoramic encoder stage_ to extract **RGB**, **D**epth, and **N**ormals features.
more concerned with global calibration. The hyperparameters \(\lambda_{c}=\lambda_{s}=0.5\) are used to rectify the noisy multi-modal input features, as shown in Eq. (2), using the channel \(\textbf{f}_{channel}^{rec}\) and spatial \(\textbf{f}_{spatial}^{rec}\) weights that have been obtained.
\[\textbf{f}^{rec}=\textbf{f}+\lambda_{c}\textbf{f}_{channel}^{rec}+\lambda_{s} \textbf{f}_{spatial}^{rec} \tag{2}\]
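For concreteness, the rectification of Eq. (2) can be written as the following minimal PyTorch sketch; how \(\textbf{f}_{channel}^{rec}\) and \(\textbf{f}_{spatial}^{rec}\) are computed from the complementary modalities is omitted here, so the zero tensors in the usage line are placeholders rather than the actual CM-FRM weights:

```python
import torch

def cmfrm_rectify(f: torch.Tensor,
                  f_channel_rec: torch.Tensor,
                  f_spatial_rec: torch.Tensor,
                  lam_c: float = 0.5,
                  lam_s: float = 0.5) -> torch.Tensor:
    # Eq. (2): add the channel-wise and spatial-wise rectification terms
    # (derived from the other modalities) back onto the noisy input features.
    return f + lam_c * f_channel_rec + lam_s * f_spatial_rec

# Applied symmetrically to each of the RGB / Depth / Normals streams:
f_rgb = torch.randn(1, 64, 128, 256)
f_rgb_rec = cmfrm_rectify(f_rgb, torch.zeros_like(f_rgb), torch.zeros_like(f_rgb))
```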
**Cross-modal feature fusion** To improve information interaction and combine the features into a single feature map, the rectified multi-modal feature maps \(\textbf{f}^{rec}\) are passed through a two-stage Feature Fusion Module (FFM) at the end of each encoder stage. As seen in Fig. 5, we use a ternary multi-head cross-attention mechanism to extend the information-sharing stage of Liu _et al_. [22], allowing global information flow between the **RGB**, **D**epth, and **N**ormals modalities. In the fusion stage, a channel embedding [22] is utilized to combine the ternary features into \(\textbf{f}^{fused}\), which is passed to the decoding stage for semantic prediction.
### Panoramic Token Mixer Decoder
The vanilla All-MLP decoder employed in earlier works [35] lacks adaptivity to object deformations, which weakens the token mixing of panoramic data. A novel deformable token mixer, DMLPv2, was proposed by Zhang _et al_. [41] and is demonstrated to be effective and lightweight for both spatial and channel-wise token mixing. We leverage the DMLPv2 token mixer at each \(l\)-th level of our framework, as depicted in Fig. 6, which is denoted as:
\[\hat{\textbf{f}}_{l} =\textbf{DPE}(\textbf{f}_{l}^{fused}) \tag{3}\] \[\hat{\textbf{f}}_{l} =\textbf{P}\textbf{X}(\hat{\textbf{f}}_{l})+\textbf{C}\textbf{X} (\hat{\textbf{f}}_{l})\] (4) \[\hat{\textbf{f}}_{l} =\textbf{DMLP}(\hat{\textbf{f}}_{l})+\textbf{C}\textbf{X}(\hat{ \textbf{f}}_{l})\] (5) \[\textbf{f}_{l}^{decoded} =\textbf{UpSample}(\hat{\textbf{f}}_{l}) \tag{6}\]
The Channel Mixer (CX) of the DMLPv2 considers space-consistent yet channel-wise feature reweighting, strengthening the feature by emphasizing informative channels. Focusing on spatial-wise sampling using fixed and adaptive offsets, respectively, the Pooling Mixer (PX) and Deformable MLP (DMLP) are used in DMLPv2. The non-parametric Pooling Mixer (PX) is implemented by an average pooling operator. The adaptive data-dependent spatial offset \(\mathbf{\Delta}_{(i,j,c)}^{DMLP}\) is predicted channel-wise.
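As a schematic illustration of Eqs. (3)-(6), one decoder stage can be sketched as follows; the DPE, CX, and DMLP blocks are replaced by simple convolutions for brevity, so this is a structural sketch rather than the actual deformable modules:

```python
import torch
import torch.nn as nn

class DecoderStageSketch(nn.Module):
    """Structural sketch of one decoder stage (Eqs. (3)-(6)). The DPE, CX and
    DMLP blocks are stand-ins (plain convolutions), not the deformable modules
    of the actual architecture."""
    def __init__(self, ch: int, out_size):
        super().__init__()
        self.dpe = nn.Conv2d(ch, ch, kernel_size=3, padding=1)       # stand-in for DPE
        self.px = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)   # pooling mixer
        self.cx = nn.Conv2d(ch, ch, kernel_size=1)                   # channel mixer
        self.dmlp = nn.Conv2d(ch, ch, kernel_size=1)                 # stand-in for DMLP
        self.up = nn.Upsample(size=out_size, mode="bilinear", align_corners=False)

    def forward(self, f_fused: torch.Tensor) -> torch.Tensor:
        f = self.dpe(f_fused)              # Eq. (3)
        f = self.px(f) + self.cx(f)        # Eq. (4)
        f = self.dmlp(f) + self.cx(f)      # Eq. (5)
        return self.up(f)                  # Eq. (6)

decoded = DecoderStageSketch(ch=64, out_size=(128, 256))(torch.randn(1, 64, 32, 64))
```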
Finally, to output the prediction for the \(N_{class}\) semantic masks, the decoded features from the four stages are concatenated and given to a segmentation head module, as depicted in Fig. 1.
## 4 Experiments
### Datasets
To evaluate our proposed cross-modal framework in indoor settings, we use three multi-modal equirectangular semantic segmentation datasets. In each of our tests, we resize the input image to \(512\times 1024\) and then compute evaluation metrics, namely Mean Region Intersection Over Union (mIoU), Pixel Accuracy (aAcc), and Mean Accuracy (mAcc), using the MMSegmentation IoU script3.
Footnote 3: [https://mmsegmentation.readthedocs.io/en/0.x/](https://mmsegmentation.readthedocs.io/en/0.x/)
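For reference, these metrics can be computed from a confusion matrix as in the following NumPy sketch (an illustrative re-implementation, not the MMSegmentation script cited above):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes, ignore_index=255):
    """Confusion-matrix based mIoU / aAcc / mAcc (illustrative sketch)."""
    mask = gt != ignore_index
    cm = np.bincount(num_classes * gt[mask].astype(int) + pred[mask].astype(int),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm)                          # true positives per class
    union = cm.sum(0) + cm.sum(1) - inter        # predicted + labeled - TP
    with np.errstate(divide="ignore", invalid="ignore"):
        iou = inter / union
        class_acc = inter / cm.sum(1)
    return np.nanmean(iou), inter.sum() / cm.sum(), np.nanmean(class_acc)

gt = np.random.randint(0, 13, size=(512, 1024))    # e.g. 13 Stanford2D3DS classes
pred = np.random.randint(0, 13, size=(512, 1024))
miou, aacc, macc = segmentation_metrics(pred, gt, num_classes=13)
```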
**Stanford2D3DS dataset**[2] contains \(1713\) multi-modal equirectangular images with \(13\) object categories. We split the data from area_1 to area_6 for training and validation in a manner similar to Armeni [2], using a 3-fold cross-validation scheme, and we give the mean values across the folds. Furthermore, the publicly accessible code4 is used to compute the panoramic HHA [16] modality using the appropriate depth and camera parameters.
Footnote 4: [https://github.com/charlesCXK/Depth2HHA-python](https://github.com/charlesCXK/Depth2HHA-python)
**Structured3D dataset**[44] offers \(40\) NYU-Depth-v2 [29] object categories, \(196515\) synthetic, multi-modal, equirectangular images with a variety of lighting setups. In line with Zheng [44], we establish typical training, validation, and test splits as follows: scene_00000 to scene_02999 for training, scene_03000 to scene_03249 for validation, and scene_03250 to scene_03499 for testing. For
Figure 5: _Cross-modal feature fusion module_ to fuse **RGB**, **D**epth, and **N**ormals features.
Figure 6: _Panoramic decoder stage_ with fused features from **RGB**, **D**epth, and **N**ormals modalities.
all of the tests we conduct, we use rendered raw lighting images with full furniture arrangements.
**Matterport3D dataset**[5] The \(10800\) panoramic views in the Matterport3D [5] collection are represented by \(18\) viewpoints per image frame, necessitating an explicit conversion to an equirectangular format. In addition, the associated semantic annotations are spread among four files (xxx.house, xxx.ply, xxx.fsegs.json, and xxx.semseg.json). We employ the open-source matterport_utils5 code for post-processing, where the _mpview_ script is used to produce annotation images and the _preparepano_ script is used to stitch the \(18\) captured images into a \(360\)-degree panorama. For our trials using the \(40\) object categories, we created our own training, validation, and test splits; refer to the appendix.
Footnote 5: [https://github.com/atlantis-ar/matterport_utils](https://github.com/atlantis-ar/matterport_utils)
### Implementation Details
With an initial learning rate of \(6\)e-\(5\), scheduled by the poly strategy with power \(0.9\) over the training epochs, we train our models using a pre-trained SegFormer MiT-B26 RGB backbone on an RTXA6000 GPU. We train for \(200\), \(50\), and \(100\) epochs for the Stanford2D3DS [2], Structured3D [44], and Matterport3D [5] experiments, respectively. The AdamW optimizer [19] is employed with the following parameters: batch size \(4\), epsilon \(1\)e-\(8\), weight decay \(1\)e-\(2\), and betas \((0.9,0.999)\). Random horizontal flipping, random scaling to scales of \(\{0.5,0.75,1,1.25,1.5,1.75\}\), and random cropping to \(512\times 512\) are used for image augmentation. The Deformable Patch Embedding (DPE) module (see Sec. 3.2) is used for panoramic encoder stage 1, and a conventional Overlapping Patch Embedding (OPE) module [35] for the other stages of our framework. More specific settings are described in detail in the appendix.
Footnote 6: [https://github.com/huaaaliu/RGBX_Semantic_Segmentation](https://github.com/huaaaliu/RGBX_Semantic_Segmentation)
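The optimizer and schedule described above can be configured roughly as in the sketch below (an illustrative PyTorch configuration matching the listed hyperparameters, not the actual training script):

```python
import torch

def build_optimizer(model, epochs, base_lr=6e-5, power=0.9):
    # AdamW with the hyperparameters listed above; the poly schedule decays the
    # learning rate as (1 - epoch/epochs)**power when stepped once per epoch.
    optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr,
                                  betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-2)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: (1.0 - epoch / epochs) ** power)
    return optimizer, scheduler

# e.g. epochs = 200 (Stanford2D3DS), 50 (Structured3D) or 100 (Matterport3D);
# call scheduler.step() once per training epoch.
```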
We conducted our tests for the following fusion configurations: **RGB**-only, **RGB**-**D**epth, **RGB**-**N**ormal, **RGB**-**H**HA, **RGB**-**D**epth-**N**ormal, **RGB**-**D**epth-**H**HA, and **RGB**-**N**ormal-**H**HA. Based on these combinations, we use only the required pathways and modules in our encoding-decoding stages and skip any unnecessary parts of our framework. For example, in the CM-FRM and FFM modules discussed in Sec. 3.2, we employ bi-directional features for cross-modal interactions in the **RGB**-**D**epth scenario, whereas for the **RGB**-**D**epth-**N**ormal case we use routes that lead to tri-directional interactions across the features.
### Experiment Results and Analysis
We carry out comprehensive tests on multimodal segmentation datasets for indoor settings to demonstrate the effectiveness of our proposed cross-modal fusion architecture using panoramas. We employ the aforementioned training epochs, random crop size, and batch size to compare our method against the current state-of-the-art approaches Trans4PASS+ [41], HoHoNet [30], PanoFormer [28], CMNeXt [39], and TokenFusion [34]. For a detailed description of their implementation, see the corresponding works. While all other approaches have been reproduced under the conditions of our experiments, the CBFC [46] and Tangent [13] results described here are taken from the corresponding original papers. Figure 2, Figure 7, and Figure 8, as well as Table 1 and Table 2, present the quantitative results and comparisons to the state of the art.
respectively. Additionally, by combining depth and normals data, we were able to outperform the benchmark results for (validation, test) by \((+1.84,+1.83)\) for **RGB-Depth**, \((+2.44,+2.66)\) for **RGB-N**ormals, and \((+3.92,+3.63)\) for **RGB-Depth**-**N**ormals fusion.
**Results on Matterport3D** Table 2 shows further trials using Matterport3D [5] with comparable **RGB**, **D**epth, and **N**ormals combinations in addition to the Structured3D [44] dataset. Our method outperforms the current panoramic techniques in this case for both **RGB**-only and **RGB-Depth** based semantic segmentation. Compared to the benchmark, our (validation, test) mIoU values for **RGB**-only and **RGB-Depth** are \((35.15\%,31.30\%)\) and \((39.19\%,35.92\%)\), respectively. However, we discovered that fusing with normals did not result in the expected improvement in performance, as demonstrated in the other tests: \((38.91\%,35.92\%)\) for **RGB-Normal** and \((39.26\%,35.52\%)\) for **RGB-Depth**-**N**ormal. Our hypothesis is that the depth and normals data provide only limited modal differences, and thus the additional modality may be unnecessary.
### Qualitative Analysis
The segmentation outcomes of the panoramic techniques are shown in Fig. 7, which displays the findings from left to right and from top to bottom across several indoor datasets. Overall, our approach is able to take advantage of depth and geometry data as well as textures from the **RGB**, **D**epth and **N**ormal modalities and correctly identifies object semantics with a higher level of accuracy. While our baseline Trans4PASS+ [41] accurately classifies the bookshelf, sofa, and chair in the first row, the architecture was unable to predict their exact geometrical shapes. Using depth information, PanoFormer [28] and HoHoNet [30] were able to estimate the exact geometry of the chair and bookshelf; however, the former method incorrectly predicted the object class of the sofa. The third-row findings for the **RGB**-only and **RGB-Depth** based techniques show a similar trend. When compared to current state-of-the-art baselines, our method consistently predicted geometric shapes that were considerably clearer and had precise object semantics in these situations. The approach can even handle thin structures like the neck of a guitar and items on a dining table, as shown in the second row.
The qualitative results of different Stanford2D3DS [2] multi-modal combinations, including **RGB**-only, **RGB-Depth**, **RGB-Normal**, **RGB-H**HA, and **RGB-Depth-N**ormal, obtained with our framework are shown in Fig. 8. While using complementary data from other modalities is advantageous in the scenarios shown in Fig. 8 (a) and Fig. 8 (b), this may not always be the case when the model cannot tell the difference between the distorted door and the wall (Fig. 8 (c)), or the distorted door and the bookshelf (Fig. 8 (d)). We hypothesize that these failure cases result from the ambiguity of the scene objects, which makes them difficult to distinguish using any of the available modalities.
### Ablation Studies
In the context of panoramic semantic segmentation, we investigated the state-of-the-art fusion architectures CMX [22], CMNeXt [39], and TokenFusion [34]. Our architecture is inspired by CMX [22] and was expanded to the tri-modal panorama scenario. In order to address panorama distortions, Deformable Patch Embedding (DPE) modules, which are detailed in Sec. 3.2, are added to these encoders' backbones. The stages of the panoramic decoder, as defined in Sec. 3.3, remain unchanged. We employ two versions of CMNeXt [39], one with and one without a Self-Query Hub (SQ-Hub); the former version is reported to handle up to \(81\) modalities with minimal overhead and processing demands. Furthermore, the SQ-Hub is expected to soft-select informative features while remaining robust to sensor failure.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Modal**} & \multicolumn{2}{c}{**Structured3D**} & \multicolumn{2}{c}{**Matterport3D**} \\ & & **Validation mIoU (\%)** & **Test mIoU (\%)** & **Validation mIoU (\%)** & **Test mIoU (\%)** \\ \hline \hline Trans4PASS+ [41] & & \(66.74\) & \(66.90\) & \(33.43\) & \(29.19\) \\ HoHoNet [30] & RGB & \(66.09\) & \(64.41\) & \(31.91\) & \(29.33\) \\ PanoFormer [28] & \(55.57\) & \(54.87\) & \(30.04\) & \(26.87\) \\ _OURS_ & & \(71.94\) & \(68.34\) & \(35.15\) & \(31.30\) \\ \hline HoHoNet [30] & & \(69.51\) & \(66.99\) & \(35.36\) & \(32.02\) \\ PanoFormer [28] & RGB-D & \(60.98\) & \(59.27\) & \(33.99\) & \(31.23\) \\ _OURS_ & & \(73.78\) & \(70.17\) & \(39.19\) & \(\mathbf{35.92}\) \\ \hline \multirow{2}{*}{_OURS_} & RGB-N & \(74.38\) & \(71.00\) & \(38.91\) & \(35.77\) \\ & RGB-D-N & \(\mathbf{75.86}\) & \(\mathbf{71.97}\) & \(\mathbf{39.26}\) & \(35.52\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on Structured3D [44] and Matterport3D [5] datasets.
Figure 8: Visualization of semantic segmentation results for our framework using Stanford2D3DS [2] for **RGB**-only, **RGB**-**Depth, **RGB**-**N**ormals, **RGB**-**H**HA, and **RGB**-**Depth-**N**ormals (top-to-bottom) combinations. By utilizing complementary traits, our method was successful in identifying deformed and visually identical building structures like doors in columns (a) and (b). Under ambiguity, we were unable to differentiate between the distorted door and the wall or the deformed door and the bookcase in columns (c) and (d), respectively.
Figure 7: Results of multi-modal panoramic semantic segmentation for the **RGB**-only, **RGB**-**Depth, and **RGB**-**Depth-**N**ormals methods are visualized. For **RGB** segmentation, we use Trans4PASS+ [41] baseline, which employs the same SegFormer MiT-B2 backbone [35] with Deformable Patch Embeddings (DPE) and DMLPV2 decoder as ours, as detailed in Sec. 3.3. PanoFormer [28] uses a cutting-edge panoramic transformer-based architecture for **RGB**-**Depth segmentation, while HoHoNet [30] is built on pre-trained ResNet-101 [17] in conjunction with a sophisticated horizon-to-dense module. Our strategy leverages **RGB**-**Depth-**N**ormal fusion to improve performance by utilizing all available features.
Table 3 compares \(\{\)**RGB-**Depth, **RGB-**Normals, and **RGB-HHA \(\}\)** bi-modal fusion, \(\{\)**RGB-Depth-Normal, **RGB-Depth-HHA**, and **RGB-Normal-HHA \(\}\)** tri-modal fusion, and \(\{\)**RGB-Depth-Normal-HHA \(\}\) quad-modal fusion. Overall, the CMX [22] technique we adopted performed best. Our methodology with Token-Fusion [34] for feature extraction and fusion performs well on the Matterport3D [5] dataset, although it lags by a wider margin on Stanford2D3DS [2] and Structured3D [44]. Thanks to the Self-Query Hub (SQ-Hub), our approach using encoded features from CMNeXt [39] performs comparably across datasets with less computational overhead. However, in the majority of our panoramic trials, we observed similar outcomes without the SQ-Hub.
## 5 Conclusion
In this work, we revisit multi-modal semantic segmentation at the pixel level for holistic scene understanding. We present a framework with distortion awareness and cross-modal interactions built on a cutting-edge panoramic encoder design. Our encoder learns severe object deformations and panoramic image distortions in equirectangular representations, and leverages feature interaction and feature fusion for cross-modal global reasoning in RGB-X panoramic segmentation. Our architecture produces superior performance on indoor panoramic benchmarks using RGB-Depth, RGB-Normal, and RGB-HHA combinations. Furthermore, we rebuild our cross-modal panoramic encoder to learn texture, disparity, and geometric features using tri-modal (RGB-Depth-Normals) fusion, hence removing the requirement to compute HHA representations while maintaining the same performance. One major drawback of our method is that having two or more input streams active at once typically results in a large rise in complexity; refer to the appendix. In the future, we will look for techniques to combine multi-modal panoramas and 3D LiDAR data with the least amount of processing effort possible.
**Acknowledgement.** This work was partially funded by the EU Horizon Europe Framework Program under grant agreement 101058236 (HumanTech).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Modal**} & \multicolumn{2}{c}{**Stanford2D3DS**[2]} & \multicolumn{2}{c}{**Structured3D**[44]} & \multicolumn{2}{c}{**Matterport3D**[5]} \\ & & **mIoU (\%)** & **mAcc (\%)** & **mIoU (\%)** & **mAcc (\%)** & **mIoU (\%)** & **mIoU (\%)** & **mAcc (\%)** \\ \hline \hline _OURS_ - TokenFusion [34] & & 58.88 & 68.57 & 62.58 & 70.54 & **36.48** & 49.30 \\ _OURS_ - CMNeXt (S) [39] & & 56.49 & 66.27 & 68.35 & 76.54 & 35.38 & 49.71 \\ _OURS_ - CMNeXt [39] & & 54.27 & 64.13 & 69.31 & 78.12 & 34.99 & 49.42 \\ _OURS_ & & 55.49 & 66.02 & 70.17 & 77.88 & 35.92 & 49.24 \\ _OURS_ - TokenFusion [34] & & 57.86 & 67.39 & 62.76 & 70.91 & 35.71 & 48.92 \\ _OURS_ - CMNeXt (S) [39] & & 53.61 & 63.26 & 68.47 & 76.82 & 33.10 & 46.32 \\ _OURS_ - CMNeXt [39] & & 50.47 & 60.83 & 68.62 & 76.99 & 33.80 & 47.02 \\ _OURS_ & & 58.24 & 68.79 & 71.00 & 78.68 & 35.77 & **50.39** \\ \hline _OURS_ - TokenFusion [34] & & 59.06 & 68.07 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt (S) [39] & & 55.70 & 65.79 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt [39] & & 52.48 & 62.78 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ & & **60.60** & **70.68** & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline _OURS_ - CMNeXt (S) [39] & & 57.62 & 67.80 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt [39] & & RGB-D-H & 54.54 & 64.22 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ & & 59.99 & 70.44 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt (S) [39] & & 55.72 & 65.86 & 69.55 & 77.50 & 35.18 & 49.79 \\ _OURS_ - CMNeXt [39] & & 54.65 & 64.53 & 69.11 & 77.54 & 35.55 & 50.09 \\ _OURS_ & & 59.43 & 69.03 & **71.97** & **79.67** & 35.52 & 50.01 \\ \hline _OURS_ - CMNeXt (S) [39] & & 55.45 & 65.24 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt [39] & & RGB-N-H & 52.50 & 62.19 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ & & 60.24 & 70.62 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt (S) [39] & & 55.55 & 65.33 & \(-\) & \(-\) & \(-\) & \(-\) \\ _OURS_ - CMNeXt [39] & & 54.48 & 64.21 & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: An analysis of the various cross-modal fusion techniques applied to the encoder stages of our multi-modal panoramic architecture.
## Appendix A Experimentation details
### Matterport3D dataset
To divide the \(10800\) panoramic equirectangular images in the Matterport3D [5] dataset, we create standard training, evaluation, and test splits. The \(90\) building-scale scenarios, which included a range of scene types like residences, offices, and churches, were divided into an \(80\)-\(10\)-\(10\) split. For all our segmentation experiments using the \(40\) object categories, we use these training, validation, and test splits.
## Appendix B Qualitative analysis
### Multi-modal panoramic semantic segmentation
Figure 10 and Figure 9, which come from the Stanford2D3DS [2] evaluation set and the Structured3D [44] test set, respectively, show further qualitative comparisons between various fusion combinations for our proposed framework. In Fig. 10 (a) and (b), our tri-modal model (**RGB-D-N**) is able to give better segmentation results in the categories denoted by the black dashed rectangles, such as the _Door_, _Window_, and _Bookshelf_, while the baseline (**RGB**-only) model struggles to recognize these significantly distorted objects. The **RGB**-only baseline models wrongly segment the _Door_ in Fig. 9 (c) as a part of the _Wall_. Our tri-modal model (**RGB-D-N**) in this case achieves the correct segmentation with greater accuracy than the **RGB-D** techniques. The same observations apply to the _Cabinet_ in Fig. 9 (a) and the support between the _Bed_ and _Cabinet_ in Fig. 9 (b). Compared to other approaches, Fig. 9 (d) shows a better segmentation result from our multi-modal (**RGB-D-N**) model, along with precise geometric shapes for the objects placed inside the _Cabinet_ structure. However, due to visual ambiguity, the category is incorrectly predicted by all models.
Figure 9: Structured3D [44] segmentation visualizations. Zoom in for a better view. |
2301.12524 | Lepton portal dark matter at muon colliders: Total rates and generic
features for phenomenologically viable scenarios | Lepton portal dark matter (DM) models are a class of models where the DM
candidates solely couple to charged leptons through a mediator carrying a
lepton number. These models are very interesting since they avoid constraints
from direct detection experiments even for coupling of order ${\cal O}(1)$,
they have small annihilation cross sections, and can be probed efficiently at
lepton colliders. In this work, we consider a minimal lepton portal DM model
which consists of extending the SM with two $SU(2)_L$ singlets: a charged
scalar singlet and an electrically neutral right-handed fermion. We
systematically study the production mechanisms of DM at multi-TeV muon
colliders. After considering all the possible theoretical and experimental
constraints and studying the phenomenology of lepton flavour violation and DM
in the muon-philic scenario, we analyse the production rates of 54 channels (26
channels for prompt DM production and 28 channels for charged scalar
production) at multi-TeV muon colliders. Finally, we discuss the possible
collider signatures of some channels and the corresponding backgrounds. We find
that at least 9 channels for DM production can be very efficient in testing DM
with masses up to about $1$ TeV. | Adil Jueid, Salah Nasri | 2023-01-29T19:38:44Z | http://arxiv.org/abs/2301.12524v1 | Lepton portal dark matter at muon colliders: Total rates and generic features for phenomenologically viable scenarios
###### Abstract
Lepton portal dark matter (DM) models are a class of models where the DM candidates solely couple to charged leptons through a mediator carrying a lepton number. These models are very interesting since they avoid constraints from direct detection experiments even for coupling of order \(\mathcal{O}(1)\), they have small annihilation cross sections, and can be probed efficiently at lepton colliders. In this work, we consider a minimal lepton portal DM model which consists of extending the SM with two \(SU(2)_{L}\) singlets: a charged scalar singlet and an electrically neutral right-handed fermion. We systematically study the production mechanisms of DM at multi-TeV muon colliders. After considering all the possible theoretical and experimental constraints and studying the phenomenology of lepton flavour violation and DM in the muon-philic scenario, we analyse the production rates of 54 channels (26 channels for prompt DM production and 28 channels for charged scalar production) at multi-TeV muon colliders. Finally, we discuss the possible collider signatures of some channels and the corresponding backgrounds. We find that at least 9 channels for DM production can be very efficient in testing DM with masses up to about 1 TeV.
+
Footnote β : preprint: CTPU-PTC-2023-02
## I Introduction
Supported by various astrophysical and cosmological observations, it is now widely accepted that dark matter (DM) exists in the universe (see _e.g._[1; 2; 3; 4] for comprehensive reviews). On the other hand, measurements of the anisotropies in the cosmic microwave background (CMB) imply that DM is the dominant component of the matter budget in the universe, with a density of \(\Omega_{\rm DM}h^{2}=0.1198\pm 0.0015\)[5]. The standard theories of structure formation require that the DM be non-relativistic at matter-radiation equality. In particle physics models, this can be easily realised by extending the SM with weakly interacting massive particles (WIMPs) under the standard thermal freeze-out mechanism. The search for WIMPs was one of the major programmes at the Large Hadron Collider (LHC). A special characteristic of WIMP production at the LHC is that one can probe it through the recoil of a SM particle against a large missing transverse energy (\(E_{T}^{\rm miss}\)). Examples of these processes are mono-jet [6], mono-\(Z\)[7; 8] or mono-Higgs [9] among others. Unfortunately, various searches for WIMPs at the LHC were unsuccessful in finding such signals, and limits were placed on the production cross section versus the DM mass [10; 11; 12; 13; 14], which were interpreted in various particle physics realizations. Furthermore, these constraints become even more stringent when the null results from direct detection experiments [15; 16] are included [17; 18]. The situation is not very different when DM production is mediated by colored mediators or leptoquarks, with the main mechanisms for the DM density in the early universe being the co-annihilation or conversion-driven freeze-out mechanisms [19; 20; 21; 22; 23]. The interpretation of these searches excludes DM masses of about 0.1-1 TeV and mediator masses of about 0.5-5 TeV, depending on the theoretical model.
In light of this situation, an important question arises: what if DM couples only to the lepton sector? From the theoretical standpoint, there is _a priori_ no fundamental principle that prevents DM from coupling to leptons only. This class of models was proposed some time ago in ref. [24] and has been widely studied in the literature [25; 26; 27; 28; 29; 30; 31]. These models have several interesting implications. First, the scattering of the DM off the nucleus is induced at the one-loop order, and therefore these models can easily evade direct detection constraints even for model parameters of order \(\mathcal{O}(1)\). Second, except for electron-philic scenarios, constraints from positron indirect detection searches are also not important since the annihilation is dominated by \(p\)-wave amplitudes which are suppressed by the square of the DM velocity. Finally, the DM can be produced at the LHC through the decay of charged scalars, and therefore the corresponding bounds are not as strong as in the case of mono-X searches, especially in the case of \(SU(2)_{L}\) gauge singlet mediators [29]. Therefore, an efficient probe of this category of models is through leptonic colliders such as the International Linear Collider (ILC), the Circular Electron Positron Collider (CEPC), and future muon colliders. Recently, future muon colliders have been attracting considerable interest due to their capability to probe new physics beyond the SM at very high scales [32; 33; 34], thus competing with the future circular colliders (FCC-hh). On the other hand, these machines can achieve very high energies thanks to the expected excellent cooling systems and the weaker synchrotron radiation. Finally, at very high energies, muon colliders are necessarily vector-boson colliders where the dominant production channels are through vector-boson fusion (VBF) [35; 36]. The phenomenology of both the SM and beyond at muon colliders has been extensively studied in the literature (see _e.g._[37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56] and references therein).
In this work, we study the production of DM at muon colliders within the minimal lepton portal DM model, in which we extend the SM with two \(SU(2)_{L}\) singlets: a charged scalar that plays the role of the mediator and a neutral right-handed fermion (or, equivalently, Majorana particle) that plays the role of the DM candidate. We first comprehensively study the impact of the different theoretical and experimental constraints on the model parameter space in the muon-philic scenario, _i.e._ the scenario where the DM couples predominantly to muons. We then select a few benchmark points that define phenomenologically viable scenarios that can be probed at high energy muon colliders. We study the production cross sections and the expected backgrounds for a set of processes totaling 26 production channels for DM and 28 production channels for the charged singlet scalar. A particular feature of this model is that the DM is a Majorana fermion and therefore does not couple directly to gauge bosons; consequently, the direct production of DM does not receive any contribution from VBF channels. We select a few production channels that have high discovery potential and discuss the possible signatures and the associated backgrounds. This work is an introduction to future projects in which a complete exploration of the model at muon colliders will be performed.
The remainder of this paper is organized as follows. We discuss the model and its UV completion in section II along with the constraints from LEP searches, \(H_{\rm SM}\to\gamma\gamma\) and theoretical constraints. In section III we discuss the constraints from charged lepton flavour violation in \(\ell_{\alpha}\to\ell_{\beta}\gamma\), \(\ell_{\alpha}\to 3\ell_{\beta}\) and \(H_{\rm SM}\to\ell_{\alpha}\bar{\ell}_{\beta}\). A detailed analysis of DM phenomenology in this model is presented in section IV where we discuss the DM relic density, direct detection constraints and Higgs invisible decays. A study of DM production at muon colliders, the interesting signatures and the associated backgrounds is performed in section V. In section VI we study the production of charged scalars at muon colliders. We draw our conclusions in section VII.
## II Theoretical Setup
### The model
We consider a minimal extension of the SM by two gauge singlet fields: a charged scalar (\(S\)) and a right-handed fermion (\(N_{R}\)). We further assume that the two extra singlets are odd under \(Z_{2}\) symmetry while all the SM particles are even; _i.e.,_\(\{S,N_{R}\}\to\{-S,-N_{R}\}\) and \(\{\ell,q,\nu,\Phi,V^{\mu}\}\to\{\ell,q,\nu,\Phi,V^{\mu}\}\). To ensure that the \(N_{R}\) state is a suitable DM candidate within our model, we impose the condition \(M_{N_{R}}<M_{S}\). Furthermore, the charged singlet is assumed to carry a lepton number and therefore couples only to charged leptons.1 The full Lagrangian is given by
Footnote 1: This charged singlet is also called a scalar lepton [57], and the relevant interaction Lagrangian is similar to the interaction of a supersymmetric scalar lepton with a neutralino and a charged lepton. The difference here is that we assume a single charged scalar couples to all the leptons instead of three scalars, usually denoted by \(\tilde{e}_{R},\tilde{\mu}_{R},\text{ and }\tilde{\tau}_{R}\), where each scalar couples to a specific lepton generation.
\[\mathcal{L}=\mathcal{L}_{\rm SM}+\mathcal{L}_{S}-V(\Phi,S), \tag{1}\]
where \(\Phi\) refers to the SM Higgs doublet, \(\mathcal{L}_{S}\) is the interaction Lagrangian for the singlet scalar (including the kinetic term), and \(V(\Phi,S)\) is the scalar potential. The interaction Lagrangian for the \(S\) field is given by
\[\mathcal{L}_{S}=\sum_{\ell=e,\mu,\tau}Y_{\ell N}\overline{\ell}_{R}^{c}SN_{R}+ (\mathcal{D}^{\mu}S)^{\dagger}(\mathcal{D}_{\mu}S)+\text{h.c.}, \tag{2}\]
with \(\mathcal{D}_{\mu}S=(\partial_{\mu}-ig_{2}Y_{S}B_{\mu}/2)S\) being the covariant derivative, \(Y_{S}=2\) is the hypercharge of the scalar singlet and \(g_{2}\) is the \(U(1)_{Y}\) gauge coupling. The kinetic term in equation (2) gives rise to interaction with \(A_{\mu}\) and \(Z_{\mu}\) which are given, after field rotations, by
\[\mathcal{L}_{S;\text{gauge}} = -(eA^{\mu}-e\tan\theta_{W}Z^{\mu})S^{\dagger}\bar{\partial}_{\mu} S+e^{2}A_{\mu}A^{\mu}S^{\dagger}S\] \[+ e^{2}\tan^{2}\theta_{W}Z_{\mu}Z^{\mu}S^{\dagger}S-2e^{2}\tan \theta_{W}A_{\mu}Z^{\mu}S^{\dagger}S,\]
where \(e=\sqrt{4\pi\alpha_{\text{EM}}}\) is the electric charge, \(\theta_{W}\) is the Weinberg mixing angle, and \(A\bar{\partial}_{\mu}B\equiv A(\partial_{\mu}B)-(\partial_{\mu}A)B\). The most general _CP_-conserving, renormalizable and gauge invariant scalar potential is given by
\[V(\Phi,S)=-M_{11}^{2}|\Phi^{\dagger}\Phi|+M_{22}^{2}|S^{\dagger}S|+\lambda_{1}| \Phi^{\dagger}\Phi|^{2}+\lambda_{2}|S^{\dagger}S|^{2}+\lambda_{3}|\Phi^{ \dagger}\Phi||S^{\dagger}S|. \tag{3}\]
All the parameters of the scalar potential are assumed to be real valued as a consequence of _CP_ conservation. The
process of electroweak symmetry breaking leads to three physical scalars: \(H_{\rm SM}\) identified with the recently discovered 125 GeV SM Higgs boson and a pair of charged scalars denoted by \(H^{\pm}\). Their masses are given at the lowest order in perturbation theory by
\[M_{H_{\rm SM}}^{2}=\lambda_{1}v^{2}=-2M_{11}^{2},\quad M_{H^{\pm}}^{2}=M_{22}^{ 2}+\frac{1}{2}\lambda_{3}v^{2}, \tag{4}\]
with \(\upsilon\) being the vacuum expectation value (VEV) of the SM Higgs doublet. This model involves seven additional free parameters which we parametrise as follows
\[\{M_{H^{\pm}},M_{N_{R}},\lambda_{2},\lambda_{3},Y_{eN},Y_{\mu N},Y_{\tau N}\}. \tag{5}\]
For convenience we define the combination of the couplings \(Y_{\ell N}\) by2
Footnote 2: This is equivalent to a definition of a system of spherical coordinates wherein the new parameters are \(Y_{\ell N},\theta\) and \(\varphi\) such that \(\theta\in[0,\pi]\) and \(\varphi\in[0,2\pi]\). The couplings in equation (2) are defined here as \(Y_{eN}=Y_{\ell N}\cos\varphi\sin\theta\), \(Y_{\mu N}=Y_{\ell N}\sin\varphi\sin\theta\) and \(Y_{\tau N}=Y_{\ell N}\cos\theta\).
\[Y_{\ell N}=\sqrt{Y_{eN}^{2}+Y_{\mu N}^{2}+Y_{\tau N}^{2}},\]
which is a very good parametrisation in case the charged leptons are assumed to be massless.
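As a quick numerical illustration of this parametrisation (the values of \(Y_{\ell N}\), \(\theta\) and \(\varphi\) below are arbitrary and are not benchmark choices of this work):

```python
import numpy as np

# Spherical parametrisation of the couplings (footnote 2); arbitrary test values.
Y_lN, theta, phi = 1.0, np.pi / 3, np.pi / 4
Y_eN   = Y_lN * np.cos(phi) * np.sin(theta)
Y_muN  = Y_lN * np.sin(phi) * np.sin(theta)
Y_tauN = Y_lN * np.cos(theta)
# The quadrature sum reproduces Y_lN, as in the definition above.
assert np.isclose(np.sqrt(Y_eN**2 + Y_muN**2 + Y_tauN**2), Y_lN)
```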
### Theoretical and experimental constraints
The parameters of the model in equation (5) are subject to various theoretical and experimental constraints. We start with a brief discussion of the constraints affecting the scalar potential parameters and \(M_{H^{\pm}}\); more details can be found in [29]. The width of the SM Higgs boson is only affected by the rate of its decay to \(\gamma\gamma\). In this model, this process receives new contributions from the charged singlet scalar, which give rise to destructive or constructive contributions depending on the sign of \(\lambda_{3}\)[58; 59; 60]3. In the present work, we have used the most recent ATLAS-CMS combined measurement of \(|\kappa_{\gamma}|\)[61]
Footnote 3: We have found a typo in the analytical expression in ref. [59] which may influence their numerical results.
\[|\kappa_{\gamma}|\equiv\sqrt{\Gamma(H\rightarrow\gamma\gamma)/\Gamma(H \rightarrow\gamma\gamma)_{\rm SM}}=0.87^{+0.14}_{-0.09}.\]
We assume the theoretical prediction to be in agreement with the experimental measurement at the \(2\sigma\) level. We found that the enhancement of \(|\kappa_{\gamma}|\) always occurs for \(\lambda_{3}<0\), which excludes charged scalars with masses up to \(\sim 380\) GeV [60]. For \(\lambda_{3}>0\), we get three possible regimes: _(i)_ a large and negative contribution that implies an enhancement of \(\kappa_{\gamma}\), _(ii)_ a positive but small contribution which makes \(\kappa_{\gamma}\) consistent with the experimental measurement, and _(iii)_ an exact or almost exact cancellation between the \(H^{\pm}\) and \(W\)-boson contributions which makes \(\kappa_{\gamma}\) very small. Therefore, for \(\lambda_{3}>0\), charged singlet masses up to 380 GeV are excluded, except for a small region where the constraints completely vanish.
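For orientation, the \(2\sigma\) window used in this comparison can be obtained from the quoted measurement as in the short sketch below (treating the asymmetric uncertainties naively by simply scaling them, which is an assumption of this illustration only):

```python
# 2-sigma window from |kappa_gamma| = 0.87 (+0.14 / -0.09), treating the
# asymmetric errors naively (illustrative only).
central, err_up, err_down = 0.87, 0.14, 0.09
low, high = central - 2 * err_down, central + 2 * err_up
print(f"2-sigma allowed range: [{low:.2f}, {high:.2f}]")   # [0.69, 1.15]
```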
In addition to the constraints from Higgs decays, the parameters of the scalar potential are subject to a number of theoretical constraints. We note that the bounds on the scalar potential of this model can be obtained from those in _e.g._ the inert doublet model by setting \(\lambda_{4}=\lambda_{5}=0\). In this study, we impose constraints from vacuum stability conditions (or boundedness-from-below) [62], perturbativity, perturbative unitarity [63; 64] and the false vacuum condition [65]. The false vacuum condition plays a very important role in constraining the parameters \(\lambda_{2},\lambda_{3}\) and \(M_{H^{\pm}}\). We get
\[M_{H^{\pm}}^{2}\geqslant\frac{1}{2}\bigg{(}\lambda_{3}v^{2}-M_{H_{\rm SM}}^{2 }\sqrt{\frac{\lambda_{2}}{\lambda_{1}}}\bigg{)}. \tag{6}\]
We found that: _(i)_ \(\lambda_{3}\) cannot be larger than 5 for any charged scalar mass, and _(ii)_ there is a parabola in the plane defined by \(\lambda_{3}\) and \(M_{H^{\pm}}\), which simply tells us that the smaller the minimum allowed value of \(M_{H^{\pm}}\), the smaller the maximum allowed value of \(\lambda_{3}\). These conclusions are mildly dependent on the choice of \(\lambda_{2}\) and, therefore, we choose \(\lambda_{2}=2\) in the remainder of this manuscript without loss of generality.
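The behaviour of the bound in Eq. (6) can be illustrated with the short numerical sketch below (with \(\lambda_{1}\) fixed by the 125 GeV Higgs mass and \(v=246\) GeV; the chosen \(\lambda_{3}\) values are arbitrary illustrations, not benchmark points):

```python
import numpy as np

v, MH_SM, lam2 = 246.0, 125.0, 2.0          # GeV, GeV, scalar quartic
lam1 = MH_SM**2 / v**2                      # fixed by the SM Higgs mass, Eq. (4)
for lam3 in (0.5, 1.0, 2.0, 4.0):
    # Right-hand side of Eq. (6); a negative value means no lower bound on M_H+-
    bound = 0.5 * (lam3 * v**2 - MH_SM**2 * np.sqrt(lam2 / lam1))
    m_min = np.sqrt(bound) if bound > 0 else 0.0
    print(f"lambda_3 = {lam3:3.1f}:  M_H+- >= {m_min:6.1f} GeV")
```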
The model can be constrained by using the null results of LEP and LHC searches for supersymmetric particles
Figure 1: Summary of the collider constraints on the parameter space of the model displayed on the plane of \((M_{H^{\pm}},M_{N_{R}})\). We show the constraints from LEP searches of sleptons and charginos (red), LHC searches for sleptons in the compressed regime (blue) and constraints from LHC searches of sleptons and charginos for large mass splittings (green). Here, we assume that the charged singlet scalar decays to \(\mu^{\pm}N_{R}\) with a branching fraction of 100% and assume the Narrow Width Approximation (NWA) by selecting parameters for which we have \(\Gamma_{H^{\pm}}/M_{H^{\pm}}<0.15\). The gray dashed line corresponds to the kinematical boundary above which the \(N_{R}\) particle is not a suitable dark matter candidate.
[66; 67; 68]. The OPAL collaboration of the LEP experiment has searched for charginos decaying into a charged lepton and the lightest supersymmetric neutralino using \(680~{}{\rm pb}^{-1}\) of integrated luminosity [66]. Assuming that the branching ratio of \(H^{\pm}\to\mu^{\pm}N_{R}\) is 100%, the production of charged singlet pairs occurs through gauge interactions (\(s\)-channel diagrams with the exchange of \(\gamma^{*}/Z^{0}\)). This search excludes charged singlet masses up to about 100 GeV for any value of \(Y_{\mu N}\), as can be seen clearly in the red contour of figure 1. The ATLAS collaboration at the LHC has also searched for sleptons and charginos assuming a 100% branching fraction to a charged lepton and a neutralino. These searches targeted large mass splittings \(\Delta=m_{\tilde{\ell}}-m_{\chi^{0}}\geq 80\) GeV [67] and compressed spectra with a mass splitting as low as 0.55 GeV [68]. The two searches utilized a total luminosity of \(139~{}{\rm fb}^{-1}\). The first search excludes charged singlet masses lighter than about 440 GeV, while the search for compressed spectra excludes the whole compressed region for \(M_{H^{\pm}}\) up to 150 GeV (see the blue and green contours of figure 1).
### Examples of UV completions
In this section, we discuss the UV completions of this minimal framework. In general, there are two ways to UV complete the first term in \(\mathcal{L}_{S}\): _(i)_ assume it to be a part of a radiative neutrino mass model or _(ii)_ embed it in a grand-unified theory, \(SU(5)\) for example. We start with the radiative neutrino mass models. The most economical way to extend this model is through the so-called Krauss-Nasri-Trodden (KNT) three-loop radiative neutrino mass model [69]. In addition to \(S\) and \(N_{R}\), the KNT model extends the SM with an additional scalar singlet that is even under \(Z_{2}\). Another possibility is through the so-called scotogenic model, which extends the SM with one inert doublet and three right-handed fermions [70]. The phenomenology of the scotogenic model has been widely studied in the literature [71; 72; 73; 74; 75; 76; 77; 78]. The relevant interaction becomes
\[\mathcal{L}\supset h_{\alpha\beta}\bar{L}_{L\alpha}(i\sigma_{2})\Phi_{\rm{IDM }}N_{\beta}\supset h_{\alpha\beta}\bar{\ell}_{L\alpha}SN_{\beta}, \tag{7}\]
where \(\Phi_{\rm{IDM}}=(S,(h_{2}+ia_{2})/\sqrt{2})^{T}\), and \(\alpha,\beta\) are generation indices. Identifying (7) with the first term in \(\mathcal{L}_{S}\) we have \(Y_{eN}=h_{11},Y_{\mu N}=h_{21},Y_{\tau N}=h_{31}\). We must stress that the gauge interactions of the singlet scalar in this model are different from those of the scotogenic model, due to the fact that \(S\) is a member of an \(SU(2)_{L}\) doublet there while it is a singlet in the present model.
The first term in equation (2) can be obtained from a grand-unified theory; for example, by embedding the SM into a \(SU(5)\) gauge group with the matter fields belonging to the \(\mathbf{10}_{F}\) and \(\mathbf{\bar{5}}_{F}\) representations, the charged singlet belongs to the \(\mathbf{10}_{H}\) representation and the right handed neutrino belongs to the singlet representation \(\mathbf{1}_{\alpha}\), in which case we can write
\[\mathcal{L}_{\rm{int}}=g_{\alpha\beta}\overline{\mathbf{10}}_{\alpha}\otimes \mathbf{10}_{H}\otimes\mathbf{1}_{N_{\beta}}\supset g_{\alpha\beta}\ell_{R \alpha}^{T}CN_{\beta}S^{+}. \tag{8}\]
In addition to the minimal \(SU(5)\), we can obtain the first term of equation (2) from a flipped-\(SU(5)\otimes U(1)_{X}\) grand-unified theory. Here, the right-handed charged lepton field is a singlet under \(SU(5)\) while the right-handed neutral fermion (\(N_{R}\)) is a member of the \(\mathbf{10}_{\alpha}\) representation. In this case, we have
\[\mathcal{L}_{\rm{int}} =\frac{h_{\alpha\beta}}{\Lambda}\overline{\mathbf{10}}_{\alpha} \otimes\tilde{\mathbf{1}}_{\beta}\otimes\mathbf{10}_{H}\otimes\mathbf{1}_{S}+h.c.\] \[\supset\frac{h_{\alpha\beta}\langle\mathbf{10}_{H}\rangle}{ \Lambda}N^{T}C\ell_{R}S^{-}, \tag{9}\]
where we integrated out a heavy intermediate state with a scale \(\Lambda\gg\Lambda_{\rm{GUT}}\).
## III Charged lepton flavour violation
The interaction Lagrangian in equation (2) conserves total lepton number to all orders in perturbation theory since the charged singlet possesses a lepton number4. However, the charged singlet scalar can give rise to processes violating the flavour lepton numbers \(L_{\alpha};\alpha=e,\mu,\tau\) at the one-loop order. These charged lepton flavour violating (CLFV) processes fall into three categories: (_i_) \(\ell_{\alpha}\to\ell_{\beta}\gamma\), (_ii_) \(\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\bar{\ell}_{\beta}\) and (_iii_) \(e\)-\(\mu\) conversion in nuclei. In this section we discuss the impact of the CLFV constraints on the model parameter space. The most stringent bounds on the couplings \(Y_{\ell_{\alpha}N}\) come from the branching ratio of the \(\mu\to e\gamma\) decay. The analysis of the CLFV decays in this work is heavily based on the results of refs. [79; 80; 81; 82]. A summary of the current and future bounds on the CLFV decays is shown in Table 1.
\begin{table}
\begin{tabular}{l c c} \hline CLFV decay & Present limit & Future sensitivity \\ \hline \(\mu\to e\gamma\) & \(5.7\times 10^{-13}\)[83] & \(6\times 10^{-14}\)[84] \\ \(\tau\to e\gamma\) & \(3.3\times 10^{-8}\)[85] & \(\sim 10^{-8}-10^{-9}\)[86] \\ \(\tau\to\mu\gamma\) & \(4.4\times 10^{-8}\)[85] & \(\sim 10^{-8}-10^{-9}\)[86] \\ \(\mu\to eee\) & \(1.0\times 10^{-12}\)[87] & \(\sim 10^{-16}\)[88] \\ \(\tau\to eee\) & \(2.7\times 10^{-8}\)[89] & \(\sim 10^{-9}-10^{-10}\)[86] \\ \(\tau\to\mu\mu\mu\) & \(2.1\times 10^{-8}\)[90] & \(\sim 10^{-9}-10^{-10}\)[86] \\ \(H_{\rm{SM}}\to\mu\tau\) & \(1.5\times 10^{-3}\)[90] & \(-\) \\ \(H_{\rm{SM}}\to e\tau\) & \(2.2\times 10^{-3}\)[90] & \(-\) \\ \(H_{\rm{SM}}\to e\mu\) & \(3.5\times 10^{-4}\)[91] & \(-\) \\ \hline \end{tabular}
\end{table}
Table 1: Current experimental bounds and future sensitivities for low-energy CLFV decays and high-energy Higgs boson LFV decays.
### \(\ell_{\alpha}\to\ell_{\beta}\gamma\)
The radiative decays of charged leptons (\(\ell_{\alpha}\to\ell_{\beta}\gamma\)) receive contributions from the exchange of the charged singlet scalar and Majorana DM. After computing the one-loop integrals we get the effective magnetic dipole operator \(\mu_{\beta\alpha}^{M}\overline{\ell}_{\beta}\sigma^{\mu\nu}\ell_{\alpha}F_{\mu \nu}/2\) with \(\mu_{\beta\alpha}=em_{\alpha}A_{\rm M}/2\) and \(A_{\rm M}\) is given by
\[A_{\rm M}=\frac{Y_{\ell_{\beta}N}Y_{\ell_{\alpha}N}}{2(4\pi)^{2}}\frac{1}{M_{H ^{\pm}}^{2}}\mathcal{F}(\xi),\]
where \(\xi=M_{N_{R}}^{2}/M_{H^{\pm}}^{2}\) and \(\mathcal{F}(x)=(1-6x+3x^{2}+2x^{3}-6x^{2}\log x)/(6(1-x)^{4})\) is the one-loop function, which has the following limits: \(\mathcal{F}(x)\to 1/6\) (\(1/12\)) for \(x\to 0\) (\(1\)). The resulting decay branching ratio can be computed easily to give
\[\text{BR}\left(\ell_{\alpha}\to\ell_{\beta}\gamma\right)=\frac{3(4\pi)^{3}\alpha_{\rm EM}}{4G_{F}^{2}}|A_{\rm M}|^{2}\times\text{BR}\left(\ell_{\alpha}\to\ell_{\beta}\nu_{\alpha}\overline{\nu}_{\beta}\right), \tag{10}\]
Here, \(G_{F}=1.166\times 10^{-5}\) GeV\({}^{-2}\), \(\alpha_{\rm EM}=1/137\), and \(\text{BR}\left(\ell_{\alpha}\to\ell_{\beta}\nu_{\alpha}\overline{\nu}_{\beta}\right)\) is the corresponding SM decay branching ratio. We take \(\text{BR}(\mu\to e\nu\bar{\nu}),\text{BR}(\tau\to e\nu\bar{\nu}),\text{BR}(\tau\to\mu\nu\bar{\nu})\approx 1,0.1783,0.1741\)[92].
Using the most recent experimental bounds on \(\text{BR}(\ell_{\alpha}\to\ell_{\beta}\gamma)\) from the MEG [83] and BaBar [85] experiments, we can use equation (10) to derive the following bounds on the products of the couplings:
\[|Y_{eN}Y_{\mu N}| <\left(\frac{2.855\times 10^{-5}}{\text{GeV}}\right)^{2}\frac{M_{H ^{\pm}}^{2}}{|\mathcal{F}(\xi)|},\] \[|Y_{eN}Y_{\tau N}| <\left(\frac{4.428\times 10^{-4}}{\text{GeV}}\right)^{2}\frac{M_{H ^{\pm}}^{2}}{|\mathcal{F}(\xi)|}, \tag{11}\] \[|Y_{\tau N}Y_{\mu N}| <\left(\frac{4.759\times 10^{-4}}{\text{GeV}}\right)^{2}\frac{M_{H ^{\pm}}^{2}}{|\mathcal{F}(\xi)|}.\]
Since the one-loop function varies roughly between \(1/12\) and \(1/6\), the upper bound on the coupling product \(Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}\) is proportional to the square of the charged singlet mass with almost no dependence on \(M_{N_{R}}\). Therefore, the limits are expected to be strong for light \(H^{\pm}\) and become very weak for heavy \(H^{\pm}\). This can be clearly seen in figure 2, where the maximum values of \(|Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|\) allowed by the CLFV decays \(\text{BR}(\ell_{\alpha}\to\ell_{\beta}\gamma)\) are shown as a function of \(\xi=M_{N_{R}}^{2}/M_{H^{\pm}}^{2}\) for \(M_{H^{\pm}}=500,1000,\text{ and }5000\text{ GeV}\). As expected, the bounds on \(|Y_{eN}Y_{\mu N}|\) are the strongest ones, while the bounds on \(|Y_{\tau N}Y_{\mu N}|\) and \(|Y_{eN}Y_{\tau N}|\) are similar. We must stress that the CLFV decays do not constrain the \(Y_{\ell_{\alpha}N}\) couplings _per se_ but only their products. Following this finding, there is some freedom in the choice of the individual couplings, which we exploit when defining benchmark scenarios (see the next sections). Given that this study is mainly concerned with the phenomenology of leptophilic DM models at muon colliders, we choose a scenario where the coupling of dark matter to the muon is quite large while the other couplings are chosen such that they fulfill the experimental bounds on CLFV decays: \(Y_{\mu N}\simeq\mathcal{O}(1)\gtrsim Y_{\tau N}\gg Y_{eN}\).
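A minimal numerical sketch of these bounds, assuming only equation (11) and the loop function \(\mathcal{F}\) defined above (the masses in the example are illustrative), is given below.

```python
import math

def F(x):
    """One-loop dipole function F(x) = (1 - 6x + 3x^2 + 2x^3 - 6x^2 log x)/(6(1-x)^4)."""
    if x == 0.0:
        return 1.0 / 6.0
    if abs(x - 1.0) < 1e-6:
        return 1.0 / 12.0
    return (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2*math.log(x)) / (6 * (1 - x)**4)

# Prefactors of equation (11), in GeV
PREF = {"e-mu": 2.855e-5, "e-tau": 4.428e-4, "tau-mu": 4.759e-4}

def max_coupling_product(channel, M_Hpm, M_NR):
    """Upper bound on |Y_{la N} Y_{lb N}| from BR(l_a -> l_b gamma), equation (11)."""
    xi = (M_NR / M_Hpm)**2
    return PREF[channel]**2 * M_Hpm**2 / abs(F(xi))

# Illustrative masses: M_H+- = 500 GeV, M_NR = 50 GeV
for ch in PREF:
    print(ch, max_coupling_product(ch, 500.0, 50.0))
```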
### \(\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\bar{\ell}_{\beta}\)
It is also worthwhile to discuss the constraints from the CLFV decays \(\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\bar{\ell}_{\beta}\). These processes receive four contributions at the one-loop order: penguin diagrams with the exchange of \(\gamma\), \(Z\) and \(H_{\rm SM}\), and box diagrams. The contribution of the SM Higgs boson is suppressed due to the smallness of the Higgs-lepton Yukawa coupling. The corresponding branching ratio is given by [82]
\[{\rm BR}(\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\ell_{\beta})=\frac{3(4\pi)^{2}\alpha_{\rm EM}}{8G_{F}^{2}}\bigg[\overbrace{|A_{\rm ND}|^{2}+|A_{\rm M}|^{2}\left(\frac{16}{3}\log\left(\frac{m_{\alpha}}{m_{\beta}}\right)-\frac{22}{3}\right)}^{\gamma~{\rm penguin}}+\overbrace{\frac{1}{3}\left(2|Z_{\rm RR}|^{2}+|Z_{\rm RL}|^{2}\right)}^{Z~{\rm penguin}}\] \[+\frac{1}{6}|B_{\rm box}|^{2}+\underbrace{2\,{\rm Re}\left(-2A_{\rm ND}A_{\rm M}^{*}+\frac{1}{3}A_{\rm ND}B_{\rm box}^{*}-\frac{2}{3}A_{\rm M}B_{\rm box}^{*}\right)}_{\rm Interference}\bigg]\times{\cal B}, \tag{12}\]
where \({\cal B}\equiv{\rm BR}(\ell_{\alpha}\to\ell_{\beta}\nu_{\alpha}\bar{\nu}_{\beta})\). The contribution of the \(\gamma\)-penguins consists of the magnetic or dipole (\(A_{\rm M}\)) and the non-dipole (\(A_{\rm ND}\)) contributions. The dipole contribution is the same as in \({\rm BR}(\ell_{\alpha}\to\ell_{\beta}\gamma)\) but enhanced by a factor of \(\frac{16}{3}\log(m_{\alpha}/m_{\beta})-\frac{22}{3}\), which varies between 7 and 36 for \(\tau\to 3\mu\) and \(\tau\to 3e\) respectively. The non-dipole contribution is given by
\[A_{\rm ND}=\frac{Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}}{6(4\pi)^{2}}\frac{1}{M_ {H^{\pm}}^{2}}{\cal G}(\xi),\]
with \({\cal G}(x)=(2-9x+18x^{2}-11x^{3}+6x^{3}\log x)/(6(1-x)^{4})\) being the one-loop function for the non-dipole \(\gamma\)-penguin. This function has the following limits: \(\lim_{x\to 0}{\cal G}(x)=1/3\) and \(\lim_{x\to 1}{\cal G}(x)=1/4\). Therefore, the dipole \(\gamma\)-penguin contribution is large compared to the non-dipole contribution; \(\lim_{x\to 0}\,(\lim_{x\to 1})A_{\rm M}/A_{\rm ND}\times\big{(}(16/3)\log(m_{\alpha}/m_{\beta})-22/3\big{)}\approx\{3.5,11,18\}\) (\(\{2,7,12\}\)) for \(\tau\to 3\mu,\mu\to 3e\), and \(\tau\to 3e\) respectively. The \(Z\)-penguin contribution is given by
\[Z_{\rm RR}=\frac{g_{R}^{\ell}Z_{\rm ND}}{g_{1}^{2}\sin^{2}\theta_{W}M_{Z}^{2}},~{}Z_{\rm RL}=\frac{g_{L}^{\ell}Z_{\rm ND}}{g_{1}^{2}\sin^{2}\theta_{W}M_{Z}^ {2}}, \tag{13}\]
where \(g_{R}^{\ell},g_{L}^{\ell}\) are the right and left-handed components of the \(Z\)-boson couplings to charged leptons, \(g_{1}\) is the \(SU(2)_{L}\) gauge coupling, \(\sin\theta_{W}\) is the sine of the Weinberg mixing angle, and \(Z_{\rm ND}\) is the momentum-independent \(Z\)-boson form factor which is given by
\[Z_{\rm ND}=\frac{Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}}{2(4\pi)^{2}}\frac{m_{ \alpha}m_{\beta}}{M_{H^{\pm}}^{2}}\frac{g_{1}}{\cos\theta_{W}}{\cal F}(\xi).\]
We can see that the \(Z\)-penguin contribution involves an extra suppression by a factor of \(m_{\alpha}m_{\beta}\) as compared to the dipole \(\gamma\)-contribution. Finally, the box contribution is given by
\[B_{\rm box}=\frac{Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}^{3}}{2^{7}\pi^{3}\alpha_ {\rm EM}M_{H^{\pm}}^{2}}\bigg{[}{\cal D}_{1}(\xi)+2\xi{\cal D}_{2}(\xi)\bigg{]}, \tag{14}\]
where \({\cal D}_{1,2}(x)\) are the one-loop box functions given by \({\cal D}_{1}(x)=(-1+x^{2}-2x\log x)/(1-x)^{3}\) and \({\cal D}_{2}(x)=(-2+2x-(1+x)\log x)/(1-x)^{3}\). The contribution of the box diagrams, contrary to the penguins, carries an extra factor of \(Y_{\ell_{\beta}N}^{2}\). Therefore, it may dominate for large couplings of the daughter lepton to DM. In this work, we check that the benchmark scenarios satisfy the bounds from the \(\ell_{\alpha}\to 3\ell_{\beta}\) decays (see Table 2).
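For completeness, a small Python sketch of the loop functions entering \({\rm BR}(\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\bar{\ell}_{\beta})\) is given below, with the limiting values quoted in the text checked numerically; the guards near \(x=1\) are a numerical convenience (the corresponding limit values), not part of the analytical expressions.

```python
import math

def G(x):
    """Non-dipole gamma-penguin function: (2 - 9x + 18x^2 - 11x^3 + 6x^3 log x)/(6(1-x)^4)."""
    if abs(x - 1.0) < 1e-6:
        return 0.25   # quoted limit G(1) = 1/4
    return (2 - 9*x + 18*x**2 - 11*x**3 + 6*x**3*math.log(x)) / (6 * (1 - x)**4)

def D1(x):
    """Box function D1(x) = (-1 + x^2 - 2x log x)/(1 - x)^3."""
    if abs(x - 1.0) < 1e-6:
        return -1.0 / 3.0   # limit value obtained from a Taylor expansion around x = 1
    return (-1 + x**2 - 2*x*math.log(x)) / (1 - x)**3

def D2(x):
    """Box function D2(x) = (-2 + 2x - (1 + x) log x)/(1 - x)^3."""
    if abs(x - 1.0) < 1e-6:
        return 1.0 / 6.0    # limit value obtained from a Taylor expansion around x = 1
    return (-2 + 2*x - (1 + x)*math.log(x)) / (1 - x)**3

# Limits quoted in the text: G(0) = 1/3 and G(1) = 1/4
print(G(1e-9), G(1 - 1e-7))
```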
### \(H_{\rm SM}\to\ell_{\alpha}\bar{\ell}_{\beta}\)
We close this section with a brief discussion of the CLFV decays of the SM Higgs boson. These decays have been searched for by the ATLAS and the CMS collaborations, with the strongest bounds reported by the CMS collaboration [90; 91]. In this model, the CLFV decays of the SM Higgs boson are directly correlated with the radiative CLFV decays of the charged leptons, since they are controlled by the same combinations of couplings. The constraints from CLFV of charged leptons therefore imply that the CLFV decays of the SM Higgs boson are extremely suppressed and may even be beyond the future reach of the LHC and future colliders. The SM Higgs boson decay into \(\ell_{\alpha}\ell_{\beta}\) is given by [93]
\[{\rm BR}(H_{\rm SM}\to\ell_{\alpha}\ell_{\beta})\simeq 1.2\times 10^{3}\times|y_{ \ell_{\alpha}}Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|^{2}\bigg{(}\frac{\lambda_{3 }}{4\pi}\bigg{)}^{2}\bigg{(}\frac{v}{M_{H^{\pm}}}\bigg{)}^{4}, \tag{15}\]
with \(y_{\ell_{\alpha}}=m_{\ell_{\alpha}}/(\sqrt{2}v)\) being the Higgs-lepton Yukawa coupling of the heavier lepton (chosen here to be \(\ell_{\alpha}\)). In this formula, the contribution of the lighter lepton is neglected. We expect the bounds from \(H_{\rm SM}\to\ell_{\alpha}\bar{\ell}_{\beta}\) searches to be very weak. This can be clearly seen from Table 2 for the benchmark points we have used in this study.
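As a rough numerical illustration, the sketch below evaluates only the approximate scaling relation of equation (15); the coupling and mass values are hypothetical, and the result should not be expected to coincide exactly with the full one-loop numbers quoted in Table 2.

```python
import math

V = 246.0  # electroweak vev in GeV

def br_higgs_lfv(m_l_heavy, Y_a, Y_b, lam3, M_Hpm):
    """Approximate BR(H_SM -> l_alpha l_beta) from the scaling relation in equation (15)."""
    y_alpha = m_l_heavy / (math.sqrt(2.0) * V)  # Yukawa coupling of the heavier lepton
    return 1.2e3 * abs(y_alpha * Y_a * Y_b)**2 * (lam3 / (4.0 * math.pi))**2 * (V / M_Hpm)**4

# Hypothetical inputs: tau-mu channel with Y_tauN * Y_muN ~ 0.1, lambda_3 = 4, M_H+- = 1 TeV
print(br_higgs_lfv(m_l_heavy=1.777, Y_a=1.0, Y_b=0.1, lam3=4.0, M_Hpm=1000.0))
```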
## IV Dark matter
In this section, we discuss the DM phenomenology within this model. We start with the calculation of the relic density of the \(N_{R}\) particles in section IV.1 and then move to a detailed analysis of the spin-independent DM-nucleus scattering cross section in section IV.2. Next, we derive the constraints on the couplings \(Y_{\ell N}\) by analysing the Higgs invisible decays and conclude with a selection
of the benchmark points that are compatible with all the theoretical and the experimental constraints in section IV.4.
### Relic density
The relic density of the \(N_{R}\) particles receives contributions from both the annihilation and the co-annihilation. The co-annihilation becomes active when the mass splitting \(\Delta\equiv M_{H^{\pm}}-M_{N_{R}}<0.1\times M_{N_{R}}\), while the annihilation contributes over the whole parameter space. For the annihilation, there are two major contributions: _(i)_ \(N_{R}N_{R}\rightarrow\ell^{+}_{\alpha}\ell^{-}_{\beta}\) from the exchange of the charged scalar singlet in the \(t\)- and \(u\)-channels, and _(ii)_ \(N_{R}N_{R}\rightarrow\sum_{X\in\mathrm{SM}}X\overline{X}\) which arises from the exchange of the SM Higgs boson via \(s\)-channel diagrams. Note that \(s\)-channel contributions to the relic density are negligible in our model if one demands perturbativity of the couplings. The reason is that the leading order contribution to the \(s\)-channel annihilation amplitudes arises at the one-loop order. To obtain the relic density of the \(N_{R}\) particles, one must solve the Boltzmann equation given by [94; 95; 1]
\[\frac{\mathrm{d}n_{N_{R}}}{\mathrm{d}t}+3Hn_{N_{R}}=-2\langle \sigma_{N_{R}}v\rangle\bigg{[}(n_{N_{R}})^{2}-(n_{N_{R}}^{\mathrm{eq}})^{2} \bigg{]} \tag{16}\]
with \(H=\dot{a}/a\), \(n_{N_{R}}\) is the number density of the \(N_{R}\) particle and \(n_{N_{R}}^{\mathrm{eq}}\approx g_{N_{R}}\left(\frac{M_{N_{R}}T}{2\pi}\right)^{3/2}e^{-M_{N_{R}}/T}\) is its number density at thermal equilibrium. Note that in the absence of interactions that change the number density of \(N_{R}\), the right-hand side of equation (16) would be equal to zero and \(n_{N_{R}}\propto a^{-3}\). This equation can be solved to give approximately
\[\Omega_{\mathrm{DM}}h^{2}\simeq\frac{3\times 10^{-27}\ \mathrm{cm}^{3}\mathrm{s }^{-1}}{\langle\sigma(x_{f})v\rangle}, \tag{17}\]
where \(\langle\sigma(x_{f})v\rangle\) is the thermally-averaged annihilation cross section for the \(N_{R}\) particle
\[\langle\sigma(x_{f})v\rangle=\frac{1}{8M_{N_{R}}^{4}T_{f}K_{2}^{2} (M_{N_{R}}/T_{f})}\sum_{\alpha,\beta}\int_{4M_{N_{R}}^{2}}^{\infty}\mathrm{d} \hat{s}\sqrt{\hat{s}-4M_{N_{R}}^{2}}K_{1}(\sqrt{\hat{s}}/T_{f})\sigma_{N_{R}N _{R}\rightarrow\ell_{\alpha}\ell_{\beta}}(\hat{s}), \tag{18}\]
where \(K_{1}(x)\) and \(K_{2}(x)\) are the modified Bessel functions of the second kind and \(\sigma_{N_{R}N_{R}\rightarrow\ell_{\alpha}\ell_{\beta}}(\hat{s})\) is the annihilation cross section into charged lepton which is given by
\[\sigma_{N_{R}N_{R}\rightarrow\ell_{\alpha}\ell_{\beta}}(\hat{s}) =\frac{1}{2^{3}\pi}\frac{|Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|^{2}}{\hat{s} \ \hat{\kappa}_{1}}\bigg{[}(m_{\ell_{\alpha}}^{2}+m_{\ell_{\beta}}^{2})(\hat{s}-2 M_{N_{R}}^{2})+\frac{1}{6}\frac{\hat{\kappa}_{2}}{\hat{\kappa}_{1}}\hat{s}(\hat{s}-4M_{N _{R}}^{2})\bigg{]}, \tag{19}\]
where \(\hat{\kappa}_{i}\equiv\hat{\kappa}_{i}(M_{H^{\pm}}^{2},M_{N_{R}}^{2},\hat{s})\), \(\hat{\kappa}_{1}(x,y,z)=(2x+2y-z)^{2}\) and \(\hat{\kappa}_{2}(x,y,z)=(4x-4y+z)^{2}-2z^{2}\). To simplify the discussion about the relic density, we consider the annihilation cross section in the limit \(\hat{s}\to 4M_{N_{R}}^{2}\)
\[\sigma_{N_{R}N_{R}\rightarrow\ell_{\alpha}\ell_{\beta}}\approx \frac{|Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|^{2}}{2^{6}\pi M_{H^{\pm}}^{4}}(m_{ \ell_{\alpha}}^{2}+m_{\ell_{\beta}}^{2})\bigg{(}1+\frac{M_{N_{R}}^{2}}{M_{H^{ \pm}}^{2}}\bigg{)}^{-2}.\]
This equation simply tells us that the contribution of the annihilation to the relic density becomes very small for very heavy charged singlet scalar and one needs to have large \(Y_{\ell N}\) to produce the correct relic density. On the other hand, for large values of the mass splitting and heavy charged singlet scalar one cannot reproduce the correct relic abundance if one demands perturbativity of the couplings. The co-annihilations are more involved in this model as we can have additional contributions that have different dependence on the model parameters. There are two generic co-annihilation channels: \(N_{R}H^{\pm}\rightarrow\mathrm{SM}\) and \(H^{\pm}H^{\mp}\rightarrow\mathrm{SM}\). Below, we list the individual contributions and the overall dependence of the corresponding cross section
\[N_{R}H^{\pm}\rightarrow\ell^{\pm}_{\alpha}H_{\mathrm{SM}}: \sigma\propto\lambda_{3}^{2}Y_{\ell_{\alpha}N}^{2},\] \[N_{R}H^{\pm}\rightarrow\ell^{\pm}_{\alpha}Z,\ell^{\pm}_{\alpha} \gamma,\nu W^{\pm}: \sigma\propto Y_{\ell_{\alpha}N}^{2},\] \[H^{\pm}H^{\mp}\rightarrow\ell^{\pm}_{\alpha}\ell^{\mp}_{\beta}: \sigma\propto|Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|^{2}\mathcal{A }_{1}+|Y_{\ell_{\alpha}N}Y_{\ell_{\beta}N}|\mathcal{A}_{2}+\mathcal{A}_{3},\] \[H^{\pm}H^{\mp}\rightarrow q\bar{q}: \sigma\propto\lambda_{3}^{2}\mathcal{B}_{1}+\lambda_{3}\mathcal{B }_{2}+\mathcal{B}_{3},\] \[H^{\pm}H^{\mp}\to ZZ,H_{\mathrm{SM}}Z,W^{\pm}W^{\mp}: \sigma\propto\lambda_{3}^{2}\mathcal{C}_{1}+\lambda_{3}\mathcal{C}_{2}+ \mathcal{C}_{3},\] \[H^{\pm}H^{\mp}\rightarrow H_{\mathrm{SM}}H_{\mathrm{SM}}: \sigma\propto\lambda_{3}^{4}\mathcal{D}_{1}+\lambda_{3}^{2}\mathcal{D}_{2},\]
with \({\cal A}_{i},{\cal B}_{i},{\cal C}_{i}\) and \({\cal D}_{i}\) being real-valued coefficients that depend on the dark matter mass, the charged singlet scalar mass and the final-state particles. The co-annihilation becomes very active for quite large \(\lambda_{3}\) and \(Y_{\ell N}\) and may even drive the relic density to very small values (orders of magnitude smaller than the observed abundance). In general, the co-annihilation is dominated by contributions of the following two processes: \(H^{\pm}H^{\mp}\to 2H_{\rm SM}\) and \(N_{R}H^{\pm}\to\ell_{\alpha}^{\pm}H_{\rm SM}\). In the presence of co-annihilations, the Boltzmann equations become
\[\frac{{\rm d}n_{N_{R}}}{{\rm d}t}+3Hn_{N_{R}} = -2\langle\sigma_{\rm eff}v_{r}\rangle\bigg{[}(n_{N_{R}})^{2}-(n_ {N_{R}}^{\rm eq})^{2}\bigg{]}+N\Gamma_{H^{\pm}}n_{H^{\pm}}, \tag{20}\] \[\frac{{\rm d}n_{H^{\pm}}}{{\rm d}t}+3Hn_{H^{\pm}} = -\Gamma_{H^{\pm}}n_{H^{\pm}}, \tag{21}\]
where \(N\) is the mean number of \(N_{R}\) particles, \(n_{H^{\pm}}\) is the number density of \(H^{\pm}\) and \(\Gamma_{H^{\pm}}\) is its total width. Note that here we have replaced the thermally-averaged annihilation cross section in equation (16) by the effective cross section
\[\langle\sigma_{\rm eff}v_{r}\rangle=\sum_{i,j\in\{N_{R},H^{\pm}\}} \langle\sigma(ij\to{\rm SM})v_{r}\rangle\frac{n_{i}^{\rm eq}n_{j}^{\rm eq}} {(n_{N_{R}}^{\rm eq})^{2}}. \tag{22}\]
The relic density of \(N_{R}\) is obtained from the numerical solutions of the coupled Boltzmann equations (20) and (21). MadDM version 3.0 is used to solve the Boltzmann equations and compute the relic density of \(N_{R}\)[96]. In figure 3, we show the values of the coupling \(Y_{\ell N}\) consistent with the measurement of the relic density by the Planck collaboration, projected on the mass of the dark matter and the mass of the charged singlet scalar. We can see that the relic abundance of the \(N_{R}\) is consistent with the Planck measurement only for very specific regions. If the mass splitting between \(H^{\pm}\) and \(N_{R}\) is large, we need large values of \(Y_{\ell N}\). However, even for \(Y_{\ell N}\) near the perturbativity bound the mass splitting cannot be arbitrarily large: \(\Delta_{\rm max}\approx 600\) (2000) GeV for \(M_{N_{R}}=10\) (100) GeV. The relic density becomes almost independent of \(Y_{\ell N}\) for large \(M_{N_{R}}\) in the co-annihilation regions. We conclude this section by noting that the model cannot reproduce the correct relic density with the standard freeze-out mechanism in the region marked in blue in figure 3, as this would require \(Y_{\ell N}\) beyond the perturbativity bound.
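To make the parametric dependence explicit, the short sketch below combines the rule-of-thumb estimate of equation (17) with the \(\hat{s}\to 4M_{N_{R}}^{2}\) limit of the annihilation cross section quoted above. The unit-conversion factor and the input values are assumptions for illustration; since this s-wave piece is helicity-suppressed by the lepton masses, realistic numbers require the full thermal average of equation (19), together with co-annihilation, as done with MadDM.

```python
import math

GEV_M2_TO_CM3S = 1.17e-17  # approximate conversion of sigma*v from GeV^-2 to cm^3/s

def sigma_swave(Y_a, Y_b, m_a, m_b, M_NR, M_Hpm):
    """s-wave limit of sigma(N_R N_R -> l_a l_b) in GeV^-2 (helicity-suppressed)."""
    pref = abs(Y_a * Y_b)**2 / (2**6 * math.pi * M_Hpm**4)
    return pref * (m_a**2 + m_b**2) / (1.0 + M_NR**2 / M_Hpm**2)**2

def omega_h2_estimate(sigma_v_cm3s):
    """Rule-of-thumb relic abundance of equation (17)."""
    return 3.0e-27 / sigma_v_cm3s

# A typical thermal cross section of ~3e-26 cm^3/s gives Omega h^2 ~ 0.1
print(omega_h2_estimate(3.0e-26))

# The helicity-suppressed s-wave piece alone (muon final state, hypothetical inputs)
# is far too small, illustrating why large couplings or co-annihilation are needed.
print(sigma_swave(2.8, 2.8, 0.1057, 0.1057, 50.0, 500.0) * GEV_M2_TO_CM3S)
```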
### Direct detection
We now turn to a discussion of the constraints from direct detection experiments on the model parameter space. In this model, the scattering of \(N_{R}\) off a nucleus with mass number \(A\) occurs at the one-loop order, where the SM Higgs boson plays the role of a portal. The generic formula for the spin-independent cross section is given by5
Footnote 5: The spin-dependent cross section is very small in our model as the exchanged particle is the SM Higgs boson which is a scalar particle with \(J^{P}=0^{+}\). Nevertheless, we will compute this observable for the benchmark points (see Table 2).
Figure 4: The spin-independent cross section as a function of the dark matter mass \(M_{N_{R}}\), while the colored scatter points correspond to the charged singlet scalar mass \(M_{H^{\pm}}\). In the same plot, we show the current bounds from Xenon 1T [15] in dashed sienna, and the future expectations from Xenon nT [102], LUX LZ [103] and DarkSide G2 [104]. The shaded orange area marked by "neutrino floor" corresponds to the backgrounds from the coherent scattering with solar neutrinos, atmospheric neutrinos and supernova neutrinos [105]. The spin-independent cross section was scaled by a factor of \(\xi_{\text{Planck}}=\Omega_{N_{R}}h^{2}/\Omega_{\text{Planck}}h^{2}\) with \(\Omega_{\text{Planck}}h^{2}\approx 0.12\). All the calculations were performed for \(Y_{\ell N}=2\) and \(\lambda_{3}=4\).
Figure 3: Values of the coupling \(Y_{\ell N}\) consistent with the measurement of the relic density by the Planck collaboration, projected on the mass of the dark matter and the mass of the charged singlet scalar. The isolines corresponding to \(\Omega h^{2}\approx 0.12\) are shown for \(Y_{\ell N}=1,2,3,4,5,7.5\) and \(4\pi\). The blue shaded area corresponds to the region where perturbativity is broken, while the shaded gray region corresponds to the kinematically forbidden region \(M_{N_{R}}>M_{H^{\pm}}\) in which \(N_{R}\) is not stable and therefore not a suitable dark matter candidate.
The parton-level scattering amplitude is
\[\mathcal{M}_{qN_{R}\to qN_{R}}=\mathcal{A}_{q}\bar{\psi}_{q}(p_{\rm out})\psi_{q}( p_{\rm in}), \tag{25}\]
where \(\mathcal{A}_{q}\) is connected to the non-hadronic part of the amplitude. The term \(\bar{\psi}_{q}(p_{\rm out})\psi_{q}(p_{\rm in})\) should be incorporated in a hadronic current \(\langle\mathcal{N}|\cdot|\mathcal{N}\rangle\)
\[\langle\mathcal{N}|\bar{\psi}_{q}\psi_{q}|\mathcal{N}\rangle=\left\{\begin{array}{ll}\frac{m_{\mathcal{N}}}{m_{q}}\cdot\mathcal{S}_{\mathcal{N}}^{q},&\text{ for }q=u,d,s,\\ \frac{2}{27}\frac{m_{\mathcal{N}}}{m_{q}}\cdot\mathcal{S}_{\mathcal{N}}^{g},&\text{ for }q=c,b,t,\end{array}\right. \tag{26}\]
where \(\mathcal{N}=p,n\). The model-dependent non-hadronic form factor is given by
\[\mathcal{A}_{q}=\frac{\tilde{y}(Q^{2}\approx 0)}{M_{H_{\rm SM}}^{2}}\cdot \frac{m_{q}}{v}\bar{\psi}_{N_{R}}(k_{\rm out})\psi_{N_{R}}(k_{\rm in}), \tag{27}\]
here \(\tilde{y}(Q^{2}\approx 0)\) is the effective \(H_{\rm SM}N_{R}N_{R}\) coupling computed in the low energy limit. With the help of the Package X [107], we can obtain it from equation (30)
\[\tilde{y}(Q^{2}\approx 0)\simeq-\frac{\lambda_{3}v|Y_{\ell N}|^{2}}{16\pi M_{H ^{\pm}}}\frac{1}{\varrho_{N}}\bigg{[}1-\left(1-\varrho_{N}^{-2}\right)\log \left(1-\varrho_{N}^{2}\right)\bigg{]}\equiv-\frac{\lambda_{3}v|Y_{\ell N}|^{2 }}{16\pi M_{H^{\pm}}}\mathcal{H}(\varrho_{N}), \tag{28}\]
where \(\varrho_{N}=M_{N_{R}}/M_{H^{\pm}}\). \(\mathcal{H}(x)\) is a monotonically increasing function of \(x\) in the interval \([0,1]\) and has the following limits: \(\lim_{x\to 0}\mathcal{H}(x)=0\) and \(\lim_{x\to 1}\mathcal{H}(x)=1\). Note that the first limit corresponds to a small dark matter mass and a heavy charged scalar, for which the model cannot reproduce the correct relic abundance, while the second limit corresponds to the nearly degenerate scenario where co-annihilation is the most active component in the relic abundance calculation. In addition, the effective coupling involves an extra suppression by \(1/M_{H^{\pm}}\), which simply means that the direct detection spin-independent cross section is always below the neutrino floor for heavy \(H^{\pm}\). From equation (28) one also expects the spin-independent cross section to be proportional to \(|Y_{\ell N}|^{4}\). Therefore, large-\(Y_{\ell N}\) regions with large \(\sigma_{\rm SI}\) would also correspond to a small relic density (which is proportional to \(1/|Y_{\ell N}|^{4}\))6 and for these scenarios \(\sigma_{\rm SI}\) needs to be scaled by a factor \(\xi_{\rm Planck}\equiv\Omega_{N_{R}}h^{2}/\Omega_{\rm Planck}h^{2}\). This means that the spin-independent cross section is consistent with the current Xenon 1T bounds [15] for most regions of the parameter space, as we can see clearly in figure 4.
Footnote 6: This is consistent with our previous finding in [29] where a strong anti-correlation between \(\sigma_{\rm SI}\) and \(\Omega_{N_{R}}h^{2}\) was observed.
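A small numerical sketch of the effective coupling of equation (28) is given below; the loop function \(\mathcal{H}\) is guarded at small argument for numerical stability, the choice \(Y_{\ell N}=2\), \(\lambda_{3}=4\) follows figure 4, and the masses are illustrative.

```python
import math

V = 246.0  # electroweak vev in GeV

def H_loop(x):
    """Loop function H(x) of equation (28), with H(0) = 0 and H(1) = 1."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    if x < 1.0e-3:
        return 0.5 * x  # leading small-x behaviour, avoids numerical cancellation
    return (1.0 - (1.0 - 1.0 / x**2) * math.log(1.0 - x**2)) / x

def y_eff_dd(lam3, Y_lN, M_NR, M_Hpm):
    """Effective H_SM N_R N_R coupling in the low-energy limit, equation (28)."""
    rho = M_NR / M_Hpm
    return -lam3 * V * abs(Y_lN)**2 / (16.0 * math.pi * M_Hpm) * H_loop(rho)

print(y_eff_dd(lam3=4.0, Y_lN=2.0, M_NR=100.0, M_Hpm=500.0))
```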
### Higgs invisible decay
The Higgs invisible decay occurs at the one-loop order with the exchange of charged scalar and right-handed fermion. The partial decay width is given by
\[\Gamma(H_{\rm SM}\to N_{R}N_{R})=\frac{M_{H_{\rm SM}}|\tilde{y}|^{2}}{8\pi} \bigg{(}1-\frac{4M_{N_{R}}^{2}}{M_{H_{\rm SM}}^{2}}\bigg{)}^{3/2}, \tag{29}\]
with \(\tilde{y}\) is the one-loop induced effective \(H_{\rm SM}\)-\(N_{R}\)-\(N_{R}\) coupling which is given by
\[\tilde{y}=\frac{\lambda_{3}vM_{N_{R}}}{16\pi^{2}}\sum_{\ell}|Y_{\ell N}|^{2}(C _{0}+C_{2}), \tag{30}\]
with \(C_{i}\equiv C_{i}(M_{N_{R}}^{2},M_{H_{\rm SM}}^{2},M_{N_{R}}^{2},m_{\ell}^{2},M_{H^{\pm}}^{2},M_{H^{\pm}}^{2}),i=0,2\) being the Passarino-Veltman three-point functions [108]. The computation of the Feynman amplitudes has been performed using FeynArts, FormCalc, and LoopTools[109; 110]. We have used a Python interface to LoopTools to evaluate numerically the one-loop integrals7. We define the Higgs invisible branching ratio as
Footnote 7: pylooptools is a Python binding to LoopTools and can be found in this github directory: [https://github.com/djukanovic/pylooptools.git](https://github.com/djukanovic/pylooptools.git).
\[B_{\rm inv}\equiv\frac{\Gamma(H_{\rm SM}\to N_{R}N_{R})}{\Gamma(H_{\rm SM} \to N_{R}N_{R})+\Gamma_{H}^{\rm SM}}, \tag{31}\]
where \(\Gamma_{H}^{\rm SM}=4.07\ {\rm MeV}\). Using equations (29), (30) and (31), we can obtain bounds on the coupling \(Y_{\ell N}\). The bound is analytically defined by
\[Y_{\ell N}<\left(\frac{2048\pi^{5}\Gamma_{H}^{\rm SM}}{\beta_{N}^{3/2}M_{H_{\rm SM }}\lambda_{3}^{2}v^{2}M_{N_{R}}^{2}|C_{0}+C_{2}|^{2}\left(\frac{1}{B_{\rm bound} }-1\right)}\right)^{1/4},\]
where \(\beta_{N}\equiv(1-4M_{N_{R}}^{2}/M_{H_{\rm SM}}^{2})\) and \(B_{\rm bound}\) is the upper bound on \({\rm BR}_{\rm inv}\).
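A minimal sketch of the invisible branching ratio of equations (29) and (31) is shown below; since evaluating the Passarino-Veltman functions \(C_{0,2}\) requires LoopTools, the effective coupling \(\tilde{y}\) is taken here as a direct, hypothetical input.

```python
import math

M_H_SM = 125.0          # SM Higgs boson mass in GeV
GAMMA_H_SM = 4.07e-3    # total SM Higgs width in GeV

def br_invisible(y_tilde, M_NR):
    """Higgs invisible branching ratio from equations (29) and (31),
    for a given one-loop effective coupling y_tilde (taken as input)."""
    if 2.0 * M_NR >= M_H_SM:
        return 0.0
    beta = (1.0 - 4.0 * M_NR**2 / M_H_SM**2)**1.5
    gamma_inv = M_H_SM * abs(y_tilde)**2 / (8.0 * math.pi) * beta
    return gamma_inv / (gamma_inv + GAMMA_H_SM)

# Illustrative: an effective coupling of O(10^-3) (hypothetical) and M_NR = 50 GeV
print(br_invisible(y_tilde=1.0e-3, M_NR=50.0))
```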
Searches for Higgs invisible decays have been carried out by the ATLAS and the CMS collaborations [111; 112; 113]. The strongest up-to-date bound on \(B_{\rm inv}\) was reported by the CMS collaboration using a combination
of previous Higgs to invisible decay searches at \(7,8\) and \(13\) TeV, where it has been found that \(B_{\rm inv}<B_{\rm bound}=0.19\) at \(95\%\) CL [113], assuming that the rates of Higgs boson production are equal to the SM predictions. On the other hand, several groups have carried out global analyses using recent Higgs boson measurements and obtained stringent limits [114; 115]. Finally, several studies have been devoted to the projected sensitivities of future collider experiments to Higgs invisible decays from HL-LHC [116], FCC-ee [117], ILC [118], CEPC [119] and FCC-hh [120]. In figure 5, we show the excluded values of \(Y_{\ell N}\) from present and future bounds on \({\rm BR}_{\rm inv}\) assuming \(M_{H^{\pm}}=500\) GeV and \(\lambda_{3}=4\)8. As we can see clearly, the present bounds are extremely weak, excluding only \(Y_{\ell N}\sim 6\) for \(M_{N_{R}}\sim 49\) GeV. The future experiments are expected to exclude smaller values of \(Y_{\ell N}\); _e.g._ FCC-hh can exclude values of \(Y_{\ell N}\) down to about \(0.7\) for \(M_{N_{R}}\sim 49\) GeV.
Footnote 8: This choice of the charged singlet scalar mass is consistent with the limits from searches of sleptons at the LHC. We note that increasing \(M_{H^{\pm}}\) would weaken the bounds on \(Y_{\ell N}\) from Higgs invisible decays.
\begin{table}
\begin{tabular}{c c c c c}
\hline Benchmark point & BP1 & BP2 & BP3 & BP4 \\ \hline
\multicolumn{5}{c}{_Parameters_} \\ \hline
\(M_{N_{R}}\) (GeV) & 50 & 200 & 598 & 1000 \\
\(M_{H^{\pm}}\) (GeV) & 500 & 500 & 600 & 1500 \\
\(Y_{eN}\) & \(10^{-4}\) & \(5\times 10^{-4}\) & \(10^{-3}\) & \(5\times 10^{-3}\) \\
\(Y_{\mu N}\) & 2.8 & 1.6 & 1 & 2 \\
\(Y_{\tau N}\) & \(5\times 10^{-2}\) & \(5\times 10^{-1}\) & \(5\times 10^{-1}\) & 2 \\
\(\lambda_{3}\) & 4 & 5 & 5 & 6 \\ \hline
\multicolumn{5}{c}{_Decays of \(H^{\pm}\)_} \\ \hline
\({\rm BR}(H^{\pm}\to eN_{R})\) & \(1.27\times 10^{-9}\) & \(8.89\times 10^{-8}\) & \(8.98\times 10^{-7}\) & \(3.12\times 10^{-6}\) \\
\({\rm BR}(H^{\pm}\to\mu N_{R})\) & \(99.96\times 10^{-2}\) & \(91.10\times 10^{-2}\) & \(89.70\times 10^{-2}\) & \(50.0\times 10^{-2}\) \\
\({\rm BR}(H^{\pm}\to\tau N_{R})\) & \(3.18\times 10^{-4}\) & \(8.89\times 10^{-2}\) & \(10.29\times 10^{-2}\) & \(49.99\times 10^{-2}\) \\
\(\Gamma_{H^{\pm}}\) (GeV) & 76.45 & 19.72 & \(5.88\times 10^{-4}\) & 73.68 \\
\(\Gamma_{H^{\pm}}/M_{H^{\pm}}\) & \(15.29\times 10^{-2}\) & \(3.94\times 10^{-2}\) & \(9.81\times 10^{-7}\) & \(4.91\times 10^{-2}\) \\ \hline
\multicolumn{5}{c}{\({\rm BR}(\ell_{\alpha}\to\ell_{\beta}\gamma)\) and \({\rm BR}(\ell_{\alpha}\to 3\ell_{\beta})\)} \\ \hline
\({\rm BR}(\mu\to e\gamma)\) & \(2.68\times 10^{-14}\) & \(1.51\times 10^{-13}\) & \(4.31\times 10^{-14}\) & \(1.89\times 10^{-13}\) \\
\({\rm BR}(\tau\to e\gamma)\) & \(1.52\times 10^{-18}\) & \(2.64\times 10^{-15}\) & \(1.92\times 10^{-15}\) & \(3.38\times 10^{-14}\) \\
\({\rm BR}(\tau\to\mu\gamma)\) & \(1.17\times 10^{-9}\) & \(2.64\times 10^{-8}\) & \(1.87\times 10^{-9}\) & \(5.28\times 10^{-9}\) \\ \hline
\({\rm BR}(\mu\to eee)\) & \(1.47\times 10^{-16}\) & \(8.21\times 10^{-16}\) & \(2.27\times 10^{-16}\) & \(1.01\times 10^{-15}\) \\
\({\rm BR}(\tau\to eee)\) & \(1.51\times 10^{-20}\) & \(2.58\times 10^{-17}\) & \(1.85\times 10^{-17}\) & \(3.29\times 10^{-16}\) \\
\({\rm BR}(\tau\to\mu\mu\mu)\) & \(1.21\times 10^{-8}\) & \(9.79\times 10^{-9}\) & \(2.63\times 10^{-12}\) & \(1.17\times 10^{-9}\) \\ \hline
\multicolumn{5}{c}{\({\rm BR}(H_{\rm SM}\to\ell_{\alpha}\ell_{\beta})\)} \\ \hline
\({\rm BR}(H_{\rm SM}\to\mu\tau)\) & \(2.31\times 10^{-8}\) & \(1.18\times 10^{-6}\) & \(2.22\times 10^{-7}\) & \(5.24\times 10^{-7}\) \\
\({\rm BR}(H_{\rm SM}\to e\tau)\) & \(2.95\times 10^{-17}\) & \(1.15\times 10^{-13}\) & \(2.22\times 10^{-13}\) & \(3.27\times 10^{-12}\) \\ \hline
\multicolumn{5}{c}{_Dark matter observables_} \\ \hline
\(\Omega_{N_{R}}h^{2}\) & \(9.84\times 10^{-2}\) & \(9.25\times 10^{-2}\) & \(2.11\times 10^{-3}\) & \(8.53\times 10^{-2}\) \\
\(\langle\sigma v\rangle\) (cm\({}^{2}\)) & \(2.40\times 10^{-9}\) & \(2.55\times 10^{-9}\) & \(7.32\times 10^{-8}\) & \(2.69\times 10^{-9}\) \\
\(\sigma^{p}_{\rm SI}\) (cm\({}^{2}\)) & \(1.60\times 10^{-47}\) & \(3.45\times 10^{-47}\) & \(2.28\times 10^{-48}\) & \(1.47\times 10^{-46}\) \\
\(\sigma^{p}_{\rm SD}\) (cm\({}^{2}\)) & \(6.51\times 10^{-62}\) & \(6.29\times 10^{-62}\) & \(1.98\times 10^{-65}\) & \(8.29\times 10^{-60}\) \\
XENON1T & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
PICO & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
DarkSide G2 & \(\checkmark\) & X & \(\checkmark\) & X \\
LZ & X & X & \(\checkmark\) & X \\
Neutrino floor & X & X & X & \\ \hline
\end{tabular}
\end{table}
Table 2: Characteristics of the four benchmark points in our model. Here, we show the values of the independent parameters, the decay branching ratios and total width of the charged singlet scalar, the CLFV decay branching ratios and dark-matter observables. A checkmark (\(\checkmark\)) indicates that the parameter point yields a smaller \(\sigma_{\rm SI}\) than the experimental bound (present or expected) while a cross mark (X) indicates that \(\sigma_{\rm SI}\) is above the experimental bound.
### Benchmark points
From the discussions in sections II and IV, we can conclude the following:
* The scalar singlet cannot be lighter than 440 GeV for mass splittings with the dark matter of order \(\geq 80\) GeV.
* CLFV can constrain only the product of the Yukawa-type couplings and not their individual values. Therefore, benchmark points have to be chosen.
* DM direct detection constraints are not very strong as expected since the spin-independent cross section is one-loop induced.
* The constraints from the consistency with the measurement of the DM relic density forbid large mass splittings if the Yukawa-type couplings are of order \(\mathcal{O}(1)\).
The benchmark points used in the discussion of the general features of DM production at Muon colliders are shown in Table 2. There are four of these benchmarks and each one has its own phenomenological implications.
BP1. This benchmark point is characterised by a relatively light DM (\(M_{N_{R}}=50\) GeV) and a charged singlet mass near the exclusion limit reported by the LHC (see figure 1). On the other hand, the Yukawa-type couplings are chosen such that \(Y_{\mu N}\gg Y_{\tau N}>Y_{eN}\). This choice leads to a charged singlet decaying predominantly into \(\mu N_{R}\) with a branching fraction approaching 100%. On the other hand, the charged lepton flavour violating decays of charged leptons are such that BR(\(\tau\to e\gamma\)) is well below the sensitivity reach of experiments in the foreseeable future. The other branching ratios are below the current experimental bounds but can be tested in the near future. For DM observables, the relic density for this BP is about 90% of the observed abundance and the spin-independent DM-nucleon cross section is below the Xenon1T bound and the expected DarkSide G2 bound, but can be excluded or discovered by LZ.
BP2. For this point, we choose \(M_{H^{\pm}}=500\) GeV and \(M_{N_{R}}=200\) GeV. The Yukawa-type couplings are chosen using the same hierarchy as BP1 but with relatively different values, _i.e._, \(Y_{\mu N}=1.6\), \(Y_{\tau N}=5\times 10^{-1}\) and \(Y_{eN}=5\times 10^{-4}\). This leads to the following branching ratios: BR(\(H^{\pm}\rightarrow\mu^{\pm}N_{R}\)) \(\simeq 91\%\), BR(\(H^{\pm}\rightarrow\tau^{\pm}N_{R}\)) \(\simeq 9\%\) and BR(\(H^{\pm}\to e^{\pm}N_{R}\)) \(\simeq 0\%\). The charged singlet is narrow in this case, as \(\Gamma_{H^{\pm}}/M_{H^{\pm}}\simeq 0.04\). The CLFV decays of charged leptons exhibit similar features as in BP1, with the exception that the BRs of \(\tau\rightarrow\mu\gamma\) and \(\mu\to e\gamma\) can be probed in future experiments, as they are slightly below the current bounds. The spin-independent DM-nucleon cross section can be tested by the DarkSide G2 experiment.
BP3. For this point, we choose the following values of the particle masses: \(M_{H^{\pm}}=600\) GeV and \(M_{N_{R}}=598\) GeV, and therefore a small mass splitting of 2 GeV. We choose the following values for the Yukawa-type couplings: \(\{Y_{\mu N},Y_{\tau N},Y_{eN}\}=\{1,0.5,10^{-3}\}\), which leads to the following branching fractions: BR(\(H^{\pm}\rightarrow\mu^{\pm}N_{R}\)) \(\simeq 90\%\), BR(\(H^{\pm}\rightarrow\tau^{\pm}N_{R}\)) \(\simeq 10\%\) and BR(\(H^{\pm}\to e^{\pm}N_{R}\)) \(\simeq 0\%\). On the other hand, the branching ratios of the CLFV decays \(\mu\to e\gamma\) and \(\tau\rightarrow\mu\gamma\) can be tested in future experiments. Since the mass splitting is equal to 2 GeV, the most active component in the calculation of the relic density comes from co-annihilation-based freeze-out and therefore the choice of \(\lambda_{3}\) is pivotal in this case. We found that for this BP, the relic density of the \(N_{R}\) is below 2% of the total observed DM relic density. Finally, this BP is beyond the sensitivity of the direct detection experiments considered, although the cross section remains above the neutrino floor.
BP4. Here, we choose a relatively heavy DM and charged singlet scalar; \(M_{N_{R}}=1000\) GeV and
Figure 5: The present and future exclusions on values of \(Y_{\ell N}\) for \(M_{H^{\pm}}=500\) GeV and \(\lambda_{3}=4\). Here we show the contours obtained from the LHC (navy), HL-LHC (turquoise), FCC-ee (magenta), ILC (orange), CEPC (dark red) and FCC-hh (gray). All the bounds were obtained assuming a SM Higgs boson mass of \(M_{H_{\rm SM}}=125\) GeV, and SM Higgs boson production rates. The Higgs diphoton rate is assumed to be equal to the SM prediction at LO.
\(M_{H^{\pm}}=1500\) GeV. The Yukawa-type couplings are chosen such that \(Y_{\mu N}=Y_{\tau N}=2\gg Y_{eN}=5\times 10^{-3}\). With this choice, one gets \(\text{BR}(H^{\pm}\to\mu^{\pm}N_{R})\simeq\text{BR}(H^{\pm}\to\tau^{\pm}N_{R})\simeq 50\%\) while \(\text{BR}(H^{\pm}\to e^{\pm}N_{R})\) is negligible. Similar features to BP1 and BP2 are observed for the CLFV and DM phenomenology.
## V Production of dark matter at muon colliders
### Total cross sections
In this section, we discuss the general features of DM production at muon colliders9. In this model, DM can be produced through a variety of processes:
Footnote 9: The cross sections for both DM and charged scalar production at muon colliders are computed at leading order using MadGraph_aMC@NLO [121] with a UFO model file [122] that can be found in the FeynRules model database [https://feynrules.impm.ucl.ac.be/wiki/ModelDatabaseMainPage](https://feynrules.impm.ucl.ac.be/wiki/ModelDatabaseMainPage).
* DM production in association with one SM particle, dubbed mono-X. Given the nature of the interaction Lagrangian and the fact that the initial state has a zero total electric charge, DM can only be produced in association with one neutral boson. Therefore, we have mono-\(\gamma\), mono-\(Z\) and mono-Higgs (a full analysis of these channels will be done in future work [123]).
* DM production in association with two SM particles. For this category, we have seven different processes. The rates of these processes are slightly smaller than those of the mono-X production channels. However, these processes have smaller backgrounds (a full analysis of these channels will be done in future work [124]).
* DM production in association with three SM particles. The rates of \(N_{R}N_{R}\) production in association with three SM particles are even smaller than in the other two categories. The signal-to-background optimisation for these channels is even more complicated, while the backgrounds, on the other hand, are extremely small.
In figure 6 we show the total cross sections for DM production in \(\mu\mu\) collisions as a function of the center-of-mass energy (\(\sqrt{s_{\mu\mu}}\)) for the four benchmark points defined in table 2. Starting with the mono-X processes, it is clear that the mono-\(\gamma\) channel has the highest rate, which varies from \(\simeq 1\) pb for \(\sqrt{s_{\mu\mu}}=3\) TeV to about 80 fb for \(\sqrt{s_{\mu\mu}}=30\) TeV in BP1. Mono-\(Z\) production has the second highest cross section, which varies between 200 fb for \(\sqrt{s_{\mu\mu}}=3\) TeV and about 2 fb for \(\sqrt{s_{\mu\mu}}=30\) TeV. Finally, mono-Higgs production has the lowest rates among all the mono-X processes, with a cross section approaching 63 fb for \(\sqrt{s_{\mu\mu}}=3\) TeV. The rates for mono-X decrease by about a factor of 10 for BP2, by a factor of 100 for BP3 and by a factor of 10 for BP4. Notice that the decrease in the production cross sections is not only due to the DM mass but also to the change in the value of \(Y_{\mu N}\), since the total rates are proportional to \(Y_{\mu N}^{4}\). An exception to this rule is the mono-Higgs production cross section, which decreases by factors of 6-200 since it scales as \(\lambda_{3}^{2}Y_{\mu N}^{4}\).
Footnote 10: Note that for mono-\(\gamma\), we have applied some generator-level cuts by requiring that \(p_{T}^{\gamma}>25\) GeV and \(|\eta^{\gamma}|<2.5\).
The rates of the production of DM in association with two SM particles are shown in figure 6. We can see that, as expected, they are suppressed compared to the mono-X channels. The process with the highest rate is \(N_{R}N_{R}\gamma\gamma\), whose cross section lies between 50 fb and 2 fb. This process is followed by \(N_{R}N_{R}\gamma Z\) and \(N_{R}N_{R}W^{+}W^{-}\), whose cross sections are slightly smaller. An interesting process is the production of DM in association with two SM Higgs bosons, whose cross section is about 1-3 fb depending on the center-of-mass energy. We note that the rates of these processes decrease as the DM mass increases, _e.g._ by a factor of 10 for BP2.
Finally, the production cross sections of DM in association with three SM particles are shown in figure 6 for BP1-BP4. It is clear that these rates are suppressed compared to those of DM production in association with one or two SM particles. The maximum is about 1 fb for \(N_{R}N_{R}W^{+}W^{-}\gamma\) and \(N_{R}N_{R}\gamma\gamma\gamma\) at \(\sqrt{s_{\mu\mu}}=3\) TeV. We note that the dependence on \(\sqrt{s_{\mu\mu}}\) of the cross sections for the production of \(N_{R}N_{R}\) in association with three SM particles is not as strong as in the case of the other processes. Despite the smallness of these cross sections, these processes may have a high sensitivity reach due to the smallness of the associated backgrounds.
### Expected event yields and dominant backgrounds
After discussing the total cross sections for all the possible production channels of dark matter at muon colliders, it is instructive to discuss both the total expected number of events for specific decay channels of the SM particles and the associated backgrounds. In this subsection, we focus on two categories of DM production channels: (_i_) DM production in association with one SM particle, where we consider four processes: \(N_{R}N_{R}\gamma\), \(N_{R}N_{R}Z(\to\ell\ell)\), \(N_{R}N_{R}Z(\to q\bar{q})\) and \(N_{R}N_{R}H_{\text{SM}}(\to b\bar{b})\), and (_ii_) DM production in association with two SM particles, where we consider five processes: \(N_{R}N_{R}\gamma\gamma\), \(N_{R}N_{R}\gamma Z(\to\ell\ell)\), \(N_{R}N_{R}Z(\to\ell\ell)Z(\to\ell\ell)\), \(N_{R}N_{R}V(\to q\bar{q})V(\to q\bar{q})\) and \(N_{R}N_{R}H_{\text{SM}}(\to b\bar{b})H_{\text{SM}}(\to b\bar{b})\). The results are shown in tables 3 and 4. The discussion will be restricted to the following center-of-mass energies and integrated luminosities
\[\sqrt{s_{\mu\mu}} = 3,\ 10,\ \text{and 30 TeV}\] \[\int\mathrm{d}t\mathcal{L} = 1,\ 10,\ \text{and 100 ab}^{-1}, \tag{32}\]
where we follow ref. [40], assuming that the integrated luminosity scales linearly with \(s_{\mu\mu}\), i.e. quadratically with the center-of-mass energy. The expected number of events for mono-X processes is calculated using the following equation
\[\mathcal{N}=\sigma_{N_{R}N_{R}X}\times\mathrm{BR}_{X\to x_{1}x_{2}} \times\int\mathrm{d}t\mathcal{L}. \tag{33}\]
For the production of DM in association with two SM particles, we have
\[\mathcal{N}=\sigma_{N_{R}N_{R}XY}\times\mathrm{BR}_{X\to x_{1}x_{2}} \times\mathrm{BR}_{Y\to y_{1}y_{2}}\times\int\mathrm{d}t\mathcal{L}. \tag{34}\]
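Equations (33) and (34) amount to a simple bookkeeping exercise; a minimal sketch is given below (using 1 ab\({}^{-1}=10^{3}\) fb\({}^{-1}\); the numerical example reuses an \(N_{R}N_{R}Z(\to\ell\ell)\) entry of Table 3, where the \(Z\to\ell\ell\) branching ratio is already folded into the quoted \(\sigma\times{\rm BR}\)).

```python
def expected_events(sigma_fb, luminosity_ab, *branching_ratios):
    """Expected event yield of equations (33)-(34):
    cross section [fb] x product of branching ratios x integrated luminosity [ab^-1]."""
    n = sigma_fb * 1.0e3 * luminosity_ab  # 1 ab^-1 = 10^3 fb^-1
    for br in branching_ratios:
        n *= br
    return n

# sigma x BR = 16.8 fb at sqrt(s) = 3 TeV with 1 ab^-1 gives ~1.7e4 events
print(expected_events(16.8, 1.0))
```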
Figure 6: Production cross section of \(N_{R}N_{R}+X\) as a function of the center-of-mass energy (\(\sqrt{s_{\mu\mu}}\)) for the benchmark points BP1 (upper left panel), BP2 (upper right panel), BP3 (lower left panel) and BP4 (lower right panel). For each panel, we show the production cross section for \(N_{R}N_{R}\) plus one SM particle, plus two SM particles and in association with three SM particles.
#### iv.2.1 \(\mu^{+}\mu^{-}\to N_{R}N_{R}+X\)
\(N_{R}N_{R}\gamma\). This process leads to a final state comprising a highly energetic photon and large missing transverse energy (\(E_{T}^{\rm miss}\)). In addition, one could have a few additional charged leptons or photons that are emitted as radiation from either the initial-state muons or the final-state photon. The dominant backgrounds for this signal process are the production of two or four neutrinos in association with a photon. The production of two neutrinos proceeds via muon-muon annihilation, \(\mu^{+}\mu^{-}\to Z(\to\nu\bar{\nu})\gamma\), and VBF, \(VV\to Z(\to\nu\bar{\nu})\gamma\), with cross sections varying from 2.98 pb for \(\sqrt{s_{\mu\mu}}=3\) TeV to 3.27 pb for \(\sqrt{s_{\mu\mu}}=30\) TeV. The production of four neutrinos in association with a hard photon has an extremely small cross section, with the maximum being 1.5 fb for \(\sqrt{s_{\mu\mu}}=30\) TeV. It is worth noting from table 3 that the signal significance can easily reach 5. For the other benchmark points, a more detailed selection is required to reach a signal significance of 5 if one can achieve an acceptance times efficiency (\(A\times\epsilon\)) of about 15% for the signal in the signal region while the background has \(A\times\epsilon\) of about \(\mathcal{O}(10^{-3})\).
Footnote 11: The signal significance is defined as \(\mathcal{S}/\sqrt{\mathcal{B}}\) with \(\mathcal{S}\) is the number of signal events and \(\mathcal{B}\) is the number of background events.
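The acceptance-times-efficiency argument above can be made explicit with a back-of-the-envelope sketch; all numbers below are illustrative and not the result of a detector-level selection.

```python
import math

def significance(n_signal, n_background, acc_eff_signal=1.0, acc_eff_background=1.0):
    """Naive signal significance S/sqrt(B) after applying acceptance x efficiency factors."""
    s = n_signal * acc_eff_signal
    b = n_background * acc_eff_background
    return s / math.sqrt(b) if b > 0 else float("inf")

# Illustrative: ~1.2e3 signal events against a ~3 pb x 1 ab^-1 = 3e6 event background,
# with A x eps of 15% for the signal and O(10^-3) for the background
print(significance(1.2e3, 3.0e6, 0.15, 1.0e-3))
```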
\(N_{R}N_{R}Z(\to\ell\ell)\). This process leads to a very clean final state containing two same-flavour opposite-sign (SFOS) charged leptons from the decay of the \(Z\)-boson in association with large missing energy. The dominant backgrounds are found to be the production of two \(Z\)-bosons, with one decaying to two charged leptons and the other decaying invisibly. We note that there is another background originating from the production of two \(W\)-bosons, both decaying leptonically, which can be significantly reduced by requiring two SFOS leptons whose invariant mass is close to the \(Z\)-boson mass. The cross section for \(ZZ\) production varies from 0.4 fb to 26 fb for muon-muon annihilation (decreasing as the center-of-mass energy increases) and from 56 fb to about 430 fb in the VBF channel (increasing with the center-of-mass energy). On the other hand, the cross section for \(WW\) production is larger and varies between 8.5 fb and 466 fb in the muon annihilation channel and between 150 fb and 858 fb in the VBF channels. The expected number of events for this signal process is about \(\mathcal{O}(10^{3}\)-\(10^{4})\). Given the differences in the topology of the signal and backgrounds, it is easy
\begin{table}
\begin{tabular}{l c c c c} \multicolumn{5}{c}{\(\sigma\times\text{BR [fb] (number of events)}\)} & Dominant backgrounds \\ \hline \hline \(\sqrt{s_{\mu\mu}}\) [TeV] & 3 & 10 & 30 & \\ \hline \hline \(N_{R}N_{R}\gamma\) & \(1.11\times 10^{3}\)\((1.11\times 10^{6})\) & \(1.80\times 10^{2}\)\((1.80\times 10^{6})\) & \(2.65\times 10^{1}\)\((2.65\times 10^{6})\) & \\ \(N_{R}N_{R}\gamma\) & \(1.13\times 10^{2}\)\((1.13\times 10^{5})\) & \(1.88\times 10^{1}\)\((1.88\times 10^{5})\) & \(2.83\times 10^{9}\)\((2.83\times 10^{5})\) & \(\nu\bar{\nu}+\gamma,2\nu\bar{\nu}+\gamma\) \\ \(N_{R}N_{R}\gamma\) & \(1.18\times 10^{1}\)\((1.18\times 10^{3})\) & \(2.65\times 10^{9}\)\((2.65\times 10^{4})\) & \(0.41\times 10^{9}\)\((4.10\times 10^{4})\) & \\ \(3.92\times 10^{4}\)\((3.95\times 10^{4})\) & \(3.20\times 10^{1}\)\((3.20\times 10^{5})\) & \(5.94\times 10^{9}\)\((5.94\times 10^{5})\) & \\ \hline \(N_{R}N_{R}Z(\to\ell\ell)\) & \(1.68\times 10^{1}\)\((1.68\times 10^{4})\) & \(4.44\times 10^{9}\)\((4.44\times 10^{4})\) & \(9.91\times 10^{9}\)\((9.10\times 10^{4})\) & \\ \(N_{R}N_{R}Z(\to\ell\ell)\) & \(1.62\times 10^{9}\)\((1.62\times 10^{3})\) & \(0.46\times 10^{9}\)\((4.58\times 10^{3})\) & \(9.39\times 10^{2}\)\((9.39\times 10^{3})\) & \(\gamma/Z(\to\ell\ell)+\nu\bar{\nu}\) \\ \(N_{R}N_{R}Z(\to\ell\ell)\) & \(0.13\times 10^{9}\)\((0.13\times 10^{3})\) & \(0.58\times 10^{-1}\)\((0.58\times 10^{3})\) & \(1.30\times 10^{-2}\)\((1.30\times 10^{3})\) & \(W(\to\ell\nu_{l})W(\to\ell\nu_{l})\) \\ \(0.28\times 10^{9}\)\((0.28\times 10^{3})\) & \(0.61\times 10^{9}\)\((0.61\times 10^{4})\) & \(0.17\times 10^{9}\)\((1.70\times 10^{4})\) & \\ \hline \(N_{R}N_{R}Z(\to q\bar{q})\) & \(1.59\times 10^{2}\)\((1.59\times 10^{5})\) & \(4.20\times 10^{1}\)\((4.20\times 10^{5})\) & \(8.61\times 10^{9}\)\((8.61\times 10^{5})\) & \\ \(N_{R}N_{R}N_{R}Z(\to q\bar{q})\) & \(1.53\times 10^{1}\)\((1.53\times 10^{4})\) & \(4.33\times 10^{0}\)\((4.33\times 10^{4})\) & \(0.89\times 10^{9}\)\((8.89\times 10^{4})\) & \(\gamma/Z(\to q\bar{q})+\nu\bar{\nu},H_{\rm SM}(\to b\bar{b})+\nu\bar{\nu}\) \\ \(1.26\times 10^{6}\)\((1.26\times 10^{3})\) & \(0.55\times 10^{9}\)\((5.54\times 10^{3})\) & \(0.12\times 10^{9}\)\((1.23\times 10^{4})\) & \(W(\to\ell\nu_{l})W(\to q\bar{q}),t\bar{t}\) \\ \(2.67\times 10^{9}\)\((2.67\times 10^{3})\) & \(5.73\times 10^{9}\)\((5.73\times 10^{4})\) & \(1.57\times 10^{9}\)\((1.57\times 10^{5})\) & \\ \hline \(N_{R}N_{R}N_{R}H_{\rm SM}(\to b\bar{b})\) & \(2.05\times 10^{1}\)\((2.05\times 10^{4})\) & \(1.02\times 10^{0}\)\((1.02\times 10^{4})\) & \(3.67\times 10^{-2}\)\((3.67\times 10^{3})\) & \\ \(N_{R}N_{R}N_{R}H_{\rm SM}(\to b\bar{b})\) & \(5.83\times 10^{9}\)\((5.83\times 10^{9}\)\((5.83\times 10^{9})\)\((3.01\times 10^{4})\) & \(1.12\times 10^{-2}\)\((1.12\times 10^{3})\) & \(H_{\rm SM}(\to b\bar{b})Z(\to\nu\bar{\nu}),H_{\rm SM}\nu_{\mu}\bar{\nu}_{\mu}\) \\ \(0.47\times 10^{9}\)\((0.47\times 10^{3})\) & \(0.47\times 10^{-1}\)\((0.47\times 10^{3})\) & \(1.81\times 10^{-3}\)\((1.81\times 10^{2})\)\(t\bar{t},Z(\to\nu\bar{\nu})Z(\to q\bar{q})\) \\ \(0.11\times 10^{9}\)\((0.11\times 10^{3})\) & \(0.21\times 10^{9}\)\((0.21\times 10^{4})\) & \(1.47\times 10^{-2}\)\((1.47\times 10^{3})\) & \\ \hline \hline \end{tabular}
\end{table}
Table 3: The total cross sections times the branching ratio (\(\sigma\times\text{BR}\)) and the expected number of signal events for \(N_{R}N_{R}\) production in association with \(\gamma\), \(Z(\to\ell\ell)\), \(Z(\to q\bar{q})\), and \(H_{\rm SM}(\to b\bar{b})\). We consider three center-of-mass energies, \(\sqrt{s_{\mu\mu}}=3,10\) and 30 TeV, with the corresponding integrated luminosities of equation (32).
to achieve a significance of \(5\sigma\) by suitable event selection.
\(N_{R}N_{R}Z(\to q\bar{q})\) and \(N_{R}N_{R}H_{\rm SM}(\to b\bar{b})\). This category of channels involves two hadronic jets in association with missing energy. For the \(Z\)-boson, the main decay channel is into \(q\bar{q};q=u,d,s,c,b\) with \({\rm BR}(Z\to q\bar{q})=69.911\%\)[125]. For the SM Higgs boson, the main decay is into \(b\bar{b}\) with \({\rm BR}(H_{\rm SM}\to b\bar{b})=57\%\). The dominant backgrounds to these signal processes come from \(q\bar{q}\) production in association with two neutrinos, SM Higgs boson production, \(t\bar{t}\) production with one top quark decaying leptonically and the other decaying hadronically, and \(WW\) production where one \(W\)-boson decays leptonically and the other decays hadronically. In the last two backgrounds, the charged lepton needs to escape the detection volume. Since both the hadronically decaying \(Z\)- and Higgs-bosons are accompanied by very large missing energy, their decays are not always resolved as two well-separated jets but rather as a single fat jet with specific characteristics. We expect decent statistics for these channels, and the backgrounds are under control with a suitable selection.
#### iv.2.2 \(\mu^{+}\mu^{-}\to N_{R}N_{R}+XY\)
\(N_{R}N_{R}\gamma\gamma\). In this model, there is a possibility to produce DM pairs in association with two hard photons. The expected final state consists of two hard photons in addition to large missing energy. Contrary to the mono-\(\gamma\) channel, this process does not suffer from large backgrounds; the main backgrounds are the production of two photons in association with two neutrinos, both non-resonant and resonant (through the decay of the SM Higgs boson). The resonant backgrounds can be easily suppressed via suitable requirements on the invariant mass of the diphoton system, _i.e._ removing events whose diphoton invariant mass lies within the SM Higgs mass window. For all the benchmark points we expect decent statistics for the signal events, _i.e._ of about \(\mathcal{O}(10^{3}\)-\(10^{5})\). This implies that this process would be one of the golden modes to probe DM at muon colliders, which will be studied in detail in a future work.
\(N_{R}N_{R}\gamma Z(\to\ell\ell)\). This is also one of the unique processes to probe DM at muon colliders. The final state consists of one hard photon, two charged leptons and large missing transverse energy. The associated background is manageable since it consists of the
production of one photon and one or two gauge bosons. The expected number of signal events is quite large as well, _i.e._ of about \(\mathcal{O}(10^{2}\)-\(10^{4})\).
\(N_{R}N_{R}Z(\rightarrow\ell\ell)Z(\rightarrow\ell\ell)\). This is one of the cleanest final states that can be used to probe DM at muon colliders. The signature consists of four charged leptons in association with missing energy. The corresponding rate is even smaller than for the other signal processes. We note that sufficient statistics can only be achieved at \(\sqrt{s_{\mu\mu}}=30\) TeV, where we expect about \(\mathcal{O}(10^{2}\)-\(10^{4})\) events. The major backgrounds arise from the production of three gauge bosons or from the production of the SM Higgs boson decaying into \(VV^{*},V=W,Z\) in association with one or two gauge bosons.
Figure 7: Production cross section of \(H^{\pm}H^{\mp}+X\) as a function of the center-of-mass energy (\(\sqrt{s_{\mu\mu}}\)) for the benchmark points BP1 (left upper panel), BP2 (right upper panel), BP3 (left lower panel) and BP4 (right lower panel). For each panel, we show the production cross section for \(H^{\pm}H^{\mp}\) plus one SM particle, plus two SM particles and in association with three SM particles.
\(N_{R}N_{R}V(\to q\bar{q})V(\to q\bar{q})\) and \(N_{R}N_{R}H_{\rm SM}(\to b\bar{b})H_{\rm SM}(\to b\bar{b})\). The production of two gauge bosons or two SM Higgs bosons in association with DM pairs leads to purely hadronic final states (either four resolved jets or two fat jets) in association with large missing energy. The dominant backgrounds for these signal processes consist of the production of two SM neutrinos in association with two gauge bosons, two SM Higgs bosons, or one Higgs boson and one gauge boson decaying hadronically. This process will be studied in great detail in a future work.
## VI Production of charged scalars at muon colliders
In this section, we discuss the production of charged scalar pairs at muon colliders. Similarly to the production of DM, charged scalars can be produced either in association with one SM particle, with two SM particles or with three SM particles. In addition, we could have the production of charged scalar pairs together with non-SM particles (\(H^{\pm}H^{\pm}\)) or the production of four charged scalars. An interesting feature of the production of charged scalars is the appearance of at least two charged leptons in association with missing energy, in addition to the decay products of the SM particles. For example, the production of charged scalar pairs in association with a SM Higgs boson would lead to two hard charged leptons, missing energy and two \(b\)-tagged jets (or one fat jet). On the other hand, charged scalar production receives contributions from VBF thanks to the couplings of the charged scalars to \(\gamma/Z\). The results for the production cross sections of the different processes involving charged scalars as a function of the center-of-mass energy are shown in figure 7. Below, we list the possible production channels for the charged scalars:
\(\mu\mu\to H^{\pm}H^{\mp}/H^{\pm}H^{\mp}H^{\pm}H^{\mp}\). These processes lead to signatures of either two charged leptons and MET or four charged leptons and MET. Charged scalar pair production proceeds either through \(s\)-channel diagrams with the exchange of \(\gamma/Z\)-bosons or through a \(t\)-channel diagram with the exchange of the Majorana DM. The cross section for charged scalar pair production ranges from about \(10^{4}\) fb to about \(10^{1}\) fb. It is worth noting that the benchmark point BP3 has the smallest cross section due to the tiny mass splitting of about 2 GeV between the charged scalar and the DM candidate. In all cases, the number of events for this process is quite large. The cross section for charged scalar pair production has a \(1/\sqrt{s_{\mu\mu}}\) scaling. The cross section for the production of four charged scalars is smaller, as expected, due to phase-space suppression. It is however quite decent, as can be seen in fig. 7, and ranges from \(10^{-2}\) fb to \(10^{2}\) fb depending on the benchmark scenario. The most notable signatures are 4 muons plus MET (BP1, BP2, BP3) and 2 muons and 2 tau leptons (BP4). These two channels will be studied in great detail in a future work [126].
\(\mu\mu\to H^{\pm}H^{\mp}+X\). In this case, we have three production channels: \(H^{+}H^{-}\gamma\), \(H^{+}H^{-}Z\) and \(H^{+}H^{-}H_{\rm SM}\). There are three contributions to \(H^{+}H^{-}\gamma\): \(s\)-channel contributions through \(\gamma/Z\) with the photon being emitted from the \(H^{+}H^{-}\) vertex, and a \(t\)-channel contribution through the exchange of \(N_{R}\). The final-state signature for this process consists of two charged leptons in association with one hard photon (the kinematics is quite different from that of \(N_{R}N_{R}\gamma Z\) production). We can see in fig. 7 that the cross section ranges from \(10^{1}\) fb to \(10^{3}\) fb depending on the center-of-mass energy and the benchmark point. Secondly, we can have the production of charged scalar pairs in association with one \(Z\)-boson, which leads to very rich signatures: 2\(\ell\) + MET, 2\(\ell\) + 2 jets + MET or 4\(\ell\) + MET. The cross sections for these processes are shown in fig. 7, where it is clear that the rates are sizeable, from \(10^{0}\) to \(10^{2}\) fb. Finally, charged scalar pairs can be produced in association with a SM Higgs boson. The rates for this interesting channel are also sizeable and range between \(10^{0}\) fb and \(10^{2}\) fb.
\(\mu\mu\to H^{\pm}H^{\mp}+XY\). For this category we have seven production channels: \(H^{+}H^{-}\gamma\gamma\), \(H^{+}H^{-}\gamma Z\), \(H^{+}H^{-}ZZ\), \(H^{+}H^{-}W^{+}W^{-}\), \(H^{+}H^{-}H_{\rm SM}H_{\rm SM}\), \(H^{+}H^{-}H_{\rm SM}Z\) and \(H^{+}H^{-}t\bar{t}\). The rates for these channels are smaller but still noticeable, _i.e._ from \(10^{-2}\) fb to \(10^{2}\) fb depending on the center-of-mass energy and the benchmark point. It is worth noting that the production of charged scalar pairs in association with two SM particles leads to even richer signatures with very small backgrounds, _i.e._ 6 leptons plus MET, 4 leptons plus 4 jets plus MET, and so on.
\(\mu\mu\to H^{\pm}H^{\mp}+XYZ\). This is the most complicated category, comprising 16 processes with many more final-state signatures. The rates for these processes are much smaller, with the maximum being about 3 fb for \(H^{+}H^{-}\gamma H_{\rm SM}H_{\rm SM}\) and \(H^{+}H^{-}\gamma\gamma H_{\rm SM}\) at \(\sqrt{s_{\mu\mu}}=3\) TeV.
We close this section with a brief discussion of the contribution of VBF to the production of charged scalars in this model. As mentioned earlier, the charged scalar couples to the photon and the \(Z\)-boson and therefore may receive pure gauge VBF contributions to the total production cross section. In this model, we can have the production of charged scalars through the \(\gamma\gamma H^{+}H^{-}\), \(\gamma ZH^{+}H^{-}\), \(ZZH^{+}H^{-}\), \(ZZ\to H_{\rm SM}\to H^{+}H^{-}\) and \(W^{+}W^{-}\to\gamma/Z\to H^{+}H^{-}\) vertices. We take the examples of the production of \(H^{+}H^{-}\), \(H^{+}H^{-}\gamma\) and \(H^{+}H^{-}H_{\rm SM}\) and show the corresponding results for the four benchmark points in fig. 8. We can see that the cross sections increase with the center-of-mass energy but do not go above 2 fb for \(H^{+}H^{-}\) in BP2. Therefore, the muon annihilation channels are the most important in our model, thanks to the \(Y_{\mu N}^{4}\) dependence of the cross section.
## VII Conclusions
In this work we have studied the production of DM and charged scalars at high-energy muon colliders within the minimal lepton portal DM model. The model consists of extending the SM with two \(SU(2)_{L}\) gauge singlets: a charged singlet scalar and a neutral right-handed fermion (or equivalently a Majorana fermion). We first discussed in detail the phenomenology of the model at the LHC and the corresponding constraints from direct detection, the relic density measurement, and lepton flavour violating decays of charged leptons and of the SM Higgs boson. We then selected a few benchmark points that define phenomenologically viable scenarios and can be tested at future muon colliders. For these benchmark points, we calculated the cross sections for the production of DM in association with SM particles and of the charged scalars of the model in association with SM particles as a function of the center-of-mass energy. For DM production in association with SM particles, we studied the total rates of 26 possible channels for the benchmark points considered in this study. Furthermore, we studied the total number of events and the associated backgrounds for 9 prominent channels and found that they are very important for the discovery of DM at muon colliders for masses up to \(\sim 1\) TeV. We furthermore analysed charged scalar production in association with SM particles (about 28 channels). The potential for discovering DM through charged scalar production at muon colliders is as interesting as for direct DM production. Further investigations of this model at muon colliders are ongoing, where a full signal-to-background optimisation will be carried out for a number of selected channels.
## Acknowledgements
The work of AJ is supported by the Institute for Basic Science (IBS) under the project code, IBS-R018-D1.
|
2303.07036 | Knot-Quiver correspondence for double twist knots | We obtain a quiver representation for a family of knots called double twist
knots $K(p,-m)$. Particularly, we exploit the reverse engineering of
Melvin-Morton-Rozansky(MMR) formalism to deduce the pattern of the charge
matrix for these quivers. | Vivek Kumar Singh, Sachin Chauhan, Aditya Dwivedi, P. Ramadevi, B. P. Mandal, Siddharth Dwivedi | 2023-03-13T11:54:23Z | http://arxiv.org/abs/2303.07036v3 | # Knot-Quiver correspondence for double twist knots
###### Abstract
We obtain a quiver representation for a family of knots called double twist knots \(K(p,-m)\). Particularly, we exploit the reverse engineering of Melvin-Morton-Rozansky(MMR) formalism to deduce the pattern of the charge matrix for these quivers.
## 1 Introduction
Knot-quiver correspondence (KQC) conjectured by Kucharski-Reineke-Stosic-Sulkowski [1] provides a new encoding of HOMFLY-PT invariants of knots in terms of the representation theory of quivers. Such a correspondence was motivated by studying the supersymmetric quiver quantum mechanics description of BPS states in brane systems describing knots [2].
Quivers are directed graphs with a finite number of vertices connected by oriented edges. According to the conjecture, at least one quiver graph \(Q_{K}\) is associated to every knot \(K\). In particular, by rewriting the generating series of colored HOMFLY-PT polynomials (in variables \(A\) and \(q\)) for a knot \(K\) as a motivic generating series involving Donaldson-Thomas (DT) invariants, we can extract the information of the oriented edges (_adjacency matrix or charge matrix_) of the quiver \(Q_{K}\). The procedure to obtain such a motivic series for an arbitrary knot is still an open question.
For the class of torus knots \((2,2p+1)\), twist knots and knots up to 7 crossings, quiver presentations were obtained [1, 3]. Except for the unknot and the trefoil, knots have more than one quiver presentation with the same number of nodes, indicating that the correspondence of knots to quivers is not unique. In Ref. [4], equivalent quivers with the same number of nodes were shown as vertices of a permutohedron graph, giving a systematic enumeration of such equivalent quivers. There are also quivers with different numbers of nodes which describe the same physics, i.e., a pool of dualities in 3d \(\mathcal{N}=2\) theory [5].
The physical as well as the geometrical interpretation of the conjectural KQC was addressed [6] within the framework of Ooguri-Vafa large \(N\) duality. From the physics perspective, the motivic generating series of \(Q_{K}\) matches the vortex partition function of the 3d \(\mathcal{N}=2\) theory \(T[Q_{K}]\). On the geometrical side [6], the spectrum of holomorphic curves with boundary on the conormal Lagrangian \(L_{K}\) of the knot in the resolved conifold encodes the quiver data. That is, the basic holomorphic disks correspond to the nodes of the quiver \(Q_{K}\), and the linking of their boundaries to the quiver arrows.
With the double fat diagram description for arborescent knots, the \(r\)-colored HOMFLY-PT invariant \(P_{r}^{\mathcal{K}}(A=q^{N},q)\) (here \(r\) refers to the symmetric color, i.e. a Young diagram with a single row of \(r\) boxes) can be obtained for every \(r\). Since our goal is to conjecture the charge matrix form for \(K(p,-m)\), we focus on rewriting the \(r\)-colored Jones polynomial (\(A=q^{N=2}\)):
\[J_{r}(K(p,-m),q)\equiv P_{r}^{K(p,-m)}(A=q^{2},q)\,\]
as a motivic series. In particular, we obtain the quiver charge matrix \(C^{K(p,-m)}_{i,j}\) associated with \(K(p,-m)\) for \(m\leq 3\). We conjecture that the charge matrix \(C^{K(m,-m)}_{i,j}\) is sufficient to recursively generate the charge matrices for all the double twist knots \(K(p\neq m,-m)\).
We follow the route of reverse engineering of MMR expansion [13] to derive the motivic series form for \(P_{r}^{K(p,-m)}(A=q^{N},q)\Big{|}_{N=2}\). We will now briefly review the reverse engineering formalism, which will set the notation and procedure we follow for \(K(p,-m)\) in the next section.
### Reverse Engineering of Melvin-Morton-Rozansky (MMR) expansion
The Melvin-Morton-Rozansky (MMR) expansion states that the symmetric \(r\)-colored HOMFLY-PT invariant has the following semiclassical expansion:
\[\lim_{\hbar\to 0,\,r\to\infty}P_{r}^{\mathcal{K}}(A,q=e^{\hbar})\simeq\frac{1}{\Delta^{\mathcal{K}}(x)^{N-1}}+\sum_{k=1}^{\infty}\frac{R_{k}^{\mathcal{K}}(x,N)}{\Delta^{\mathcal{K}}(x)^{N+2k-1}}\,\hbar^{k}, \tag{1}\]
with the leading term being the Alexander polynomial \(\Delta(x)\); the variable \(x\) is related to the color \(r\) by \(x=q^{r}=\text{const}\). The reverse approach is to obtain \(P_{r}^{\mathcal{K}}(A,q)\) starting from the Alexander polynomial \(\Delta(x)\)[13]. This approach has obstacles in lifting the \(\hbar\)-expansion to the \(q\)-dependent \(P_{r}^{\mathcal{K}}(A,q)\), but these can be overcome in some situations by comparing with the data of symmetric \(r\)-colored HOMFLY-PT polynomials known for \(r=1,2,3\). We briefly highlight the steps involved in the reverse engineering formalism of the MMR expansion [13]:
1. We rewrite the Alexander polynomial in new variable \(X=\frac{(1-x)^{2}}{x}\). Thus, Alexander polynomial takes the following form: \[\Delta(x)=1-\sum_{i=1}^{s}a_{i}\frac{(1-x)^{2i}}{x^{i}}\equiv 1-g(X)\]
2. Now we use the following inverse binomial theorem \[\frac{1}{(1-u)^{n}}=\sum_{m=0}^{\infty}\binom{n+m-1}{m}u^{m}\] to write the first term of MMR expansion (1) as follows: \[\frac{1}{\Delta(x)^{N-1}} = \sum_{m=0}^{\infty}\binom{N+m-2}{m}g(X)^{m}=\sum_{k=0}^{\infty}c_ {k}X^{k}\] (2) \[= \sum_{k=0}^{\infty}(k!c_{k})\frac{X^{k}}{k!}\]
3. We make the following quantum deformation to get the quantum-deformed polynomial: \[\frac{X^{k}}{k!}=\frac{(1-x)^{2k}}{k!x^{k}}\rightsquigarrow\begin{bmatrix}r\\ k\end{bmatrix}_{q}q^{-rk}(-Aq^{r}t^{3};q)_{k}.\] Here the factor in ordinary parentheses denotes the \(q\)-Pochhammer symbol and the square-bracket term is the \(q\)-binomial, defined as: \[\begin{bmatrix}r\\ k\end{bmatrix}_{q}=\frac{(q;q)_{r}}{(q;q)_{k}(q;q)_{r-k}}\,,\qquad(x;q)_{k}=\prod_{i=0}^{k-1}(1-xq^{i}).\]
4. Further, the coefficient \(k!c_{k}\xrightarrow{\text{q-deformation}}\tilde{c}_{k}^{\mathcal{K}}\) depends on the knot \(\mathcal{K}\) and must be written in terms of \(q\)-Pochhammer symbols, \(q\)-binomials, and \((q,A)\)-dependent powers so that \[P_{r}^{\mathcal{K}}(A,q)=\sum_{k=0}^{r}\begin{bmatrix}r\\ k\end{bmatrix}q^{-rk}(-Aq^{r}t^{3};q)_{k}\tilde{c}_{k}^{\mathcal{K}}\] can be transformed into the following form to deduce the corresponding quiver \(Q_{\mathcal{K}}\): \[P_{r}^{\mathcal{K}}(A,q)=\sum_{d_{1},d_{2},\ldots,d_{m}}(-1)^{\sum_{i}\gamma_{i}d_{i}}\frac{q^{\sum_{i,j}C_{i,j}^{\mathcal{K}}d_{i}d_{j}}(q^{2};q^{2})_{r}}{\prod_{i=1}^{m}(q^{2};q^{2})_{d_{i}}}q^{\sum_{i}\alpha_{i}d_{i}}A^{\sum_{i}\beta_{i}d_{i}}.\] (3) Here \(C_{i,j}^{\mathcal{K}}\) is the quiver charge matrix and the variables \(\alpha_{i},\ \beta_{i}\) and \(\gamma_{i}\) are real parameters. The sets \(\{d_{i}\}\) must obey \(r=d_{1}+d_{2}+\ldots+d_{m}\). Even though such a transformation is motivated by comparing the Ooguri-Vafa partition function [14] with the motivic generating series [15; 16; 17], it is still a hard problem to obtain \(\tilde{c}_{k}^{\mathcal{K}}\) for any knot.
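To make the \(q\)-objects entering steps 3 and 4 concrete, the following short sketch implements the finite \(q\)-Pochhammer symbol and the \(q\)-binomial and checks two of their elementary properties. The use of Python/sympy is our own assumption of tooling, not part of the original computation.

```python
import sympy as sp

q = sp.symbols('q')

def qpochhammer(x, q, k):
    """Finite q-Pochhammer symbol (x; q)_k = prod_{i=0}^{k-1} (1 - x q^i)."""
    return sp.prod([1 - x * q**i for i in range(k)])

def qbinomial(r, k, q):
    """q-binomial [r, k]_q = (q; q)_r / ((q; q)_k (q; q)_{r-k})."""
    num = qpochhammer(q, q, r)
    den = qpochhammer(q, q, k) * qpochhammer(q, q, r - k)
    return sp.expand(sp.cancel(num / den))

# Elementary sanity checks: each q-binomial is a polynomial in q and
# reduces to the ordinary binomial coefficient at q = 1.
for r in range(1, 5):
    for k in range(r + 1):
        qb = qbinomial(r, k, q)
        assert qb.subs(q, 1) == sp.binomial(r, k)

print(qbinomial(4, 2, q))  # 1 + q + 2*q**2 + q**3 + q**4
```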
Note that \(C_{i,j}^{\mathcal{K}}\) appears in the quadratic power of \(q\) and is independent of \(A=q^{N}\). Hence, we will work with the colored Jones polynomials \(J_{r}(\mathcal{K},q)\) of the knot \(\mathcal{K}\) to extract the quiver charge matrix using the reverse engineering techniques of the MMR formalism, replacing \(A\to q^{2}\) in eqn. (3).1
Footnote 1: Theorem 1.1, in Ref. [3] indicates the colored Jones polynomials of rational links also admit generating functions in quiver form.
The plan of the paper is as follows: In section 2.1, we briefly discuss the colored Jones polynomials of double twist knot \(K(p,-m)\) obtained from the reverse engineering techniques of MMR expansion. In section 3, we conjecture \(C_{i,j}^{\mathcal{K}}\) for \(\mathcal{K}=K(p,-m)\) and validate it for some double twist knots. We conclude in section 4 presenting various open questions and future directions.
## 2 Double twist knots
The double twist knots \(K(p,-m)\) are generated by two positive twist parameters \(p\) and \(m\), as shown in Fig. 1. Table 1 lists some of the double twist knots:
As these double twist knots belong to the arborescent family, the symmetric \(r\)-colored HOMFLY-PT polynomials can be obtained for every \(r\) from Chern-Simons theory [18, 19]. In fact, the colored HOMFLY-PT for arbitrary \(r\) in closed form is given in Ref. [11]. Hence our aim is not to reconstruct the \(r\)-colored HOMFLY-PT for double twist knots.
We will now use the reverse engineering of the MMR formalism (1) to rewrite the \(r\)-colored Jones polynomial as a motivic series and extract the charge matrix of the quiver \(Q_{K(p,-m)}\).
### Colored HOMFLY-PT polynomials for class of Double twist knots \(K(p,-m)\)
For given positive integers \(p,m\), the Alexander polynomial of the double twist knot \(K(p,-m)\) takes the form
\[\Delta^{K(p,-m)}(x)=1-(p\ m)X. \tag{2}\]
Here \(X=\frac{(1-x)^{2}}{x}\). Such a linear expression appeared in many knots [13] suggesting the inverse
\begin{table}
\begin{tabular}{|c|c|} \hline \(K(p,-m)\) & Knots \\ \hline \(K(p,-1)\) & Twist Knots \\ \hline K(2,-2) & \(8_{3}\) \\ \hline K(3,-2) & \(10_{3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Examples of double twist knots \(K(p,-m)\)
Figure 1: Double twist knots (\(m,p\) denote number of full-twists)
binomial expansion to take the following form:
\[\frac{1}{\Delta^{K(p,-m)}(x)^{N-1}}=\sum_{0\leq k_{1}\leq\ldots\leq k_{2mp}}^{ \infty}\binom{N+k_{2mp}-2}{k_{2mp}}\prod_{i=1}^{2mp-1}\binom{k_{i+1}}{k_{i}} \frac{(1-x)^{2k_{2mp}}}{x^{k_{2mp}}}. \tag{2.2}\]
Further, using the quantum deformation procedure discussed in [13] and taking \(A\to q^{2}\) (\(N=2\)) in eqn. (2.2), the colored Jones polynomial can be written as
\[J_{r}(K(p,-m),q)=\sum_{d_{1}+d_{2}+\ldots+d_{4mp+1}=r}(-1)^{\sum_{i}\gamma_{i}d_{i}}\frac{q^{\sum_{i,j}C_{i,j}^{K(p,-m)}d_{i}d_{j}}(q^{2};q^{2})_{r}}{\prod_{i=1}^{4mp+1}(q^{2};q^{2})_{d_{i}}}q^{\sum_{i}\xi_{i}d_{i}}, \tag{2.3}\]
where \(C_{i,j}^{K(p,-m)}\) is the \((4pm+1)\times(4pm+1)\) charge matrix for the quiver \(Q_{K(p,-m)}\). Note that \(\xi_{i}\) and \(\gamma_{i}\) are real parameters which can be fixed by comparison with the \(r=1,2,3,\ldots\) colored Jones polynomials [11, 20]. By this approach, we explicitly determined the \(\{\xi_{i}\}\), \(\{\gamma_{i}\}\) parameters in eqn. (2.3) for the \(K(2,-2)=\mathbf{8_{3}}\) knot:
\[J_{r}(8_{3},q) = \sum_{d_{1}+d_{2}+\ldots+d_{17}=r}(-1)^{d_{11}+d_{13}+d_{14}+d_{1 6}+d_{3}+d_{5}+d_{6}+d_{8}}(q^{2};q^{2})_{r}\frac{q^{\sum_{i,j}C_{i,j}d_{i}d_ {j}}}{\prod_{i=1}^{17}(q^{2};q^{2})_{d_{i}}}\] \[q^{(d_{11}-2d_{12}-d_{13}+d_{14}+2d_{15}+3d_{16}+4d_{17}-2d_{2}- d_{3}-4d_{4}-3d_{5}-d_{6}+d_{8}+2d_{9})}\]
where the quiver charge matrix \(C^{K(2,-2)}\) is
\[C^{8_{3}}\equiv\left(\begin{array}{ccccccccccccc}0&-1&-1&-1&-1&0&0&0&0&-1&- 1&-1&-1&0&0&0&0\\ -1&0&0&-1&-1&0&0&1&1&-2&-1&-2&-1&-1&0&0&1\\ -1&0&1&0&0&1&1&2&2&-2&-1&-2&-1&-1&0&0&1\\ -1&-1&0&-2&-2&-1&-1&1&1&-3&-2&-4&-3&-2&-1&0&1\\ -1&-1&0&-2&-1&0&0&2&2&-3&-2&-4&-3&-2&-1&0&1\\ 0&0&1&-1&0&1&1&2&2&-2&-1&-3&-2&-1&0&1&2\\ 0&1&2&1&2&2&3&3&3&-1&0&-1&0&0&1&1&2\\ -1&-2&-2&-3&-3&-2&-2&-1&-1&-2&-2&-3&-3&-2&-2&-1&-1\\ -1&-1&-1&-2&-2&-1&-1&0&0&-2&-1&-2&-2&-1&-1&0&0\\ -1&-2&-2&-4&-4&-3&-3&-1&-1&-3&-2&-4&-4&-3&-3&-1&-1\\ -1&-1&-1&-3&-3&-2&-2&0&0&-3&-2&-4&-3&-2&-2&0&0\\ 0&-1&-1&-2&-2&-1&-1&0&0&-2&-1&-3&-2&-1&-1&0&0\\ 0&0&0&-1&-1&0&0&1&1&-2&-1&-3&-2&-1&0&1&1\\ 0&0&0&0&1&1&1&1&-1&0&-1&0&0&1&1&1\\ 0&1&1&1&1&2&2&2&2&-1&0&-1&0&0&1&1&2\end{array}\right). \tag{2.4}\]
The polynomial invariants match the closed form of Ref. [11] for large values of \(r\) as well, confirming that the above \(8_{3}\) quiver data is indeed correct. Such an exercise for \(K(2,-2)\) suggested that we could propose and conjecture \(C^{K(p,-m)}\) for the double twist knot family. We discuss this in the following section.
## 3 Knot-Quiver Correspondence of double twist knots K(p,-m)
By performing an analysis similar to that of the previous section for other examples of the double twist knots \(K(p,-m)\), we observe that the quiver charge matrix has a definite pattern. Our explicit computations suggest the following proposition.
**Proposition:**
_The \(r\)-colored Jones polynomial for double twist knots can be expressed in the quiver representation:_
\[J_{r}(K(p,-m);q)=\sum_{d_{1},d_{2},\ldots,d_{4pm+1}}(-1)^{\Lambda_{(p,-m)}}\frac{(q^{2};q^{2})_{r}}{\prod_{i=1}^{4pm+1}(q^{2};q^{2})_{d_{i}}}\,q^{\sum_{i,j}C_{i,j}^{K(p,-m)}d_{i}d_{j}+\Xi_{(p,-m)}}, \tag{3.1}\]

_where the linear term is \(\Xi_{(p,-m)}\equiv\sum_{i}\xi_{i}d_{i}\) and the phase factor is \(\Lambda_{(p,-m)}\equiv\sum_{i}\gamma_{i}d_{i}\)._
The pattern of the charge matrix \(C^{K(p,-m)}\) for some examples leads to the following conjecture:
**Conjecture:** _The generic structure of quiver charge matrix will take the form_
\[C^{K(p,-m)}=\left[\begin{array}{c|cc|cc|c|cc}F_{0}&F_{1}&\tilde{F}_{1}&F_{2}&\tilde{F}_{2}&\cdots&F_{p}&\tilde{F}_{p}\\ \hline F_{1}^{\top}&U_{1}&R_{1}&\tilde{R}_{1}&R_{1}&\cdots&\tilde{R}_{1}&R_{1}\\ \tilde{F}_{1}^{\top}&R_{1}^{\top}&\tilde{U}_{1}&T_{1}&\tilde{T}_{1}&\cdots&T_{1}&\tilde{T}_{1}\\ \hline F_{2}^{\top}&\tilde{R}_{1}^{\top}&T_{1}^{\top}&U_{2}&R_{2}&\cdots&\tilde{R}_{2}&R_{2}\\ \tilde{F}_{2}^{\top}&R_{1}^{\top}&\tilde{T}_{1}^{\top}&R_{2}^{\top}&\tilde{U}_{2}&\cdots&T_{2}&\tilde{T}_{2}\\ \hline\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \hline F_{p}^{\top}&\tilde{R}_{1}^{\top}&T_{1}^{\top}&\tilde{R}_{2}^{\top}&T_{2}^{\top}&\cdots&U_{p}&R_{p}\\ \tilde{F}_{p}^{\top}&R_{1}^{\top}&\tilde{T}_{1}^{\top}&R_{2}^{\top}&\tilde{T}_{2}^{\top}&\cdots&R_{p}^{\top}&\tilde{U}_{p}\end{array}\right], \tag{3.2}\]

where \(F_{0}=[0]\), the border blocks \(F_{k},\tilde{F}_{k}\) are \(1\times 2m\) rows, and the diagonal blocks \(U_{i},\tilde{U}_{i}\) together with the off-diagonal blocks \(R_{i},\tilde{R}_{i},T_{i},\tilde{T}_{i}\) are \(2m\times 2m\) matrices. The generator set \(X_{1}=\{U_{1},\tilde{U}_{1},R_{1},\tilde{R}_{1},T_{1},\tilde{T}_{1}\}\), determined from \(K(m,-m)\), recursively fixes all the blocks of \(C^{K(p,-m)}\). Note that
our charge matrix conjecture assumes \(p\geq m\). Hence, the set \(X_{1}\) for \(m=2,3\) is not derivable from \(C^{K(p,-1)}\) and must be computed explicitly.
In the following subsections, we will give some examples to validate our proposition and conjecture. Specifically, we work out the \(X_{1}\) set matrices for double twist knots \(K(m,-m)\) for \(m=1,2,3\). This is sufficient to obtain the explicit quiver presentations for all the double twist knots \(K(p,-m)\) where \(m=1,2,3\).
### Knot-Quiver correspondence for twist knots \(K(p,-1)\)
\(K(p,-1)\) are known in the literature as 'twist knots' and form the simplest class of double twist knots. In this case, we fix the parameter \(m=1\) and vary the other parameter \(p\). As the simplest example, we consider \(p=1\), i.e. the \(K(1,-1)={\bf 4_{1}}\) knot. Using eqn. (3.1), we obtained the quiver form of \({\bf 4_{1}}\) as
\[J_{r}(4_{1};q) = \sum_{d_{1}+d_{2}+d_{3}+d_{4}+d_{5}=r}(-1)^{d_{3}+d_{4}}\frac{(q^ {2};q^{2})_{r}}{(q^{2};q^{2})_{d_{1}}(q^{2};q^{2})_{d_{2}}(q^{2};q^{2})_{d_{3}} (q^{2};q^{2})_{d_{4}}(q^{2};q^{2})_{d_{5}}}\] \[q^{-4d_{2}-2d_{1}d_{2}-2d_{2}^{2}-3d_{3}-2d_{1}d_{3}-4d_{2}d_{3} -d_{3}^{2}-d_{4}-2d_{2}d_{4}+d_{4}^{2}-2d_{2}d_{5}+2d_{4}d_{5}+2d_{5}^{2}+2r}.\]
Thus, the quiver charge matrix
\[C^{4_{1}}=\left(\begin{array}{ccccc}0&-1&-1&0&0\\ -1&-2&-2&-1&-1\\ -1&-2&-1&0&0\\ 0&-1&0&1&1\\ 0&-1&0&1&2\end{array}\right).\]
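As an illustration of how such a quiver form can be evaluated in practice, the sketch below sums the \({\bf 4_{1}}\) expression above over all \(d_{1}+\ldots+d_{5}=r\) for small \(r\) and prints the resulting Laurent polynomial in \(q\). Python/sympy is our own choice of tooling, and any comparison with other tables of colored Jones polynomials should keep possible differences of normalization, framing and variable conventions in mind.

```python
import itertools
import sympy as sp

q = sp.symbols('q')

def qpoch2(k):
    """(q^2; q^2)_k = prod_{i=1}^{k} (1 - q^{2i})."""
    return sp.prod([1 - q**(2 * i) for i in range(1, k + 1)])

def jones_41_quiver(r):
    """Evaluate the quiver form of J_r(4_1; q) quoted in the text:
    a sum over d_1 + ... + d_5 = r of a signed power of q times a
    ratio of (q^2; q^2) Pochhammer symbols."""
    total = sp.Integer(0)
    for d in itertools.product(range(r + 1), repeat=5):
        if sum(d) != r:
            continue
        d1, d2, d3, d4, d5 = d
        expo = (-4*d2 - 2*d1*d2 - 2*d2**2 - 3*d3 - 2*d1*d3 - 4*d2*d3
                - d3**2 - d4 - 2*d2*d4 + d4**2 - 2*d2*d5 + 2*d4*d5
                + 2*d5**2 + 2*r)
        sign = (-1)**(d3 + d4)
        weight = sp.cancel(qpoch2(r) / sp.prod([qpoch2(di) for di in d]))
        total += sign * weight * q**expo
    return sp.expand(total)

for r in (1, 2):
    print(f"r = {r}:", jones_41_quiver(r))
```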
Similarly, we obtained the charge matrices for \(p=2,3\), i.e.
\[C^{6_{1}}=\left(\begin{array}{ccccccccc}0&-1&-1&0&0&-1&-1&0&0\\ -1&-2&-2&-1&-1&-2&-2&-1&-1\\ -1&-2&-1&0&0&-1&-1&0&0\\ 0&-1&0&1&1&0&0&1&1\\ 0&-1&0&1&2&1&1&2&2\\ -1&-2&-1&0&1&0&0&1&1\\ -1&-2&-1&0&1&0&1&2&2\\ 0&-1&0&1&2&1&2&3&3\\ 0&-1&0&1&2&1&2&3&4\end{array}\right),\]
\[C^{8_{1}}=\left(\begin{array}{cccccccccccc}0&-1&-1&0&0&-1&-1&0&0&-1&-1&0&0\\ -1&-2&-2&-1&-1&-2&-2&-1&-1&-2&-2&-1&-1\\ -1&-2&-1&0&0&-1&-1&0&0&-1&-1&0&0\\ 0&-1&0&1&1&0&0&1&1&0&0&1&1\\ 0&-1&0&1&2&1&1&2&2&1&1&2&2\\ -1&-2&-1&0&1&0&0&1&1&0&0&1&1\\ -1&-2&-1&0&1&0&1&2&2&1&1&2&2\\ 0&-1&0&1&2&1&2&3&3&2&2&3&3\\ 0&-1&0&1&2&1&2&3&4&3&3&4&4\\ -1&-2&-1&0&1&0&1&2&3&2&2&3&3\\ -1&-2&-1&0&1&0&1&2&3&2&3&4&4\\ 0&-1&0&1&2&1&2&3&4&3&4&5&5\\ 0&-1&0&1&2&1&2&3&4&3&4&5&6\end{array}\right).\]
These three examples confirm our conjecture for \(m=1\). For clarity, the explicit quiver charge matrix for any twist knot \(K(p,-1)\) is
\[C^{K(p,-1)}=\left(\begin{array}{c|cc|cc|c|cc}F_{0}&F_{1}&\tilde{F}_{1}&F_{2}&\tilde{F}_{2}&\cdots&F_{p}&\tilde{F}_{p}\\ \hline F_{1}^{\top}&U_{1}&R_{1}&\tilde{R}_{1}&R_{1}&\cdots&\tilde{R}_{1}&R_{1}\\ \tilde{F}_{1}^{\top}&R_{1}^{\top}&\tilde{U}_{1}&T_{1}&\tilde{T}_{1}&\cdots&T_{1}&\tilde{T}_{1}\\ \hline F_{2}^{\top}&\tilde{R}_{1}^{\top}&T_{1}^{\top}&U_{2}&R_{2}&\cdots&\tilde{R}_{2}&R_{2}\\ \tilde{F}_{2}^{\top}&R_{1}^{\top}&\tilde{T}_{1}^{\top}&R_{2}^{\top}&\tilde{U}_{2}&\cdots&T_{2}&\tilde{T}_{2}\\ \hline\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ \hline F_{p}^{\top}&\tilde{R}_{1}^{\top}&T_{1}^{\top}&\tilde{R}_{2}^{\top}&T_{2}^{\top}&\cdots&U_{p}&R_{p}\\ \tilde{F}_{p}^{\top}&R_{1}^{\top}&\tilde{T}_{1}^{\top}&R_{2}^{\top}&\tilde{T}_{2}^{\top}&\cdots&R_{p}^{\top}&\tilde{U}_{p}\end{array}\right), \tag{3.4}\]
where the generators (\(X_{1}\)) are as follows:
\[U_{1}=\left[\begin{array}{cc}-2&-2\\ -2&-1\end{array}\right],\qquad\tilde{U}_{1}=\left[\begin{array}{cc}1&1\\ 1&2\end{array}\right],\qquad R_{1}=\left[\begin{array}{cc}-1&-1\\ 0&0\end{array}\right],\]
\[\tilde{R}_{1}=\left[\begin{array}{cc}-2&-2\\ -1&-1\end{array}\right],\qquad T_{1}=\left[\begin{array}{cc}0&0\\ 1&1\end{array}\right],\qquad\tilde{T}_{1}=\left[\begin{array}{cc}1&1\\ 2&2\end{array}\right],\]
and \(F_{k}=\left[-1,\,-1\right],\tilde{F}_{k}=\left[0,\,0\right],\) and \(F_{0}=\left[0\right].\) These results agree with the quiver charge matrix of twist knots obtained in Ref. [1].
### Knot-Quiver correspondence for \(K(p,-2)\)
We have already worked out \(K(2,-2)\equiv 8_{3}\) knot in section 2.1. Further, we explicitly worked out colored Jones for \(K(p=3,-2)\equiv 10_{3}\) and obtained the following \(C^{K(p=3,-2)}\) :
\[C^{K(3,-2)}=\big(\text{a }25\times 25\text{ charge matrix with the block structure of the conjectured form (3.2)}\big),\]
with the corresponding \(4\times 4\) generators \(U_{i},\tilde{U}_{i},R_{i},\tilde{R}_{i},T_{i},\tilde{T}_{i}\) read off from its blocks,
and \(F_{k}=\left[-1,\,-1,\,-1,\,-1\right]\), \(\tilde{F}_{k}=\left[0,\,0,\,0,\,0\right]\), and \(F_{0}=[0]\). We further worked out the cases \(p=4,5\) as well and verified that our conjecture (3.2) is obeyed. From these computations, we can deduce the general form of the linear term \(\Xi_{(p,-2)}\) and the phase factor \(\Lambda_{(p,-2)}\) in the proposition (3.1) for arbitrary \(p\) as:
\[\Xi_{(p,-2)} = \sum_{i=1}^{2p}\big(\tau_{1}(i)d_{4i+1}+\tau_{2}(i)d_{4i}+\tau_{3}(i)d_{4i-1}+\tau_{4}(i)d_{4i-2}\big), \tag{3.6}\]
where
\[\tau_{1}(i)=-2+2(-1)^{i}+i,\quad\tau_{2}(i)=-3+2(-1)^{i}+i,\quad\tau_{3}(i)=-2+i,\quad\tau_{4}(i)=-3+i,\]
and the phase factor is
\[\Lambda_{(p,-2)}=\sum_{i=2}^{4p+1}d_{\frac{1}{2}\left(4i-3-(-1)^{\lfloor i/2\rfloor}\right)}. \tag{3.7}\]
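As a consistency check of the closed forms (3.6) and (3.7), the short sketch below (Python, our own choice of tooling) reconstructs the linear coefficients \(\xi_{i}\) and the nodes entering the phase factor for \(p=2\) and compares them with the exponent and sign factor appearing in the explicit \(\mathbf{8_{3}}\) expression of section 2.1.

```python
p = 2  # K(2,-2) = 8_3

# Coefficients of the linear term (3.6).
def tau1(i): return -2 + 2 * (-1)**i + i
def tau2(i): return -3 + 2 * (-1)**i + i
def tau3(i): return -2 + i
def tau4(i): return -3 + i

xi = {}  # node index j -> coefficient xi_j
for i in range(1, 2 * p + 1):
    xi[4 * i + 1] = tau1(i)
    xi[4 * i]     = tau2(i)
    xi[4 * i - 1] = tau3(i)
    xi[4 * i - 2] = tau4(i)

# Non-zero coefficients read off from the explicit 8_3 exponent in section 2.1.
xi_83 = {2: -2, 3: -1, 4: -4, 5: -3, 6: -1, 8: 1, 9: 2,
         11: 1, 12: -2, 13: -1, 14: 1, 15: 2, 16: 3, 17: 4}
assert {j: c for j, c in xi.items() if c != 0} == xi_83

# Nodes appearing in the phase factor (3.7); matches (-1)^{d_3+d_5+d_6+d_8+d_11+d_13+d_14+d_16}.
phase_nodes = sorted((4 * i - 3 - (-1)**(i // 2)) // 2 for i in range(2, 4 * p + 2))
assert phase_nodes == [3, 5, 6, 8, 11, 13, 14, 16]

print("xi coefficients:", xi)
print("phase-factor nodes:", phase_nodes)
```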
Using the above data, we can write the colored Jones polynomial for any \(K(p,-2)\) in quiver presentation, with a quiver charge matrix consistent with the conjecture (3.2). So far, we have obtained the set of matrices \(X_{1}\) for \(m=1,2\). With the hope of deducing the pattern of the set \(X_{1}\) for any \(m\), we investigate double twist knots with \(m=3\) in the following subsection.
### Knot-Quiver correspondence for \(K(p,-3)\)
Following reverse MMR, we could write the quiver presentation for \(K(3,-3)\) and obtain the following quiver matrix \(C^{K(3,-3)}\):
\[C^{K(3,-3)}=\big(\text{a }37\times 37\text{ charge matrix with the block structure of the conjectured form (3.2)}\big).\]
The generators (\(X_{1}\)) of the quiver charge matrix can be read off by comparing with the conjectured form (3.2):
\[U_{1},\ \tilde{U}_{1},\ R_{1},\ \tilde{R}_{1},\ T_{1},\ \tilde{T}_{1}\qquad\text{(the }6\times 6\text{ blocks of }C^{K(3,-3)}\text{)}.\]
Ideally, it would be beneficial if we could find the set of matrices \(X_{1}\) for any \(m\), as well as closed forms for \(\Xi_{(p,-m)}\) and \(\Lambda_{(p,-m)}\). The size of the quiver matrix, \((4mp+1)\times(4mp+1)\), makes the computations difficult.
## 4 Conclusion and discussion
Double twist knots \(K(p,-m)\), which depend on two full-twist parameters \(p,m\), belong to the arborescent family (see Fig. 1). We attempted to find a quiver with charge matrix of the form (3.2) associated to each of the double twist knots using the reverse engineering of the Melvin-Morton-Rozansky expansion. We observed the Alexander polynomial to be of the form \(\Delta(X)=1-pmX\), which is closely analogous to the twist knots \(K(p,-1)\) studied in Ref. [1]. Comparing with the structure of the twist-knot quiver, we put forth a proposition (3.1) for the colored Jones polynomial in quiver presentation, and conjectured the structure (3.2) of the quiver charge matrix \(C^{K(p,-m)}\) for any double twist knot \(K(p,-m)\). We have explicitly worked out some double twist knots to validate our proposition and the conjecture for \(m=1,2,3\).
There is a pretzel family of knots whose \([r]\)-colored Jones and HOMFLY-PT invariants are known. It will be an interesting exercise to explicitly write a quiver presentation and deduce the charge matrix for the pretzel knots. From our double twist knot results, it may be straightforward to attempt a quiver presentation for knots whose Alexander polynomial takes the form \(\Delta(X)=1\pm(m_{1}m_{2}\dots m_{p})X\). We hope to address these problems in the future.
**Acknowledgements** The work of VKS is supported by "Tamkeen under the NYU Abu Dhabi Research Institute grant CG008 and ASPIRE Abu Dhabi under Project AARE20-336". VKS would like to thank P. Sulkowski, Q. Chen, Hisham Sati and Urs Schreiber for helpful discussion. PR would like to thank SERB (MATRICS) MTR/2019/000956 funding which enabled her visit to University of Warsaw and present these results. PR would also like to acknowledge the ICTP's Associate programme which helped her to complete this project during the visit as senior associate. SC and PR would like to thank all the speakers as well as the organisers of the Learning workshop on BPS states and 3-manifolds for discussions and interactions on 'knot-quiver' correspondence. BPM acknowledges the research grant for faculty under IoE Scheme (Number 6031) of Banaras Hindu University. AD would like to thank UGC for research fellowship. |
2302.10663 | RealFusion: 360° Reconstruction of Any Object from a Single Image | We consider the problem of reconstructing a full 360{\deg} photographic model
of an object from a single image of it. We do so by fitting a neural radiance
field to the image, but find this problem to be severely ill-posed. We thus
take an off-the-self conditional image generator based on diffusion and
engineer a prompt that encourages it to "dream up" novel views of the object.
Using an approach inspired by DreamFields and DreamFusion, we fuse the given
input view, the conditional prior, and other regularizers in a final,
consistent reconstruction. We demonstrate state-of-the-art reconstruction
results on benchmark images when compared to prior methods for monocular 3D
reconstruction of objects. Qualitatively, our reconstructions provide a
faithful match of the input view and a plausible extrapolation of its
appearance and 3D shape, including to the side of the object not visible in the
image. | Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi | 2023-02-21T13:25:35Z | http://arxiv.org/abs/2302.10663v2 | # RealFusion
###### Abstract
We consider the problem of reconstructing a full 360\({}^{\circ}\) photographic model of an object from a single image of it. We do so by fitting a neural radiance field to the image, but find this problem to be severely ill-posed. We thus take an off-the-self conditional image generator based on diffusion and engineer a prompt that encourages it to "dream up" novel views of the object. Using the recent DreamFusion method, we fuse the given input view, the conditional prior, and other regularizers in a final, consistent reconstruction. We demonstrate state-of-the-art reconstruction results on benchmark images when compared to prior methods for monocular 3D reconstruction of objects. Qualitatively, our reconstructions provide a faithful match of the input view and a plausible extrapolation of its appearance and 3D shape, including to the side of the object not visible in the image.
## 1 Introduction
We consider the problem of obtaining a 360\({}^{\circ}\) photographic reconstruction of _any_ object given a _single image_ of it. The challenge is that a single image _does not_ contain sufficient information for 3D reconstruction. Without access to multiple views, an image only provides weak evidence about the 3D shape of the object, and only for one side of it. Even so, there is proof that this task _can_ be solved: any skilled 3D artist can take a picture of almost any object and, given sufficient time and effort, create a plausible 3D model of it. The artist can do so by tapping into her vast knowledge of the natural world and of the objects it contains, making up for the information missing in the image.
To solve this problem algorithmically, one must then marry visual geometry with a powerful statistical model of the 3D world. The recent explosion of 2D image generators like DALL-E [36], Imagen [42], and Stable Diffusion [40] suggests that such models might not be far behind. By using diffusion, these methods can solve highly-ambiguous generation tasks, obtaining plausible 2D images from textual descriptions, semantic maps, partially-complete images, or simply unconditionally from random noise. Clearly, these models possess high-quality priors--if not of the 3D world, then at least of the way it is represented in 2D images. Hence, in theory, a 3D diffusion model trained on vast quantities of 3D data should be capable of producing 3D reconstructions, either unconditionally or conditioned on a 2D image. However, training such a model is infeasible because, while one can access billions of 2D images [43], the same cannot be said about 3D data.
The alternative to training a 3D diffusion model is to extract 3D information from an existing 2D model. A 2D image generator can in fact be used to sample or validate multiple views of a given object; these multiple views can then be used to perform 3D reconstruction. With early GAN-based generators, authors showed some success for simple data like faces and synthetic objects [3, 9, 12, 30, 31, 54]. With the availability of large-scale models like CLIP [34] and, more recently, diffusion models, increasingly complex results have been obtained. The most recent example is DreamFusion [33], which generates high-quality 3D models from textual descriptions alone.
Despite these advances, the problem of single-image 3D reconstruction remains largely unsolved. In fact, these recent methods do not solve this problem. They either sample random objects, or, like in the case of DreamFusion, start from a textual description.
A problem in extending generators to reconstruction is _coverage_ (sometimes known as mode collapse). For example, high-quality face generators based on GANs are usually difficult to invert: they may be able to generate _many_ different high-quality images, and yet are usually unable to generate _most_ images [1]. Conditioning on an image provides a much more detailed and nuanced specification of the object than, say, a textual description. It is not obvious if the generator model would be able to satisfy all such constraints.
In this paper, we study this problem in the context of diffusion models. We express the object's 3D geometry and appearance by means of a neural radiance field. Then, we train the radiance field to reconstruct the given input image by minimizing the usual rendering loss. At the same time, we sample random other views of the object, and constrain them with the diffusion prior, using a technique similar to DreamFusion.
We find that, out of the box, this idea does not work well. Instead, we need to make a number of improvements and modifications. The most important change is to adequately condition the diffusion model. The idea is to configure the prior to "dream up" or sample images that may _plausibly constitute other views of the given object_. We do so by engineering the diffusion prompt from random augmentations of the given image. Only in this manner does the diffusion model provide sufficiently strong constraints to allow meaningful 3D reconstruction.
In addition to setting the prompt correctly, we also add some regularizers: shading the underlying geometry and randomly dropping out texture (also similar to DreamFusion), smoothing the normals of the surface, and fitting the model in a coarse-to-fine fashion, capturing first the overall structure of the object and only then the fine-grained details. We also focus on efficiency and base our model on Instant-NGP [29]. In this manner, we achieve reconstructions in the span of hours instead of days if we were to adopt traditional MLP-based NeRF models.
We assess our approach by using random images captured in the wild as well as existing benchmark datasets. Note that we do _not_ train a fully-fledged 2D-to-3D model and we are _not_ limited to specific object categories; rather, we perform reconstruction on an image-by-image basis using a pretrained 2D generator as a prior. Nonetheless, we can surpass quantitatively and qualitatively previous single-image reconstructors, including Shelf-Supervised Mesh Prediction [58], which uses supervision tailored specifically for 3D reconstruction.
More impressively, and more importantly, we obtain plausible 3D reconstructions that are a good match for the provided input image (Fig. 1). Our reconstructions are not perfect, as the diffusion prior clearly does its best to explain the available image evidence but cannot always match all the details. Even so, we believe that our results convincingly demonstrate the viability of this approach and trace a path for future improvements.
To summarize, we make the following **contributions**: (1) We propose RealFusion, a method that can extract from a single image of an object a 360\({}^{\circ}\) photographic 3D reconstruction without assumptions on the type of object imaged or 3D supervision of any kind; (2) We do so by leveraging an existing 2D diffusion image generator via a new single-image variant of textual inversion; (3) We also introduce new regularizers and provide an efficient implementation using InstantNGP; (4) We demonstrate state-of-the-art reconstruction results on a number of in-the-wild images and images from existing datasets when compared to alternative approaches.
## 2 Related work
**Image-based reconstruction of appearance and geometry.** Much of the early work on 3D reconstruction is based on principles of multi-view geometry [11]. These classic methods use photometry only to match image features and then discard it and only estimate 3D shape.
The problem of reconstructing photometry and geometry together has been dramatically revitalized by the introduction of neural radiance fields (RFs). NeRF [26] in particular noticed that a coordinate MLP provides a compact and yet expressive representation of 3D fields, and can be used to model RFs with great effectiveness. Many variants of NeRF-like models have since appeared. For instance, some [24, 48, 50] use sign distance functions (SDFs) to recover cleaner geometry. These approaches assume that dozens if not hundreds of views of each scene are available for reconstruction. Here, we use them for single-image reconstruction, using a diffusion model to "dream up" the missing views.
Few-view reconstruction.Many authors have attempted to improve the statistical efficiency of NeRF-like models, by learning or incorporating various kinds of priors. Quite related to our work, NeRF-on-a-Diet [17] reduces the number of images required to learn a NeRF by generating random views and measuring their "semantic compatibility" with the available views via CLIP embeddings [35], but they still require several input views.
While CLIP is a general-purpose model learned on 2D data, other authors have learned deep networks specifically for the goal of inferring NeRFs from a small number of views. Examples include IBRNet [51], NeRF-WCE [13], PixelNeRF [60], NeRFormer [38], and ViewFormer [22]. These models still generally require more than one input view at test time, require multi-view data for training, and are often optimized for specific object categories.
Single-view reconstruction.Some authors have attempted to recover full radiance fields from single images, but this generally requires multi-view data for training, as well as learning models that are specific to a specific object category. 3D-R2N2 [5], Pix2Vox [55, 55], and LegoFormer [57] learn to reconstruct volumetric representation of simple objects, mainly from synthetic data like ShapeNet [4]. More recently, CodeNeRF [19] predicts a full radiance field, including reconstructing the photometry of the objects. AutoRF [28] learns a similar autoencoder specifically for cars.
Extracting 3D models from 2D generators.Several authors have proposed to extract 3D models from 2D image generators, originally using GANs [3, 9, 12, 30, 31, 54].
More related to our work, CLIP-Mesh [20] and Dream Fields [16] do so by using the CLIP embedding and can condition 3D generation on text. Our model is built on the recent Dream Fusion approach [33], which builds on a similar idea using a diffusion model as prior.
However, these models have been used as either pure generators or generators conditioned on vague cues such as class identity or text. Here, we build on similar ideas, but we apply them to the case of single-view reconstruction.
Recently, the authors of [53] have proposed to directly generate multiple 2D views of an object, which can then be reconstructed in 3D using a NeRF-like model. This is also reminiscent of our approach, but their model requires multi-view data for training, is only tested on synthetic data, and requires to explicitly sample multiple views for reconstruction (in our case they remain implicit).
Diffusion Models.Diffusion denoising probabilistic models are a class of generative models based on iteratively reversing a Markovian noising process. In vision, early works formulated the problem as learning a variational lower bound [14], or framed it as optimizing a score-based generative model [45, 46] or as the discretization of a continuous stochastic process [47]. Recent improvements includes the use of faster and deterministic sampling [14, 25, 52], class-conditional models [7, 46], text-conditional models [32], and modeling in latent space [41].
## 3 Method
We provide an overview and notation for the background material first (Sec. 3.1), and then discuss our RealFusion method (Sec. 3.2).
Figure 2: **Method diagram.** Our method optimizes a neural radiance field using two objectives simultaneously: a reconstruction objective and a prior objective. The reconstruction objective ensures that the radiance field resembles the input image from a specific, fixed view. The prior objective uses a large pre-trained diffusion model to ensure that the radiance field looks like the given object from randomly sampled novel viewpoints. The key to making this process work well is to condition the diffusion model on a prompt with a custom token \(\langle\mathbf{e}\rangle\), which is generated prior to reconstruction using single-image textual inversion. This diagram does not display our coarse-to-fine training strategy or regularization terms, both of which improve qualitative results.
### Radiance fields and DreamFusion
**Radiance fields.** A _radiance field_ (RF) is a pair of functions \((\sigma(\mathbf{x}),c(\mathbf{x}))\) mapping a 3D point \(\mathbf{x}\in\mathbb{R}^{3}\) to an opacity value \(\sigma(\mathbf{x})\in\mathbb{R}_{+}\) and a color value \(c(\mathbf{x})\in\mathbb{R}^{3}\). The RF is called _neural_ when these two functions are implemented by a neural network.
The RF represents the shape and appearance of an object. In order to generate an image of it, one _renders_ the RF using the emission-absorption model. Let \(I\in\mathbb{R}^{3\times H\times W}\) be an image, so that \(I(u)\in\mathbb{R}^{3}\) is the color of pixel \(u\). In order to compute \(I(u)\), one casts a ray \(r_{u}\) from the camera center through the pixel, interpreted as a point on the 3D image plane (this implicitly accounts for the camera viewpoint \(\pi\in SE(3)\)). Then, one takes a certain number of samples \((\mathbf{x}_{i}\in r_{u})_{i\in\mathcal{N}}\), for indices \(\mathcal{N}=\{1,\dots,N\}\) taken with constant spacing \(\Delta\). The color is obtained as:
\[I(u)=\mathcal{R}(u;\sigma,c)=\sum_{i\in\mathcal{N}}(T_{i}-T_{i+1})c(\mathbf{x}_{i}), \tag{1}\]
where \(T_{i}=\exp(-\Delta\sum_{j=0}^{i-1}\sigma(\mathbf{x}_{j}))\) is the probability that a photon is transmitted from point \(\mathbf{x}_{i}\) back to the camera sensor without being absorbed by the material.
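As a concrete illustration of the quadrature in Eq. (1), here is a minimal NumPy sketch of the per-ray computation; the opacity and color samples below are made-up values and NumPy is our own choice of tooling.

```python
import numpy as np

def render_ray(sigmas, colors, delta):
    """Emission-absorption quadrature of Eq. (1) along a single ray.

    sigmas: (N,) opacity values at the sampled points x_i
    colors: (N, 3) color values at the sampled points
    delta:  constant spacing between consecutive samples
    """
    # Transmittance up to each sample: T[k] = exp(-delta * sum_{j<k} sigma_j), with T[0] = 1.
    accum = np.concatenate([[0.0], np.cumsum(sigmas * delta)])
    T = np.exp(-accum)                 # length N + 1
    weights = T[:-1] - T[1:]           # probability that the photon stops in each interval
    return (weights[:, None] * colors).sum(axis=0)

# Toy example with made-up samples along one ray.
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.0, 5.0, size=64)
colors = rng.uniform(0.0, 1.0, size=(64, 3))
print(render_ray(sigmas, colors, delta=0.02))  # an RGB value; leftover transmittance acts as a black background
```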
Importantly, the rendering function \(R(u;\sigma,c)\) is differentiable, which allows training the model by means of a standard optimizer. Specifically, the RF is fitted to a dataset \(\mathcal{D}=\{(I,\pi)\}\) of images \(I\) with known camera parameters by minimizing the \(L^{2}\) image reconstruction error
\[\mathcal{L}_{\text{rec}}(\sigma,c;\mathcal{D})=\frac{1}{|\mathcal{D}|}\sum_{( I,\pi)\in\mathcal{D}}\|I-R(\cdot;\sigma,c,\pi)\|^{2}. \tag{2}\]
In order to obtain good quality results, one typically requires a dataset of dozens or hundreds of views.
Here, we consider the case in which we are given _exactly one_ input image \(I_{0}\) corresponding to some (unknown) camera \(\pi_{0}\). In this case, we can also assume _any_ standard viewpoint \(\pi_{0}\) for that single camera. Optimizing Eq. (2) with a single training image leads to severe over-fitting: it is straightforward to find a pair \((\sigma,c)\) that has zero loss and yet does not capture any sensible 3D model of the object. Below we will leverage a pre-trained 2D image prior to (implicitly) dream up novel views of the object and provide the missing information for 3D reconstruction.
**Diffusion models.** A _diffusion model_ draws a sample from a probability distribution \(p(I)\) by inverting a process that gradually adds noise to the image \(I\). The diffusion process is associated with a variance schedule \(\{\beta_{t}\in(0,1)\}_{t=1}^{T}\), which defines how much noise is added at each time step. The noisy version of sample \(I\) at time \(t\) can then be written \(I_{t}=\sqrt{\bar{\alpha}_{t}}I+\sqrt{1-\bar{\alpha}_{t}}\epsilon\) where \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is a sample from a Gaussian distribution (with the same dimensionality as \(I\)), \(\alpha_{t}=1-\beta_{t}\), and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). One then learns a denoising neural network \(\hat{\epsilon}=\Phi(I_{t};t)\) that takes as input the noisy image \(I_{t}\) and the noise level \(t\) and tries to predict the noise component \(\epsilon\).
In order to draw a sample from the distribution \(p(I)\), one starts by drawing a sample \(I_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Then, one progressively denoises the image by iterated application of \(\Phi\) according to a specified sampling schedule [15, 25, 44], which terminates with \(I_{0}\) sampled from \(p(I)\).
Modern diffusion models are trained on large collections \(\mathcal{D}^{\prime}=\{I\}\) of images by minimizing the loss
\[\mathcal{L}_{\text{diff}}(\Phi;\mathcal{D}^{\prime})=\tfrac{1}{|\mathcal{D}^{\prime}|}\sum_{I\in\mathcal{D}^{\prime}}||\Phi(\sqrt{\bar{\alpha}_{t}}I+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t)-\epsilon||^{2}. \tag{3}\]
This model can be easily extended to draw samples from a distribution \(p(\mathbf{x}|\mathbf{e})\) conditioned on a _prompt_\(\mathbf{e}\). Conditioning on the prompt is obtained by adding \(\mathbf{e}\) as an additional input of the network \(\Phi\), and the strength of conditioning can be controlled via classifier-free guidance [7].
**DreamFusion and Score Distillation Sampling (SDS).** Given a 2D diffusion model \(p(I|\mathbf{e})\) and a prompt \(\mathbf{e}\), DreamFusion extracts from it a 3D rendition of the corresponding concept, represented by a RF \((\sigma,c)\). It does so by randomly sampling a camera parameter \(\pi\), rendering a corresponding view \(I_{\pi}\), assessing the likelihood of the view based on the
Figure 3: **Examples demonstrating the level of detail of information captured by the optimized embedding \(\langle\mathbf{e}\rangle\).** Rows 1-2 show input images and masks. The images are used to optimize \(\langle\mathbf{e}\rangle\) via our single-image textual inversion process. Rows 3-5 show examples of 2D images generated using \(\langle\mathbf{e}\rangle\) in new prompts, which we hope demonstrate the type of information encoded in \(\langle\mathbf{e}\rangle\). Rows 6-7 show RealFusionβs output, optimized using the prompt βAn image of a \(\langle\mathbf{e}\rangle\)β.
model \(p(I_{\pi}|\mathbf{e})\), and updating the RF to increase the likelihood of the generated view based on the model.
In practice, DreamFusion uses the denoiser network as a frozen critic and takes a gradient step
\[\nabla_{(\sigma,c)}\mathcal{L}_{\text{SDS}}(\sigma,c;\pi,\mathbf{e},t)=\\ E_{t,\epsilon}\Big{[}w(t)(\Phi(\alpha_{t}I+\sigma_{t}\epsilon;t, \mathbf{e})-\epsilon)\cdot\nabla_{(\sigma,c)}I\Big{]}, \tag{4}\]
where \(I=R(\cdot;\sigma,c,\pi)\) is the image rendered from a given viewpoint \(\pi\), and \(\mathbf{e}\) is the prompt. This process is called _Score Distillation Sampling_ (SDS).
Note that Eq. (4) differs from simply optimizing the standard diffusion model objective because it does not include the Jacobian term for \(\Phi\). In practice, removing this term both improves generation quality and reduces computational and memory requirements.
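In code, the missing Jacobian term corresponds to evaluating the denoiser under a stop-gradient and back-propagating only through the rendered image. The sketch below is an illustrative reimplementation under those assumptions; `render` and `denoiser` are assumed callables rather than the paper's actual interfaces, and `w(t)` is the weighting function of Eq. (4).

```python
import torch
import torch.nn.functional as F

def sds_step(params, optimizer, render, denoiser, camera, prompt_emb,
             alpha_bars, w=lambda t: 1.0):
    t = torch.randint(20, 980, (1,)).item()            # random noise level
    image = render(params, camera)                     # I = R(.; sigma, c, pi)
    eps = torch.randn_like(image)
    a_bar = alpha_bars[t]
    noisy = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * eps
    with torch.no_grad():                              # frozen critic: no Jacobian of Phi
        eps_hat = denoiser(noisy, t, prompt_emb)
    # Gradient of 0.5 * ||image - target||^2 w.r.t. params equals
    # w(t) * (eps_hat - eps) * d(image)/d(params), matching Eq. (4).
    target = (image - w(t) * (eps_hat - eps)).detach()
    loss = 0.5 * F.mse_loss(image, target, reduction="sum")
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```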
One final aspect of DreamFusion is essential for understanding our contribution in the following section: DreamFusion finds that it is necessary to use classifier-free guidance [7] with a very high guidance weight of 100, much larger than one would use for image sampling, in order to obtain good 3D shapes. As a result, the generations tend to have limited diversity; they produce only the most likely objects for a given prompt, which is incompatible with our goal of reconstructing any given object.
### RealFusion
Our goal is to reconstruct a 3D model of the object contained in a single image \(I_{0}\), utilizing the prior captured in the diffusion model \(\Phi\) to make up for the missing information. We will achieve this by optimizing a radiance field using two simultaneous objectives: (1) a reconstruction objective Eq. (2) from a fixed viewpoint, and (2) an SDS-based prior objective Eq. (4) on novel views randomly sampled at each iteration. Figure 2 provides a diagram of the entire system.
**Single-image textual inversion as a substitute for alternative views.** The most important component of our method is the use of single-image textual inversion as a substitute for alternative views. Ideally, we would like to condition our reconstruction process on multi-view images of the object in \(I_{0}\), _i.e_. on samples from \(p(I|I_{0})\). Since these images are not available, we instead synthesize a text prompt \(\mathbf{e}^{(I_{0})}\) specifically for our image \(I_{0}\) as a proxy for this multi-view information.
Our idea, then, is to engineer a prompt \(\mathbf{e}^{(I_{0})}\) to provide a useful approximation of \(p(I|I_{0})\). We do so by generating random augmentations \(g(I_{0}),\,g\in G\) of the input image, which serve as pseudo-alternative-views. We use these augmentations as a mini-dataset \(\mathcal{D}^{\prime}=\{g(I_{0})\}_{g\in G}\) and optimize the diffusion loss Eq. (3) \(\mathcal{L}_{\text{diff}}(\Phi(\cdot;\mathbf{e}^{(I_{0})}))\) with respect to the prompt \(\mathbf{e}^{(I_{0})}\), while freezing all other text embeddings and model parameters.
In practice, our prompt is derived automatically from templates like "an image of a \(\langle\mathbf{\mathrm{e}}\rangle\)", where "\(\langle\mathbf{\mathrm{e}}\rangle\)" (\(=e^{(I_{0})}\)) is a new token introduced to the vocabulary of the text encoder of our diffusion model (see Appendix A for details). Our optimization procedure mirrors and generalizes the recently-proposed textual-inversion method of [10]. Differently from [10], we work in the single-image setting and utilize image augmentations for training rather than multiple views.
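A minimal sketch of this single-image textual inversion loop, under the simplifying assumptions that the diffusion loss of Eq. (3) is available as a callable closing over the frozen diffusion model, and that the text-encoder embedding table is optimized directly with all rows except the one for \(\langle\mathbf{e}\rangle\) masked out:

```python
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(512, scale=(0.7, 1.0)),   # pseudo-alternative views g(I_0)
    T.RandomHorizontalFlip(),
])

def invert_token(image, embedding_table, token_index, diffusion_loss,
                 steps=3000, lr=5e-4):
    """Optimize only row `token_index` of a (vocab, dim) embedding table."""
    emb = embedding_table.clone().requires_grad_(True)
    opt = torch.optim.Adam([emb], lr=lr)
    row_mask = torch.zeros_like(emb)
    row_mask[token_index] = 1.0                    # every other row stays frozen
    for _ in range(steps):
        view = augment(image)
        loss = diffusion_loss(view, emb)           # Eq. (3) on one augmentation
        opt.zero_grad(); loss.backward()
        emb.grad.mul_(row_mask)
        opt.step()
    return emb.detach()
```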
To help convey the intuition behind \(\langle\mathbf{\mathrm{e}}\rangle\), consider an attempt at reconstructing an image of a fish using the generic text prompt "An image of a fish" with losses Eqs. (3) and (4). In our experience, this often produces a reconstruction which looks like the input fish from the input viewpoint, but looks like some _different, more-generic_ fish from the backside. By contrast, using the prompt "An image of a \(\langle\mathbf{\mathrm{e}}\rangle\)", the reconstruction resembles the input fish from all angles. An example of exactly this case is shown in Figure 7.
Finally, Figure 3 demonstrates the amount of detail captured in the embedding \(\langle\mathbf{\mathrm{e}}\rangle\).
**Coarse-to-fine training.** In order to describe our coarse-to-fine training methodology, it is necessary to first briefly introduce our underlying RF model, an InstantNGP [29]. InstantNGP is a grid-based model which stores features at the vertices of a set of feature grids \(\{G_{i}\}_{i=1}^{L}\) at multiple resolutions. The resolution of these grids is chosen to be a geometric progression between the coarsest and finest resolutions, and feature grids are trained simultaneously.
We choose an InstantNGP over a conventional MLP-based NeRF due to its computational efficiency and training speed. However, the optimization procedure occasionally produces small irregularities on the surface of the object. We find that training in a coarse-to-fine manner helps to alleviate these issues: for the first half of training we only optimize the lower-resolution feature grids \(\{G_{i}\}_{i=1}^{L/2}\), and then in the second half of training we optimize all feature grids \(\{G_{i}\}_{i=1}^{L}\). Using this strategy, we obtain the benefits of both efficient training and high-quality results.
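The schedule can be implemented by simply toggling which feature tables receive gradients. The following sketch uses flat (hash-grid-style) feature tables as a stand-in for the InstantNGP grids; the sizes are illustrative and not the paper's configuration.

```python
import torch

class MultiResFeatures(torch.nn.Module):
    def __init__(self, levels: int = 16, feat_dim: int = 2, table_size: int = 2 ** 19):
        super().__init__()
        # One flat feature table per resolution level.
        self.tables = torch.nn.ParameterList(
            [torch.nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
             for _ in range(levels)])

    def set_active_levels(self, step: int, total_steps: int) -> None:
        """First half of training: only the coarser half of the levels train."""
        active = len(self.tables) if step >= total_steps // 2 else len(self.tables) // 2
        for i, table in enumerate(self.tables):
            table.requires_grad_(i < active)
```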
**Normal vector regularization.** Next, we introduce a new regularization term to encourage our geometry to have smooth normals. The introduction of this term is motivated by the observation that our RF model occasionally generated noisy-looking surfaces with low-level artifacts. To address these artifacts, we encourage our RF to have smoothly varying normal vectors. Notably, we perform this regularization in _2D_ rather than in 3D.
At each iteration, in addition to computing RGB and opacity values, we also compute normals for each point along the ray and aggregate these via the raymarching equation to obtain normals \(N\in\mathbb{R}^{H\times W\times 3}\).1 Our loss is:

Footnote 1: Normals may be computed either by taking the gradient of the density field or by using finite differences. We found that using finite differences worked well in practice.
\[\mathcal{L}_{\mathrm{normals}}=\|N-\mathrm{stopgrad}(\mathsf{ blur}(N,k))\|^{2} \tag{5}\]
where stopgrad is a stop-gradient operation and \(\mathsf{blur}(\cdot,k)\) is a Gaussian blur with kernel size \(k\) (we use \(k=9\)).
Although it may be more common to regularize normals in 3D, we found that operating in 2D reduced the variance of the regularization term and led to superior results.
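A minimal sketch of Eq. (5); `detach` plays the role of the stop-gradient, and the Gaussian blur is applied to the rendered 2D normal map:

```python
import torch
import torchvision.transforms.functional as TF

def normal_smoothness_loss(normals: torch.Tensor, kernel_size: int = 9) -> torch.Tensor:
    """normals: (3, H, W) map aggregated along rays via raymarching."""
    blurred = TF.gaussian_blur(normals.unsqueeze(0), kernel_size).squeeze(0)
    return ((normals - blurred.detach()) ** 2).mean()
```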
**Mask loss.** In addition to the input image, our model also utilizes a mask of the object that one wishes to reconstruct. In practice, we use an off-the-shelf image matting model to obtain this mask for all images.
We incorporate this mask by adding an \(L^{2}\) loss term on the difference between the opacities \(O\in\mathbb{R}^{H\times W}\) rendered from the fixed reference viewpoint \(\pi_{0}\) and the object mask \(M\): \(\mathcal{L}_{\text{rec,mask}}=\|O-M\|^{2}\). Our final objective then consists of four terms:
\[\nabla_{\sigma,c}\mathcal{L}=\nabla\mathcal{L}_{\text{SDS}}+ \lambda_{\text{normals}}\cdot\nabla\mathcal{L}_{\text{normals}}\\ +\lambda_{\text{image}}\cdot\nabla\mathcal{L}_{\text{image}}+ \lambda_{\text{mask}}\cdot\nabla\mathcal{L}_{\text{mask}} \tag{6}\]
where the top line in the equation above corresponds to our prior objective and the bottom line corresponds to our reconstruction objective.
## 4 Experiments
### Implementation details
Regarding hyper-parameters, we use _essentially the same set of hyper-parameters for all experiments_; there is no per-scene hyper-parameter optimization. For our diffusion model prior, we employ the open-source _Stable Diffusion_ model [41] trained on the LAION [43] dataset of text-image pairs. For our InstantNGP [29] model, we use a model with 16 resolution levels, a feature dimension of 2, and a maximum resolution of 2048, trained in a coarse-to-fine manner as explained above.

Figure 4: **Qualitative results. RealFusion reconstructions from a single input view. Each pair of columns shows the textured object and the underlying 3D shape, as a shaded surface. Different pairs of columns show different viewpoints.**
The camera for reconstruction is placed looking at the origin on a sphere of radius \(1.8\), at an angle of 15 degrees above the plane. At each optimization step, we first render from the reconstruction camera and compute our reconstruction losses \(\mathcal{L}_{\mathrm{rec}}\) and \(\mathcal{L}_{\mathrm{rec,mask}}\). We then render from a randomly sampled camera to obtain a novel view, and use this view for \(\mathcal{L}_{\mathrm{SDS}}\) and \(\mathcal{L}_{\mathrm{normals}}\). We use \(\lambda_{\text{image}}=5.0\), \(\lambda_{\text{mask}}=0.5\), and \(\lambda_{\text{normal}}=0.5\).
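Putting the pieces together, one optimization step might look like the sketch below, with the weights quoted above; `render`, `sds_loss`, `normal_loss`, and `sample_random_camera` are assumed helpers (for instance the earlier snippets), not the paper's actual code.

```python
import torch

LAMBDA_IMAGE, LAMBDA_MASK, LAMBDA_NORMALS = 5.0, 0.5, 0.5

def training_step(params, opt, render, sds_loss, normal_loss, sample_random_camera,
                  ref_camera, ref_image, ref_mask, prompt_emb):
    # Reconstruction objective from the fixed reference view.
    rgb, opacity, _ = render(params, ref_camera)
    loss = LAMBDA_IMAGE * ((rgb - ref_image) ** 2).mean()
    loss = loss + LAMBDA_MASK * ((opacity - ref_mask) ** 2).mean()
    # Prior objective from a randomly sampled novel view.
    rgb_novel, _, normals_novel = render(params, sample_random_camera())
    loss = loss + sds_loss(rgb_novel, prompt_emb)
    loss = loss + LAMBDA_NORMALS * normal_loss(normals_novel)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss.detach())
```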
Regarding camera sampling, lighting, and shading, we keep nearly all parameters the same as [33]. This includes the stochastic use of diffuse and textureless shading throughout the course of optimization, after an initial warm-up period of albedo-only shading. Complete details regarding this and other aspects of our training setup are provided in the supplementary material.
### Quantitative results
There are only a few methods that attempt to reconstruct arbitrary objects in 3D. The most recent and best-performing of these is Shelf-Supervised Mesh Prediction [58], against which we compare here. They provide 50 pretrained category-level models for 50 different categories in OpenImages [23]. Since we aim to compute metrics using 3D or multi-view ground truth, we evaluate on seven categories in the CO3D dataset [39] with corresponding OpenImages categories. For each of these seven categories, we select three images at random and run both RealFusion and Shelf-Supervised to obtain reconstructions.
We first test the quality of the recovered 3D shape in Fig. 5. Shelf-Supervised directly predicts a mesh. We extract one from our predicted radiance fields using marching cubes. CO3D comes with a sparse point-cloud reconstruction of the objects, obtained using multi-view geometry. For evaluation, we sample points from the reconstructed meshes and align them optimally with the ground truth point cloud by first estimating a scaling factor and then using Iterated Closest Point (ICP). Finally, we compute the F-score with threshold \(0.05\) to measure the distance between the predicted and ground truth point clouds. Results are shown in Tab. 1.
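For reference, a minimal sketch of the F-score computation between two already-aligned point sets (the scale estimation and ICP alignment steps are omitted here):

```python
import torch

def f_score(pred: torch.Tensor, gt: torch.Tensor, tau: float = 0.05) -> float:
    """pred: (N, 3), gt: (M, 3); both assumed aligned to a common frame."""
    d = torch.cdist(pred, gt)                               # (N, M) pairwise distances
    precision = (d.min(dim=1).values < tau).float().mean()  # predicted points near gt
    recall = (d.min(dim=0).values < tau).float().mean()     # gt points near prediction
    return float(2 * precision * recall / (precision + recall + 1e-8))
```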
In order to evaluate the quality of the reproduced appearance, we also compare novel-view renderings from our and their method (Tab. 1). Ideally, these renderings should produce views that are visually close to the real views.
| Category | F-score [59] | F-score (Ours) | CLIP-similarity [59] | CLIP-similarity (Ours) |
| --- | --- | --- | --- | --- |
| Backpack | 7.58 | **12.22** | 0.72 | **0.74** |
| Chair | 8.26 | **10.23** | 0.65 | **0.76** |
| Motorcycle | 8.66 | **8.72** | 0.69 | **0.70** |
| Orange | 6.27 | **10.16** | 0.71 | **0.74** |
| Skateboard | **7.74** | 5.89 | **0.74** | **0.74** |
| Teddybear | **12.89** | 10.08 | 0.73 | **0.82** |
| Vase | 6.30 | **9.72** | 0.69 | **0.71** |
| Mean | 8.24 | **9.58** | 0.70 | **0.74** |

Table 1: **Quantitative comparison.** We compare our method with Shelf-Supervised [59] on seven object categories. The F-score and CLIP-similarity metrics are designed to measure the quality of reconstruction shape and appearance, respectively. For both metrics, higher is better. Metrics are averaged over three images per category. Our method outperforms [59] in aggregate, despite the fact that [59] uses a _different category-specific model_ for each category.
Figure 5: **Qualitative comparison with prior work.** We show the results of our method and the category-level method of [59] on real-world images from the CO3D dataset [38]. Each pair of rows show two novel views produced by [59] and our method. For [59], we use category-specific models for each CO3D category (in this case, motorcycles, cups, and backpacks). Despite not requiring any category-specific information, our method is able to reconstruct objects at a higher level of detail than [59].
Figure 6: **A demonstration of multi-modal image reconstruction.** Above, we see our methodβs ability to generate a diverse set of object reconstructions given the same input image. In particular, the method produces different textures on the backsides of the generated objects, despite all objects matching the input image from the reference view.
In order to test this hypothesis, we check whether the generated views are close or not to the other views given in CO3D. We then report the CLIP embedding similarity of the generated images with respect to the closest CO3D view available (_i.e_. the view with maximum similarity).
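A minimal sketch of this metric, assuming `embed` is a callable mapping a batch of images to L2-normalized CLIP features (the specific CLIP backbone and preprocessing are not specified here):

```python
import torch

def clip_similarity(renders: torch.Tensor, gt_views: torch.Tensor, embed) -> float:
    """renders: (R, 3, H, W), gt_views: (G, 3, H, W)."""
    sims = embed(renders) @ embed(gt_views).T    # (R, G) cosine similarities
    return float(sims.max(dim=1).values.mean())  # closest ground-truth view per render
```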
### Qualitative results
Figure 4 shows additional qualitative results from multiple viewpoints. Having a single image of an object means that several 3D reconstructions are possible. Figure 6 explores the ability of RealFusion to sample the space of possible solutions by repeating the reconstruction several times, starting from the same input image. There is little variance in the reconstructions of the front of the object, but quite a large variance for its back, as expected.
Figure 11 shows two typical failure modes of RealFusion: in some cases the model fails to converge, and in others it copies the front view to the back of the object, even if this is not semantically correct.
Figure 8: **Effect of coarse-to-fine training.** The top row of each pair is generated by optimizing all levels of a multi-resolution 3D feature grid from the first optimization step, whereas the bottom row is optimized in a coarse-to-fine manner.
Figure 7: **A visualization of the effect of single-image textual inversion on reconstruction quality.** In each pair of rows, the top row shows the result of utilizing a standard text prompt for our diffusion-model-based loss (_e.g_. βAn image of a statue of a catβ). The bottom row shows the result of utilizing a text prompt optimized for the input image in a fully-automatic manner; this textual inversion process dramatically improves object reconstruction.
### Analysis and Ablations
One of the key components of RealFusion is our use of single-image textual inversion, which allows the model to correctly imagine novel views of a specific object. Figure 7 shows that this component indeed plays a critical role in the quality of the reconstructions. Without textual inversion, the model often reconstructs the backside of the object in the form of a generic instance from the object category. For example, the backside of the cat statue in the top row of Fig. 7 is essentially a different statue of a more generic-looking cat, whereas the reconstruction obtained with textual inversion resembles the true object from all angles.
Other components of the model are also significant. Figure 9 shows that the normal smoothness regularizer of Eq. (5) results in smoother, more realistic meshes and reduces the number of artifacts. Figure 8 shows that coarse-to-fine optimization reduces the presence of low-level artifacts and results in smoother, visually pleasing surfaces. Fig. 10 shows that using Stable Diffusion works significantly better than relying on an alternative such as CLIP.
## 5 Conclusions
We have introduced RealFusion, a new approach to obtain full 360\({}^{\circ}\) photographic reconstructions of any object given a single image of it. Given an off-the-shelf diffusion model trained using only 2D images and no special supervision for 3D reconstruction, as well as a single view of the target object, we have shown how to select the model prompt to imagine other views of the object. We have used this conditional prior to learn an efficient, multi-scale radiance field representation of the reconstructed object, incorporating an additional regularizer to smooth out the reconstructed surface. The resulting method can generate plausible 3D reconstructions of objects captured in the wild which are faithful to the input image. Future works include specializing the diffusion model for the task of new-view synthesis and incorporating dynamics to reconstruct animated 3D scenes.
**Ethics.** We use the CO3D dataset in a manner compatible with their terms. CO3D does not contain personal information. Please see [https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html](https://www.robots.ox.ac.uk/~vedaldi/research/union/ethics.html) for further information on ethics.
**Acknowledgments.** L. M. K. is supported by the Rhodes Trust. A. V., I. L. and C.R. are supported by ERC-UNIONCoG-101001212. C. R. is also supported by VisualAI EP/T028572/1.
|
2308.05553 | Prikry type forcings and the BukovskΓ½-Dehornoy phenomena | This paper is meant to present in a coherent way several instances of quite
common phenomena that were first identified (independently) by Bukovsk\'y and
Dehornoy. We present the basic result for Prikry type forcing and show how to
extend it to the Gitik-Sharon forcing, the Extender Based Prikry forcing,
Prikry forcings with interleaved collapses and Radin forcing for $o(\kappa) <
\kappa^+$. | Yair Hayut | 2023-08-10T13:08:16Z | http://arxiv.org/abs/2308.05553v1 | # Prikry type forcings and the Bukovsky-Dehornoy phenomena
###### Abstract.
This paper is meant to present in a coherent way several instances of quite common phenomena that were first identified (independently) by Bukovsky and Dehornoy. We present the basic result for Prikry type forcing and show how to extend it to the Gitik-Sharon forcing, the Extender Based Prikry forcing, Prikry forcings with interleaved collapses and Radin forcing for \(o(\kappa)<\kappa^{+}\).
This document is based on tutorial lectures that were given in Torino, at the 8th European Set Theory Conference. This research was supported by the Israel Science Foundation 1967/21.
In Section 2, I derive the results of Gitik-Sharon [16], using the Bukovsky-Dehornoy approach. In Section 3 I derive a Bukovsky-Dehornoy Theorem for the Extender Based Prikry forcing and for Prikry forcing with interleaved collapses.
In Section 4 we derive a variant of the Bukovsky-Dehornoy Theorem for the Magidor and Radin forcing. This theorem seems to be folklore as well, but I show there how to derive the main properties of Radin forcing using this approach. The main result of the last section is the _failure_ of the Bukovsky-Dehornoy Theorem for Radin forcing with \(o(\kappa)=\kappa^{+}\).
## 1. Prikry forcing and the Bukovsky-Dehornoy Theorem
In this section we will present the Prikry forcing from the perspective of the Bukovsky-Dehornoy theorem. Instead of designing a forcing notion and analysing its properties, our goal is to construct a pair of models of ZFC with the same cardinals, \(N_{0}\subseteq N_{1}\), such that there is an \(N_{0}\)-regular cardinal which is singular in \(N_{1}\).
### Iterated ultrapowers
Recall the definition of a measure on \(\kappa\).
**Definition 1**.: _Let \(\kappa\) be a regular uncountable cardinal._
* _An ultrafilter_ \(U\subseteq\mathcal{P}(\kappa)\) _is_ \(\kappa\)_-complete if every intersection of_ \(<\kappa\) _sets from_ \(U\) _is in_ \(U\)_._
* _We say that_ \(U\subseteq\mathcal{P}(\kappa)\) _is a_ normal measure _if_ \(U\) _is an ultrafilter that contains all co-bounded sets and for every_ \(f\colon A\to\kappa\) _such that_ \(A\in U\) _and_ \(f(\alpha)<\alpha\) _for all_ \(\alpha\in A\)_, there is_ \(\gamma\in\operatorname{range}f\)_, such that_ \(f^{-1}(\{\gamma\})\in U\)_._
Note that every normal measure is \(\kappa\)-complete.
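To see this, suppose \(\langle A_{i}\mid i<\gamma\rangle\subseteq U\) with \(\gamma<\kappa\) but \(\bigcap_{i<\gamma}A_{i}\notin U\). Then \(B=\{\alpha<\kappa\mid\exists i<\gamma,\ \alpha\notin A_{i}\}\in U\), and since \(U\) contains the co-bounded sets, also \(B\setminus(\gamma+1)\in U\). The function \(f(\alpha)=\min\{i<\gamma\mid\alpha\notin A_{i}\}\) is regressive on \(B\setminus(\gamma+1)\), so by normality it is constant, say with value \(i_{0}\), on a set in \(U\); that set is disjoint from \(A_{i_{0}}\in U\), a contradiction.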
**Lemma 2** (Scott, [26]).: _An ultrafilter \(U\) on \(\kappa\) is normal if and only if there is an elementary embedding \(j\colon V\to M\), such that \(M\) is well founded and \(\operatorname{crit}j=\kappa\), and \(U=\{X\subseteq\kappa\mid\kappa\in j(X)\}\)._
Throughout this paper, I will try to replace (as much as possible) the combinatorial definitions of the objects involved, such as measures and extenders, with properties of elementary embeddings. This fits better with the Bukovsky-Dehornoy theorem as well as with some recent works of Merimovich, for example [22]. Nevertheless, we need to know that all our elementary embeddings have a combinatorial definition. This is important in order to be able to iterate them as well as to control their continuity points.
**Definition 3** (Kunen, [19]).: _Let \(U\) be a \(\kappa\)-complete measure in a model \(V\). Let us define by induction an iteration of the ultrapower embedding._
\[\begin{array}{rll}V&=&M_{0}&j_{0,0}=id\\ M_{n+1}&=&\operatorname{Ult}(M_{n},j_{n}(U)),&j_{n,n+1}\colon M_{n}\to M_{n+1 }\\ &\vdots&\\ M_{\omega}&=&\lim\langle M_{n},j_{n,m}\mid n\leq m<\omega\rangle&j_{n,\omega} \colon M_{n}\to M_{\omega}\end{array}\]
_Where \(\lim\) indicates the direct limit of the directed system of embeddings._
**Theorem 4** (Gaifman, [13]).: \(M_{\omega}\) _is well founded._
Let us assume that \(U\) is normal, for simplicity. Let \(\kappa_{n}=j_{n}(\kappa)\) for \(n\leq\omega\). Let \(P=\langle\kappa_{n}\mid n<\omega\rangle\).
Let us look at \(M_{\omega}[P]\) -- the least ZFC-model containing \(M_{\omega}\) and \(P\). This model can be defined by \(\bigcup_{\alpha\in\beta\in\operatorname{Ord}}L_{\beta}(M_{\alpha}\cup\{P\})\), but as the following theorems will show us, it can also be defined without referring to the \(L(A)\) construction. We will show that \(M_{\omega}\) and \(M_{\omega}[P]\) have the same cardinals and \(\kappa_{\omega}\) is regular in \(M_{\omega}\) and singular in \(M_{\omega}[P]\).
Let us start with a few simple facts about iterated ultrapowers by a measure on \(\kappa\).
**Claim 5**.:
1. _For every element_ \(x\in M_{n}\)_, there is_ \(f\in V\)_,_ \(f\colon\kappa^{n}\to V\) _such that_ \(x=j_{n}(f)(\kappa_{0},\ldots,\kappa_{n-1})\)_._
2. _For every_ \(x\in M_{\omega}\) _there is_ \(n<\omega\) _and_ \(\bar{x}\in M_{n}\) _such that_ \(x=j_{n,\omega}(\bar{x})\)_. In particular, there is_ \(f\colon\kappa^{n}\to V\) _such that_ \(x=j_{\omega}(f)(\kappa_{0},\ldots,\kappa_{n-1})\)_._
3. \(\operatorname{crit}j_{n,m}=\kappa_{n}\) _for all_ \(n<m\leq\omega\)_._
4. \(\kappa_{\omega}=\sup\kappa_{n}\)_._
**Definition 6**.: _An ordinal \(\alpha\) is a continuity point of an elementary embedding \(j\), if \(j(\alpha)=\sup j\,"\,\alpha\). An ordinal \(\alpha\) is a fixed point of \(j\), if \(j(\alpha)=\alpha\)._
Since \(j(\beta)\geq\beta\) for all \(\beta\), every fixed point is a continuity point.
**Claim 7**.: _Let \(n<\omega\). Every regular cardinal in \(M_{n}\) which is not \(\kappa_{n}\) is a continuity point of \(j_{n,m}\) for all \(n<m\leq\omega\)._
Proof.: Let \(j=j_{n,m}\). If \(\lambda<\kappa_{n}=\operatorname{crit}j\), then it is a fixed point. Otherwise, let \(\alpha<j(\lambda)\). Then, there is a function \(f\in M_{n}\) from some finite power of \(\kappa_{n}\) to \(\lambda\) representing \(\alpha\). But then, taking \(\gamma=\sup\operatorname{range}f<\lambda\) (using the regularity of \(\lambda\)), we get \(\alpha\leq j(\gamma)<j(\lambda)\) by Łoś's theorem.
The embeddings \(j_{n,\omega}\) as well as the models \(M_{m}\) for \(m\geq n\) are definable in \(M_{n}\). In particular, \(M_{\omega}\subseteq M_{n}\) as a definable subclass and \(P\in M_{n}\). Therefore, \(M_{\omega}[P]\subseteq M_{n}\). We conclude that:
**Lemma 8**.: \(M_{\omega}[P]\subseteq\bigcap M_{n}\)_._
From this lemma, let us conclude that \(M_{\omega}\) and \(M_{\omega}[P]\) have the same cardinals.
**Lemma 9**.: \(M_{\omega}[P]\cap V_{\kappa_{\omega}}\subseteq M_{\omega}\)_._
Proof.: Pick \(X\in M_{\omega}[P]\cap V_{\kappa_{\omega}}\) and let \(n\) be large enough so that \(\kappa_{n}>\operatorname{rank}X\). Then, \(X\in M_{n}\) and therefore \(X=j_{n,\omega}(X)\in M_{\omega}\).
**Lemma 10**.: _If \(\lambda\neq\kappa_{\omega}\) is a regular cardinal in \(M_{\omega}\), then it remains regular in \(M_{\omega}[P]\)._
Proof.: Let \(n<\omega\) be large enough so that both \(\lambda\) and \(\rho=\operatorname{cf}^{M_{\omega}[P]}(\lambda)\) are in the range of \(j_{n,\omega}\), and let \(\bar{\lambda},\bar{\rho}\) be their pre-images. So \(\bar{\lambda}\) is regular in \(M_{n}\) and \(\bar{\rho}\leq\bar{\lambda}\). Note that \(\rho\neq\kappa_{\omega}\), which is singular in \(M_{\omega}[P]\), and therefore \(\bar{\rho}\neq\kappa_{n}\).
Since \(\bar{\lambda},\bar{\rho}\) are regular cardinals in \(M_{n}\), and different than \(\kappa_{n}\), by Lemma 7 they are continuity points of the embedding \(j_{n,\omega}\):
\[\lambda =j_{n,\omega}(\bar{\lambda})=\sup\{j_{n,\omega}(\alpha)\mid\alpha< \bar{\lambda}\},\] \[\rho =j_{n,\omega}(\bar{\rho})=\sup\{j_{n,\omega}(\alpha)\mid\alpha< \bar{\rho}\}.\]
Thus,
\[\bar{\rho}=\mathrm{cf}^{M_{n}}(\rho)=\mathrm{cf}^{M_{n}}(\mathrm{cf}^{M_{\omega}[P ]}\,\lambda)=\mathrm{cf}^{M_{n}}\,\lambda=\bar{\lambda}.\]
Hence \(\rho=j_{n,\omega}(\bar{\rho})=j_{n,\omega}(\bar{\lambda})=\lambda\), so \(\lambda\) remains regular in \(M_{\omega}[P]\).
So, we conclude that \(M_{\omega}\) and \(M_{\omega}[P]\) have the same cardinals, and \(P\) witnesses the singularity of \(\kappa_{\omega}\) in \(M_{\omega}[P]\).
We would like to show that \(M_{\omega}[P]\) is a generic extension of \(M_{\omega}\) (without pointing on an explicit forcing notion), and get some general information about the properties of the forcing.
**Lemma 11**.: \(M_{\omega}[P]\) _is a generic extension of \(M_{\omega}\) using a \(\kappa_{\omega}^{+}\)-c.c. forcing notion._
Proof.: By Bukovsky Theorem, [5], this is equivalent to the following statement: For every \(f\colon\alpha\to\lambda\), \(f\in M_{\omega}[P]\), there is \(g\colon\alpha\to\mathcal{P}(\lambda)\) in \(M_{\omega}\) such that \(\forall\zeta<\alpha,f(\zeta)\in g(\zeta)\) and \(|g(\zeta)|\leq\kappa_{\omega}\).
Using Lemma 8, we know that \(M_{\omega}[P]\subseteq\bigcap M_{n}\). Let \(f\in\bigcap M_{n}\) be a function from \(\alpha\) to \(\lambda\) for some ordinals \(\alpha,\lambda\). Let us define for each \(n\) a function \(f_{n}\) such that if \(\zeta=j_{n,\omega}(\bar{\zeta}),f(\zeta)=j_{n,\omega}(\bar{\xi})\) we let \(f_{n}(\bar{\zeta})=\bar{\xi}\) (so \(f_{n}\) is a partial function).
Let \(g_{n}\colon\kappa^{n}\to V\) represent \(f_{n}\), so \(j_{n}(g_{n})(\kappa_{0},\ldots,\kappa_{n-1})=f_{n}\). Finally, let:
\[\bar{g}(\zeta)=\{g_{n}(\eta)(\zeta)\mid n<\omega,\eta\in\kappa^{n},\zeta\in \mathrm{dom}\,g_{n}(\eta)\},\]
and let \(g=j_{\omega}(\bar{g})\). Let us verify that for every \(\zeta<\alpha\), \(f(\zeta)\in g(\zeta)\).
Let \(n\) be large enough so that \(\zeta,f(\zeta)\) are in the range of \(j_{n,\omega}\) and in particular the pre-image of \(\zeta\) under \(j_{n,\omega}\) is in the domain of \(f_{n}\). In particular,
\[f(\zeta)=j_{n,\omega}(f_{n}(\bar{\zeta}))\in j_{n,\omega}\left(\{j_{n}(g_{n})(\eta)(\bar{\zeta})\mid\eta\in j_{n}(\kappa)^{n}\}\right)\subseteq j_{\omega}(\bar{g})(\zeta).\]
Since \(|\bar{g}(\zeta)|\leq\kappa\) for all \(\zeta\), the result follows.
**Remark 12**.: _Let us consider an arbitrary iteration of measures, so we let \(V=M_{0}\) and \(M_{n+1}=\mathrm{Ult}(M_{n},\mathcal{V}_{n})\), with a direct limit \(M_{\omega}\). Any model of \(\mathrm{ZFC}\) between \(M_{\omega}\) and \(\bigcap M_{n}\) is a \(j_{\omega}(\lambda)\)-c.c. extension of \(M_{\omega}\), where \(\lambda\) bounds the size of the sets of the measures. More precisely, if the width of the embedding \(j_{n}\) is \(<j_{n}(\lambda)\), then the forcing is \(j_{\omega}(\lambda)\)-c.c.1_
Footnote 1: Recall that the width of an elementary embedding \(j\colon M\to N\) between two models of ZFC is \(\leq\mu\) iff for every \(x\in N\) there is a set \(a\in M\) with \(|a|\leq\mu\) such that \(x\in j(a)\). Equivalently, if for every \(x\in N\) there is \(f\in M\), \(|\,\mathrm{dom}\,f|\leq\mu\) and \(a\in N\) such that \(j(f)(a)=x\).
Note that while this approach allows us to compute the chain condition of the forcing quite easily, it is still unclear whether one can deduce that the size of the corresponding forcing is \(\leq 2^{\kappa}\).
**Question 13**.: _Can we show abstractly, without referring to the Prikry forcing, that \(M_{\omega}[P]\) is a generic extension of \(M_{\omega}\) using a forcing notion of cardinality \(\leq 2^{\kappa_{\omega}}\)?_
So far our arguments only used the inclusion \(M_{\omega}[P]\subseteq\bigcap M_{n}\), and applied absoluteness. In general, the intersection of models of ZFC does not have to satisfy even \(\mathrm{ZF}\). In this case, the model \(\bigcap M_{n}\) is definable in every \(M_{n}\), so we can abstractly get more information, using the methods of Set Theoretic Geology, [12, Lemma 21].
Fortunately, we do not need to analyse this model as, surprisingly, the Bukovsky-Dehornoy Theorem shows that the intersection model is the minimal possible model:
**Theorem 14** (Bukovsky-Dehornoy, [6, 8]).: \(\bigcap M_{n}=M_{\omega}[P]\)_._
We include the proof as its main ideas are going to repeat throughout the other, more complicated, cases.
Proof.: The proof consists of two steps. In the first step we show that the model \(M_{\omega}[P]\) is closed under countable sequences of ordinals. In the second step we use the elementary embeddings in order to approximate an arbitrary set of ordinals from \(\bigcap M_{n}\) using a countable sequence of sets in \(M_{\omega}\). From this approximation, \(M_{\omega}[P]\) can compute \(X\).
**Lemma 15**.: \(M_{\omega}[P]\) _is closed under countable sequences of ordinals._
Proof.: Let \(\langle\alpha_{n}\mid n<\omega\rangle\) be a sequence of ordinals. Fix, in \(V\), a countable sequence of functions \(f_{n}\colon\kappa^{m_{n}}\to\operatorname{Ord}\), such that \(j_{\omega}(f_{n})(\kappa_{0},\ldots,\kappa_{m_{n}-1})=\alpha_{n}\). Let \(\vec{f}=\langle f_{n}\mid n<\omega\rangle\).
Then \(j_{\omega}(\vec{f})\in M_{\omega}\). Therefore,
\[\langle\alpha_{n}\mid n<\omega\rangle=\langle j_{\omega}(f_{n})(P\upharpoonright m _{n})\mid n<\omega\rangle\in M_{\omega}[P].\]
**Lemma 16**.: _Let \(X\in\bigcap M_{n}\) be a set of ordinals. Then \(X\in M_{\omega}[P]\)._
Proof.: Let
\[Y_{n}=\{\alpha\mid j_{n,\omega}(\alpha)\in X\},\quad Z_{n}=j_{n,\omega}(Y_{n}).\]
By the \(\sigma\)-closure (Lemma 15), \(\langle Z_{n}\mid n<\omega\rangle\in M_{\omega}[P]\). Now,
\[\beta\in X\iff\forall^{*}n\beta\in Z_{n},\]
where \(\forall^{*}\) means for all large \(n\). Indeed, if \(\beta\in\operatorname{range}j_{n,\omega}\) then \(\beta\in X\iff\beta\in Z_{n}\), and every ordinal \(\beta\) is in the range of \(j_{n,\omega}\) for all sufficiently large \(n\).
From the proof one can extract a more concrete definition for the model \(M_{\omega}[P]\):
**Proposition 17**.: \(M_{\omega}[P]\) _is the least transitive class \(C\) that contains \(M_{\omega}\cup\{P\}\) and is closed under the operations:_
1. _If_ \(\vec{g}=\langle g_{n}\mid n<\omega\rangle\in C\) _such that_ \(\operatorname{dom}g_{n}=\kappa_{\omega}^{n}\) _then_ \(\langle g_{n}(P\upharpoonright n)\mid n<\omega\rangle\) _is in_ \(C\)_._
2. _If_ \(\langle X_{n}\mid n<\omega\rangle\in C\) _then_ \(X=\{z\mid\forall^{*}n,z\in X_{n}\}\) _is in_ \(C\)_._
3. _If_ \(X=\langle A,E\rangle\in C\) _and_ \(E\) _is well founded and extensional, then the Mostowski collapse of_ \(X\) _is in_ \(C\)_._
## 2. Gitik-Sharon forcing
Let us consider now the diagonal supercompact forcing, introduced by Gitik and Sharon in [16]. This forcing uses a stronger type of large cardinal axiom: supercompactness.
**Definition 18**.: _Let \(\kappa\leq\lambda\) be cardinals. Let \(P_{\kappa}\lambda\) be the set of all subsets of \(\lambda\) of cardinality \(<\kappa\)._
_A \(\kappa\)-complete ultrafilter \(\mathcal{U}\) on \(P_{\kappa}\lambda\) is fine if for every \(\alpha\in\lambda\), the set \(\{x\in P_{\kappa}\lambda\mid\alpha\in x\}\in\mathcal{U}\). \(\mathcal{U}\) is normal if it is fine and for every choice function \(f\colon A\to\lambda\), where \(A\in\mathcal{U}\), there is an \(\mathcal{U}\)-large set \(B\subseteq A\), such that \(f\upharpoonright B\) is constant._
**Lemma 19** (Reinhardt, Magidor).: _There is a normal measure on \(P_{\kappa}\lambda\) if and only if there is an elementary embedding \(j\colon V\to M\), with critical point \(\kappa\) such that \(M\) is closed under \(\lambda\)-sequences and \(j(\kappa)>\lambda\)._
Let us recall that given an elementary embedding \(j\colon V\to M\), such that \(\operatorname{crit}j=\kappa\), \(j(\kappa)>\lambda\) and \({}^{\lambda}M\subseteq M\), the ultrafilter \(\mathcal{U}=\{X\subseteq P_{\kappa}\lambda\mid j\,"\,\lambda\in j(X)\}\) is a normal measure on \(P_{\kappa}\lambda\). In other words, the _seed_ of \(\mathcal{U}\) is \(j\,"\,\lambda\).
**Remark 20**.: _Throughout this section we will always assume that \(\kappa\) carries a sequence of normal measures \(\mathcal{V}_{n}\) on \(P_{\kappa}\kappa^{+n}\). Except for a couple of changes in the notations, there is no harm in reducing the hypothesis to \(\mathcal{V}_{n}\) being \(\kappa\)-complete fine measure on \(P_{\kappa}\kappa^{+n}\)._
We would like to define an iteration using this sequence of supercompact measures.
\[\begin{array}{ccc}V&=&M_{0}&j_{0,0}=id\\ M_{n+1}&=&\operatorname{Ult}(M_{n},j_{n}(\mathcal{V}_{n})),&j_{n,n+1}\colon M _{n}\to M_{n+1}\\ &\vdots&\\ M_{\omega}&=&\lim\langle M_{n},j_{n,m}\mid n\leq m<\omega\rangle&j_{n,\omega} \colon M_{n}\to M_{\omega}\end{array}\]
Either Gaifman's arguments, or the modern version using the completeness of the measures show that:
**Lemma 21**.: \(M_{\omega}\) _is well founded._
As usual, we let \(j_{n}=j_{0,n}\) for \(n\leq\omega\).
Unlike the case of the Prikry forcing, this time the seeds of the embeddings are moved by the later steps of the iterations. Thus, we need to define the sequence \(P\) in a more precise way.
Let
\[p_{n}=j_{n+1,\omega}\left(j_{n,n+1}\,"\,j_{n}(\kappa^{+n})\right)=j_{n,\omega} \,"\,j_{n}(\kappa^{+n})\]
and let \(P=\langle p_{n}\mid n<\omega\rangle\).
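Note that the two expressions for \(p_{n}\) agree: in \(M_{n+1}\), the seed \(j_{n,n+1}\,"\,j_{n}(\kappa^{+n})\) has cardinality smaller than \(j_{n+1}(\kappa)=\operatorname{crit}j_{n+1,\omega}\), so \(j_{n+1,\omega}\) acts on it pointwise, giving
\[j_{n+1,\omega}\left(j_{n,n+1}\,"\,j_{n}(\kappa^{+n})\right)=j_{n+1,\omega}\,"\,j_{n,n+1}\,"\,j_{n}(\kappa^{+n})=j_{n,\omega}\,"\,j_{n}(\kappa^{+n}).\]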
Let us collect a couple of useful properties of the iteration.
**Lemma 22**.: _For every \(n<m\leq\omega\), \(\operatorname{crit}j_{n,m}=j_{n}(\kappa)\)._
**Lemma 23**.: _For every \(x\in M_{n}\) there is a function \(f\colon\prod_{m<n}P_{\kappa}\kappa^{+m}\to V\) such that_
\[x=j_{n}(f)(\bar{p}_{0},\ldots,\bar{p}_{n-1}),\]
_where \(\bar{p}_{m}=j_{m,n}\,"\,j_{m}(\kappa^{+m})\). In particular, for every \(x\in M_{\omega}\) there is a natural number \(n\) and \(f\colon\prod_{m<n}P_{\kappa}\kappa^{+m}\to V\) such that \(x=j_{\omega}(f)(p_{0},\ldots,p_{n-1})\)._
**Lemma 24**.: _Let \(\lambda\) be a regular cardinal in \(M_{n}\), \(\lambda\notin[j_{n}(\kappa),j_{n}(\kappa^{+\omega}))\). Then, \(\lambda\) is a continuity point of \(j_{n,\omega}\)._
Proof.: This is a corollary of Lemma 22 and Lemma 23. Let us prove the lemma for \(n=0\). The general case is similar, with slightly more complicated notations. If \(\lambda<\kappa\) then \(\lambda\) is a fixed point of \(j_{\omega}\). If \(\lambda>\kappa^{+\omega}\), then let \(\alpha<j_{\omega}(\lambda)\). So, \(\alpha\) is represented by some function \(f\) as above. But \(\left|\,\mathrm{dom}\,f\right|\leq\kappa^{+n}<\lambda\), so \(\sup\operatorname{range}f=\gamma<\lambda\), and therefore \(\alpha\leq j_{\omega}(\gamma)<j_{\omega}(\lambda)\).
So, the proof of Lemma 10 goes throughout unchanged and we conclude that every \(M_{\omega}\)-regular cardinal \(\lambda\) which is not in the interval \([j_{\omega}(\kappa),j_{\omega}(\kappa^{+\omega}))\) remains regular in \(M_{\omega}[P]\).
**Lemma 25**.: _In \(M_{\omega}[P]\), \(\forall n<\omega\), \(\operatorname{cf}j_{\omega}(\kappa^{+n})=\omega\)._
Proof.: Look at \(\alpha_{m}=\sup j_{m,\omega}\,"\,j_{m}(\kappa^{+n})\). For every \(m\geq n\), this is an ordinal below \(j_{\omega}(\kappa^{+n})\), as \(j_{m}(\kappa^{+n})\) is a discontinuity point of \(j_{m,m+1}\). But, for every ordinal \(\gamma<j_{\omega}(\kappa^{+n})\) there is some large enough \(m\) such that \(\gamma\in\operatorname{range}j_{m,\omega}\) and thus \(\gamma<\alpha_{m}\). Therefore, \(\sup_{m\geq n}\alpha_{m}=j_{\omega}(\kappa^{+n})\).
The sequence \(\langle\alpha_{m}\mid m<\omega\rangle\) can be computed from \(P\): \(\alpha_{m}=\sup\left(p_{m}\cap j_{\omega}(\kappa^{+n})\right)\).
Indeed, \(j_{\omega}(\kappa^{+\omega})=\bigcup_{n<\omega}p_{n}\) is a countable union of sets of order type smaller than \(j_{\omega}(\kappa)\), so in \(M_{\omega}[P]\) every cardinal in the interval \((j_{\omega}(\kappa),j_{\omega}(\kappa^{+\omega})]\) is collapsed, while \(j_{\omega}(\kappa^{+\omega+1})\) remains regular, being outside the interval \([j_{\omega}(\kappa),j_{\omega}(\kappa^{+\omega}))\). We conclude that the successor of \(j_{\omega}(\kappa)\) in \(M_{\omega}[P]\) is \(j_{\omega}(\kappa^{+\omega+1})\).
By Remark 12, \(M_{\omega}[P]\) is a \(j_{\omega}(\kappa^{+\omega+1})\)-c.c. generic extension of \(M_{\omega}\).
We are now ready for the proof of the Bukovsky-Dehornoy Theorem for the Gitik-Sharon forcing. The proof is essentially the same as the one for the Prikry forcing, so we will sketch it.
**Theorem 26**.: \(M_{\omega}[P]=\bigcap M_{n}\)_._
Proof.: First, we need to show that \(M_{\omega}[P]\) is closed under countable sequences of ordinals. Indeed, if \(\langle\alpha_{n}\mid n<\omega\rangle\) is a countable sequence of ordinals, then for each \(n\), pick a function \(g_{n}\colon\prod_{m<m_{n}}P_{\kappa}\kappa^{+m}\to\operatorname{Ord}\) such that
\[\alpha_{n}=j_{\omega}(g_{n})(p_{0},\ldots,p_{m_{n}-1}),\]
using Lemma 23. Then, since \(j_{\omega}(\langle g_{n}\mid n<\omega\rangle)=\langle j_{\omega}(g_{n})\mid n <\omega\rangle\in M_{\omega}\), we get that:
\[\langle\alpha_{n}\mid n<\omega\rangle=\langle j_{\omega}(g_{n})(P\upharpoonright m _{n})\mid n<\omega\rangle\in M_{\omega}[P].\]
Next, given a set of ordinals \(X\in\bigcap M_{n}\), let us define the sets:
\[Y_{n}=\{\alpha\mid j_{n,\omega}(\alpha)\in X\}\quad,Z_{n}=j_{n,\omega}(Y_{n})\]
and verify that \(\alpha\in X\) iff for all large \(n\), \(\alpha\in Z_{n}\).
One of the interesting features of the Gitik-Sharon forcing relates to the behavior of \(\kappa^{+\omega+1}\). In particular, if \(\kappa^{+\omega}\) is a strong limit cardinal then it is a fixed point of \(j_{\omega}\). Thus, under suitable cardinal arithmetic assumptions, the Gitik-Sharon forcing cannot introduce certain sufficiently absolute objects.
Let us start with a simple observation.
**Claim 27**.: _Let us assume that \(\kappa^{+\omega}\) is a strong limit cardinal._
_There is no special Aronszajn tree on \(\kappa^{+\omega+1}\) in \(V\) if and only if there is no special Aronszajn tree on the successor of \(j_{\omega}(\kappa)\) in \(M_{\omega}[P]\)._
Proof.: Let us assume that there is such a tree \(T\) in \(M_{\omega}[P]\). So, this is a tree of height \(\left(j_{\omega}(\kappa)^{+}\right)^{M_{\omega}[P]}=\kappa^{+\omega+1}\) such that there is a function \(f\colon T\to j_{\omega}(\kappa)\) which is injective on chains. Since \(M_{\omega}[P]\subseteq V\), \(T,f\in V\) and since \(\left|j_{\omega}(\kappa)\right|^{V}=\kappa^{+\omega}\), \(T\) is a special Aronszajn tree in \(V\).
The other direction is similar: if \(T\) is a special Aronszajn tree in \(V\), then \(j_{\omega}(T)\) is a special Aronszajn tree in \(M_{\omega}\). But being special is upwards absolute, so it is special in \(M_{\omega}[P]\) as well.
Theorem 26 implies that \(M_{\omega}[P]\) is closed under \(\kappa\)-sequences with respect to \(V\), since it is an intersection of models which are closed under \(\kappa\)-sequences of ordinals. The following claim (and its variant) was used in [10] in order to derive instances of Chang's Conjecture in the Gitik-Sharon extension.
**Claim 28**.: _Let \(\kappa^{+\omega}\) be a strong limit. Let us assume that Chang's Conjecture \((\kappa^{+\omega+1},\kappa^{+\omega})\twoheadrightarrow(\rho^{+},\rho)\) holds for some \(\rho<\kappa\)._
_Then \(M_{\omega}[P]\models(j_{\omega}(\kappa)^{+},j_{\omega}(\kappa))\twoheadrightarrow( \rho^{+},\rho)\)._
Proof.: Let \(\mathcal{A}\) be an algebra in \(M_{\omega}[P]\) on \(j_{\omega}(\kappa)^{+}=\kappa^{+\omega+1}\). Then, by Chang's Conjecture in \(V\), there is \(\mathcal{B}\prec\mathcal{A}\) of order type \(\rho^{+}\). But \(\mathcal{B}\in M_{\omega}[P]\), by the closure of the model.
Let us remark that the hypothesis of the claim follows from the assumption that \(\kappa\) is \(\kappa^{+\omega+1}\)-supercompact.
Claim 28 fails if \(\kappa^{+\omega}\) is not a strong limit. In this case, the Gitik-Sharon forcing adds a good scale (see Definition 32), which implies that every instance of Chang's Conjecture as in the claim does not hold.
### Scales
This subsection deals with the applications of the Bukovsky-Dehornoy Theorem to the behavior of scales at a Prikry type extension. This type of application was used in [4], and the basic ideas are taken from there. Here we apply it for the Gitik-Sharon forcing, in order to derive the preservation of a bad scale in the generic extension by the Gitik-Sharon forcing.
The applications in this subsection use only the fact that the generic extension \(M_{\omega}[P]\) is closed under countable sequences.
Scales are one of the basic objects in the Shelah's PCF theory, see [28, 1]. Let us present a special case.
**Definition 29** (Shelah).: _Let \(\lambda\) be a singular cardinal of countable cofinality and let \(\langle\lambda_{n}\mid n<\omega\rangle\) be a sequence of regular cardinals converging to \(\lambda\). A scale on \(\prod\lambda_{n}\) is a sequence of functions \(\langle f_{\alpha}\mid\alpha<\mu\rangle\), \(f_{\alpha}\in\prod\lambda_{n}\) such that:_
* _For all_ \(\alpha<\beta\)_,_ \(\{n\mid f_{\alpha}(n)\leq f_{\beta}(n)\}\) _is co-finite._
* _For all_ \(g\in\prod\lambda_{n}\)_, there is_ \(\alpha<\mu\) _such that_ \(\{n\mid g(n)\leq f_{\alpha}(n)\}\) _is co-finite._
We will denote the assertion "\(\{n\mid f(n)\leq g(n)\}\) is co-finite" by \(f\leq^{*}g\), and we call the set of values \(n\) such that \(f(n)>g(n)\) the set of violations of the inequality.
It is clear that every product carries a scale of some length.
**Lemma 30** (Shelah).: _The lengths of any two scales on the same product have the same cofinality._
In particular, it makes sense to assume that the length of the scale is always a regular cardinal.
**Theorem 31** (Shelah).: _For every singular cardinal \(\lambda\) of countable cofinality, there is a sequence \(\langle\lambda_{n}\mid n<\omega\rangle\) of regular cardinals and a scale \(\langle f_{\alpha}\mid\alpha<\lambda^{+}\rangle\) on \(\prod\lambda_{n}\)._
The notion of scales is much wider than this limited definition. For our purposes, we would like to focus on scales of minimal length, and consider ones with a better behaviour.
**Definition 32**.: _Let \(\mathcal{S}=\langle f_{\alpha}\mid\alpha<\lambda^{+}\rangle\) be scale. An ordinal \(\beta<\lambda^{+}\) of uncountable cofinality is good if there is \(n<\omega\) and \(A\subseteq\beta\) cofinal such that for every \(\alpha_{0}<\alpha_{1}\) in \(A\), \(\{m\mid f_{\alpha_{0}}(m)>f_{\alpha_{1}}(m)\}\subseteq n\)._
_An ordinal \(\beta\) is bad, if it is not good._
_A scale \(\mathcal{S}\) is a good if there are club many good points. A scale is bad if it has stationarily many bad ordinals._
The notion of bad (and good) points in a scale can be traced back to [27]. The connections between the notions of good scales to other anti-compactness principles such as square principles were summarized and investigated, for example, in the seminal paper [7].
**Remark 33** (Shelah).: _If \(\mathcal{S},\mathcal{S}^{\prime}\) are both scales on the same product of length \(\lambda^{+}\), then their sets of good ordinals agree up to a non-stationary error._
Proof.: Let \(\mathcal{S}=\langle f_{\alpha}\mid\alpha<\lambda^{+}\rangle,\mathcal{S}^{ \prime}=\langle f^{\prime}_{\alpha}\mid\alpha<\lambda^{+}\rangle\). Let \(C\) be the club of all ordinals \(\delta\) such that for all \(\alpha<\delta\), there is \(\alpha^{\prime}<\delta\) such that \(f_{\alpha}\leq^{*}f^{\prime}_{\alpha^{\prime}}\) and vice versa. Then, an ordinal \(\delta\in C\) is good in \(\mathcal{S}\) if and only if it is good in \(\mathcal{S}^{\prime}\): Take \(A\subseteq\delta\) witnessing \(\delta\) being good in \(\mathcal{S}\). Then, by induction, pick a sequence of ordinals \(\alpha_{i}<\alpha^{\prime}_{i}<\alpha_{i+1}<\cdots\) such that \(\alpha_{i}\in A\) for all \(i\) and \(f_{\alpha_{i}}\leq^{*}f^{\prime}_{\alpha^{\prime}_{i}}\leq^{*}f_{\alpha_{i+1}}\).
For every \(i\), there is a bound \(n_{i}\) for the violations in the inequalities \(f_{\alpha_{i}}\leq^{*}f^{\prime}_{\alpha^{\prime}_{i}}\) and \(f^{\prime}_{\alpha^{\prime}_{i}}\leq^{*}f_{\alpha_{i+1}}\). As \(\operatorname{cf}\delta>\omega\), there is some \(n_{*}\) such that \(A^{\prime}=\{\alpha^{\prime}_{i}\mid n_{i}=n_{*}\}\) is unbounded. So, \(A^{\prime}\) witnesses \(\delta\) being good for \(\mathcal{S}^{\prime}\).
In the paper [16], Gitik and Sharon solved Woodin's question of whether it is consistent that SCH fails at a singular cardinal while there is no weak square on its successor. They achieved that by showing that in the model obtained by forcing with the Gitik-Sharon forcing, starting with a supercompact cardinal \(\kappa\) such that \(2^{\kappa}>\kappa^{+\omega+1}\), there are bad scales on the successor of \(\kappa\). In our framework, this theorem translates to the following statement about \(M_{\omega}[P]\).
**Theorem 34** (Gitik-Sharon).: _Assume that in \(V\) there is a scale on \(\prod\kappa^{+n}\) in which the set of bad points \(S\) contains stationarily many ordinals of cofinality \(<\kappa\). Then, there is a bad scale in the successor of \(j_{\omega}(\kappa)\) in \(M_{\omega}[P]\)._
Proof.: Following the ideas from Claims 27 and 28, we would like to somehow use the scale from \(V\) in order to verify that a corresponding scale from \(M_{\omega}[P]\) cannot be good.
Let us note that if \(\kappa^{+\omega}\) is a strong limit, this is rather trivial, but we are mostly interested in the case that \(2^{\kappa}\) is large. In this case, \(\langle\kappa^{+n}\mid n<\omega\rangle\) is not necessarily cofinal at \(j_{\omega}(\kappa)\), as even \(j_{1}(\kappa)\) might be larger than all of those cardinals. Thus, we study the product of \(\mu_{n}=\left(j_{n}(\kappa^{+n+1})\right)^{M_{\omega}}\).
**Claim 35**.: _For every \(n<\omega\), \(\kappa^{+n+1}\) is a continuity point of \(j_{n}\)._
Note that \(\mu_{n}\) is a regular cardinal in \(M_{n}\), and by absoluteness, it is regular in \(M_{\omega}[P]\) as well. Moreover, since \(j_{n+1}(\kappa)>j_{n}(\kappa^{+n+1})\), \(\sup\mu_{n}=j_{\omega}(\kappa)\).
Pick, in \(M_{\omega}[P]\), a scale on \(\prod\mu_{n}\), \(\mathcal{T}=\langle g_{\alpha}\mid\alpha<\mu_{*}\rangle\). Even though \(\mu_{n}\) might not be regular in \(V\), the two defining properties of a scale (weakly increasing and cofinal) hold in \(V\) for \(\mathcal{T}\), since \(M_{\omega}[P]\) is closed under \(\omega\)-sequences of ordinals.
We would like to collapse \(\mathcal{T}\) to a scale in \(V\). Recall that for all \(n\), \(\sup j_{n}\,"\,\kappa^{+n+1}=\mu_{n}\). Let us define for every \(\alpha\),
\[h_{\alpha}(n)=\min\{\beta<\kappa^{+n+1}\mid g_{\alpha}(n)<j_{n}(\beta)\},\]
and let \(\bar{\mathcal{T}}\) be \(\langle h_{\alpha}\mid\alpha<\mu_{*}\rangle\).
**Claim 36**.: \(\bar{\mathcal{T}}\) _is a scale in \(V\) on \(\prod\kappa^{+n+1}\)._
Proof.: Let \(\alpha<\beta\). Since \(g_{\alpha}(n)\leq g_{\beta}(n)<j_{n}(h_{\beta}(n))\) for almost all \(n\), the minimality in the definition of \(h_{\alpha}\) gives \(h_{\alpha}(n)\leq h_{\beta}(n)\) for almost all \(n\).
Let \(h\) be arbitrary. Then, let us look at \(\tilde{h}\) defined by \(\tilde{h}(n)=j_{n}(h(n))\). Since \(M_{\omega}[P]\) is closed under countable sequences, \(\tilde{h}\in M_{\omega}[P]\). Therefore, there is \(\alpha\) such that \(\tilde{h}(n)\leq g_{\alpha}(n)\) for almost all \(n\). We conclude that \(h(n)\leq h_{\alpha}(n)\) for almost all \(n\).
In particular, \(\operatorname{cf}^{V}\mu_{*}=\kappa^{+\omega+1}\). Fix in \(V\) an arbitrary continuous sequence of ordinals \(\vec{\beta}=\langle\beta_{i}\mid i<\kappa^{+\omega+1}\rangle\) cofinal at \(\mu_{*}\), and let \(\bar{\mathcal{T}}=\langle h_{\beta_{i}}\mid i<\kappa^{+\omega+1}\rangle\). Fix a scale \(\mathcal{S}\) as in the hypothesis of the theorem. Let \(C\) be the club from the proof of Remark 33 for the scales \(\mathcal{S},\bar{\mathcal{T}}\).
**Claim 37**.: _For every \(\alpha\in C\) of cofinality \(<\kappa\), if \(\alpha\) is bad for \(\mathcal{S}\) in \(V\) then \(\beta_{\alpha}\) it is bad for \(\mathcal{T}\) in \(M_{\omega}[P]\)._
Proof.: Assume that this is not the case. Since \(\operatorname{cf}^{V}\alpha<\kappa\) and the sequence \(\vec{\beta}\) is continuous, \(\operatorname{cf}^{V}\beta_{\alpha}<\kappa\). By the closure of \(M_{\omega}[P]\), \(\operatorname{cf}^{M_{\omega}[P]}\beta_{\alpha}<\kappa\) and moreover, there is \(B\subseteq\beta_{\alpha}\) in \(M_{\omega}[P]\), cofinal and contained in \(\{\beta_{i}\mid i<\alpha\}\). Pick \(A\subseteq B\) witnessing \(\beta_{\alpha}\) being good for \(\mathcal{T}\).
Then, since violations of the \(h\)-inequalities are also violations of the corresponding \(g\)-inequalities, \(A\) also witnesses that \(\beta_{\alpha}\) is good for the scale \(\langle h_{\gamma}\mid\gamma<\mu_{*}\rangle\); re-indexing, \(\bar{A}=\{i\mid\beta_{i}\in A\}\) witnesses that \(\alpha\) is good for \(\bar{\mathcal{T}}\), and thus by Remark 33, \(\alpha\) is good for \(\mathcal{S}\), contradicting the assumption that \(\alpha\) is bad.
Finally, if \(D\) is a club in \(M_{\omega}[P]\) then \(D\in V\) and it is a closed and unbounded subset of the ordinal \(\mu_{*}\). We conclude that \(\{\alpha\mid\beta_{\alpha}\in D\}\in V\) is a club in \(\kappa^{+\omega+1}\). Together with the previous claim, we see that the set of bad points in \(\mathcal{T}\) has to be stationary in \(M_{\omega}[P]\).
The standard proofs of properties of scales in Prikry type extensions need some bounding lemmas. Looking closely into the arguments, one can identify the parallel parts: the bounding lemmas correspond to the construction of the scale \(\bar{\mathcal{T}}\) from \(\mathcal{T}\). Nevertheless, using the Bukovsky-Dehornoy method, we do not need to talk about names and use the strong Prikry Property in order to partially realize them.
Finally, we can prove:
**Theorem 38** (Gitik-Sharon).: _Let \(\kappa\) be a supercompact cardinal such that \(2^{\kappa}\geq\kappa^{+\omega+2}\). Then, there is a generic extension in which \(\kappa\) is a strong limit singular, there is a bad scale on \(\kappa^{+}\) and \(2^{\kappa}>\kappa^{+}\)._
Proof.: In \(M_{\omega}\), there is a forcing notion adding \(P\), by Bukovsky Theorem, and since \(M_{\omega}[P]\) satisfies the conclusion of the theorem (the failure of SCH and the existence of a bad scale), by the forcing theorem, there is a condition forcing that.
By elementarity, the same holds in \(V\): there is a forcing notion and a condition in it forcing the failure of SCH together with the existence of a bad scale on \(\kappa\).
By combining interleaved collapses in the Gitik-Sharon forcing, one can obtain a model in which SCH fails at \(\aleph_{\omega^{2}}\) and there is a bad scale on \(\aleph_{\omega^{2}}\). In the next section we will address the issue of adding collapses to a forcing notion in which the Bukovsky-Dehornoy Theorem holds, thus allowing us to obtain the full result.
## 3. Combining iterated ultrapowers with forcings
In this section, we will give a couple of examples for extensions of an iterated ultrapower, \(M_{\omega}\), using an object which can be obtained only in a generic extension of \(V\). While an additional level of complexity is added to the whole process, still a few key components are preserved. Our goal model can be presented as the intersection of a (definable) decreasing sequence of models in a generic extension, so many of the arguments from the previous sections will be applicable here as well.
The most notable change is that we are losing the elementary embeddings between the models in the chain. This makes the proof of the parallel intersection theorems more involved.
We will deal with two main cases: the Extender Based Prikry Forcing and adding interleaved collapses.
### Extender Based Prikry forcing
In this section, we follow closely results and ideas from Merimovich, [21, 23] in order to derive a Bukovsky-Dehornoy theorem for extender based Prikry forcing.
There are several definitions for _extenders_ in the literature, see for example [14, 18]. For our purposes, a \((\kappa,\lambda)\)-extender \(E\) is a combinatorial (set) object coding an elementary embedding \(j\colon V\to M\) with \(\operatorname{crit}j=\kappa\), \(j(\kappa)>\lambda\), \(M\) is closed under sequences of length \(\kappa\) and for every \(x\in M\) there are \(f\colon\kappa\to V\) in \(V\) and \(\gamma<\lambda\) such that \(j(f)(\gamma)=x\). In particular, the width of the embedding \(j\) is \(\leq\kappa\).
**Lemma 39** (\(\kappa\)-directness).: _Let \(E\) be a \((\kappa,\lambda)\)-extender and \(j\colon V\to M\) be the derived elementary embedding._
_For every \(A\subseteq\lambda\), \(|A|\leq\kappa\) there is \(\gamma<\lambda\) such that for every \(\delta\in A\) there is \(f_{\delta}\colon\kappa\to\kappa\) such that \(j(f_{\delta})(\gamma)=\delta\)._
Proof.: Since \(M\) is closed under \(\kappa\)-sequences, \(A\in M\) and moreover, some enumeration \(\langle\delta_{i}\mid i<\kappa\rangle\in M\). Thus, there is \(\gamma<\lambda\) and \(g\colon\kappa\to V\) such that \(j(g)(\gamma)=\langle\delta_{i}\mid i<\kappa\rangle\).
So, define \(f_{\delta_{i}}(\zeta)=g(\zeta)(i)\) and the result follows from elementarity.
As a definable elementary embedding, the elementary embedding derived from an extender can be iterated and the direct limit of such an iteration is well founded.
Fix a \((\kappa,\lambda)\)-extender \(E\), and let \(\langle M_{n},j_{n,m}\mid n\leq m\leq\omega\rangle\) be the corresponding iteration of \(E\).
**Lemma 40**.: _For every \(x\in M_{\omega}\) there are \(n<\omega\), \(f\colon\kappa^{n}\to V\) and \(\gamma_{0},\dots,\gamma_{n-1}\), such that \(\gamma_{k}<j_{k}(\lambda)<j_{k+1}(\kappa)\) and \(x=j_{\omega}(f)(\gamma_{0},\dots,\gamma_{n-1})\)._
**Lemma 41**.: _The width of the embedding \(j_{n,m}\) for \(n<m\leq\omega\) is \(\leq j_{n}(\kappa)\)._
Proof.: First, for \(m=n+1\), this follows from our initial hypothesis on the extender \(E\), using elementarity.
Let us assume that the width of the embedding \(j_{n,m}\) is \(\leq j_{n}(\kappa)\). Let \(x\in M_{m+1}\). Since the width of \(j_{m,m+1}\) is \(j_{m}(\kappa)\), there is \(y\in M_{m}\) such that \(|y|\leq j_{m}(\kappa)\) and \(x\in j_{m,m+1}(y)\).
Since the width of \(j_{n,m}\) is \(j_{n}(\kappa)\), we conclude that there is \(z\in M_{n}\) such that \(|z|\leq j_{n}(\kappa)\) and \(y\in j_{n,m}(z)\). By elementarity, taking \(z^{\prime}=\{y^{\prime}\in z\mid|y^{\prime}|\leq j_{n}(\kappa)\}\) we conclude that \(y\in j_{n,m}(z^{\prime})\). Finally, let \(w=\bigcup z^{\prime}\) -- this set is a union of \(\leq j_{n}(\kappa)\) many sets of cardinality \(\leq j_{n}(\kappa)\), so \(|w|\leq j_{n}(\kappa)\).
So, \(x\in j_{n,m+1}(w)\): indeed, \(x\in j_{m,m+1}(y)\) and \(j_{m,m+1}(y)\in j_{m,m+1}(j_{n,m}(z^{\prime}))=j_{n,m+1}(z^{\prime})\), so \(x\in\bigcup j_{n,m+1}(z^{\prime})=j_{n,m+1}(w)\).
So, by induction, the claim holds for all \(n<m<\omega\).
Now, let us deal with the width of \(j_{n,\omega}\). If \(x\in M_{\omega}\), then there is \(m>n\) (without loss of generality) and \(\bar{x}\in M_{m}\) such that \(x=j_{m,\omega}(\bar{x})\). Since \(j_{n,m}\) has width \(\leq j_{n}(\kappa)\), there is \(y\in M_{n}\) with \(|y|\leq j_{n}(\kappa)\) such that \(\bar{x}\in j_{n,m}(y)\).

So, \(x=j_{m,\omega}(\bar{x})\in j_{m,\omega}(j_{n,m}(y))=j_{n,\omega}(y)\), as wanted.
Merimovich proved that one can obtain a generic for the extender based Prikry forcing (defined in a proper way), but forcing with the direct extension order taking an iterated ultrapower and adding the generator. The following theorem works the details for the Bukovsky-Dehornoy intersection theorem for this forcing, again without explicitly defining the forcing notion.
**Theorem 42**.: _Let \(H\subseteq\mathbb{P}^{*}\) be a \(V\)-generic filter for the forcing \(\mathbb{P}^{*}\) of partial functions \(f\colon\lambda\to\kappa^{<\omega}\) with \(|f|\leq\kappa\), ordered by reverse inclusion._
_Let us define inductively, starting from \(H_{0}=H\),_
\[H_{n+1}=\text{upwards closure of }\{j_{n,n+1}(p)^{\frown}\{(j_{n,n+1}(\alpha), \alpha)\mid\alpha\in\operatorname{dom}p\}\mid p\in H_{n}\}\]
_Let \(G\colon j_{\omega}(\lambda)\times\omega\to j_{\omega}(\kappa)\) be defined by:_
\[G(\alpha,n)=\gamma\iff\exists m<\omega,\ \exists p\in H_{m}\colon\ \alpha\in\operatorname{dom}j_{m,\omega}(p),\ n\in\operatorname{dom}\left(j_{m,\omega}(p)(\alpha)\right)\text{ and }\gamma=j_{m,\omega}(p)(\alpha)(n).\]
_Then \(M_{\omega}[G]=\bigcap M_{n}[H_{n}]\). Moreover, in \(M_{\omega}[G]\)\(\operatorname{cf}j_{\omega}(\kappa)=\omega\) and \(2^{j_{\omega}(\kappa)}=j_{\omega}(\lambda)\)._
Proof.: First, the verification that \(\operatorname{cf}^{M_{\omega}[G]}j_{\omega}(\kappa)=\omega\) and that \(G\) forms a scale of length \(j_{\omega}(\lambda)\) on \(j_{\omega}(\kappa)\) is straightforward. Let us focus on the proof of the intersection theorem.
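To make the first assertion concrete, one can unwind the definition of the \(H_{n}\): if \(p\in H\) and \(\alpha\in\operatorname{dom}p\), then (since each finite sequence accumulated by stage \(m\) consists of ordinals below \(j_{m}(\kappa)=\operatorname{crit}j_{m,m+1}\) and is therefore not moved) the coordinate \(j_{\omega}(\alpha)\) of the conditions pushed into \(M_{\omega}\) accumulates exactly
\[G(j_{\omega}(\alpha),\cdot)=p(\alpha)^{\frown}\langle\alpha,j_{1}(\alpha),j_{2}(\alpha),\ldots\rangle,\]
and for \(\kappa\leq\alpha<\lambda\) the sequence \(\langle j_{n}(\alpha)\mid n<\omega\rangle\) is cofinal in \(j_{\omega}(\kappa)\).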
As in Theorem 14, we need to show first that the model is closed under countable sequences.
**Lemma 43**.: \(M_{\omega}[G]\) _is closed under \(\omega\)-sequences._
Proof.: Since \(M_{\omega}[G]\) is a model of ZFC, it is enough to prove the claim for sequences of ordinals.
The crux of the argument is [21, Corollary 2.6].
Fix \(\alpha\) an ordinal in \(M_{\omega}[G]\). So, there is a function \(f\colon\kappa^{n}\to\operatorname{Ord}\) and \(\gamma_{0},\ldots,\gamma_{n-1}\) as in Lemma 40, such that \(j_{\omega}(f)(\gamma_{0},\ldots,\gamma_{n-1})=\alpha\).
Work in \(V\). Fix an elementary substructure \(N\prec H(\chi)\) for some large \(\chi\), such that \(\kappa\subseteq N\), \(|N|=\kappa\), \(E,\alpha\in N\). Let us apply the \(\kappa\)-directness of \(E\), in the sense of Lemma 39, and obtain an ordinal \(\rho\) such that for every \(\beta\in N\) there is \(h\) such that \(j(h)(\rho)=\beta\). In particular, this is true for \(\gamma_{0}\), so there is \(h_{0}\) such that \(j(h_{0})(\rho)=\gamma_{0}\). As \(N\) knows that the width of \(j\) is \(\kappa\), there is \(a\subseteq\lambda\) in \(N\) of cardinality \(\kappa\) such that \(\gamma_{1}\in j(a)\). In particular, since \(a\subseteq N\), there is \(h_{1}\) such that \(j_{2}(h_{1})(j_{1}(\rho))=\gamma_{1}\). Let us claim that we can "trace back" \(h_{1}\) to a function in \(V\). Indeed, in \(N\) there is \(\bar{\rho}<\lambda\) which is Rudin-Keisler above \(a\). In particular, there is \(\bar{h}_{2}\in M_{1}^{N}\) such that \(j_{1,2}(\bar{h}_{2})(\bar{\rho})=\gamma_{2}\). But, \(\bar{h}_{2}=j_{1}(\bar{\bar{h}}_{2})(\bar{\gamma}_{2})\) for some \(\bar{\gamma}_{2}\in N\) and thus we conclude that all those computations can be made using only \(\rho\).
Continuing this way, we conclude that there is a function \(h\colon\kappa^{n}\to\operatorname{Ord}\) such that \(j_{\omega}(h)(\rho,j_{1}(\rho),j_{2}(\rho),\ldots,j_{n-1}(\rho))=\alpha\).
Now, let \(\langle\alpha_{n}\mid n<\omega\rangle\) be a sequence of ordinals. Applying the above arguments with \(N\) containing the sequence \(\langle\alpha_{n}\mid n<\omega\rangle\) instead of just a single ordinal
\(\alpha\), we obtain \(\rho\) and a sequence of functions \(\langle h_{n}\mid n<\omega\rangle\) such that for all \(n\), \(j_{\omega}(h_{n})(\rho,j_{1}(\rho),\ldots,j_{m_{n}-1}(\rho))=\alpha_{n}\).
Since the sequence of images of \(\rho\) is a final segment of \(G(j_{\omega}(\rho))\), we conclude that \(\langle\alpha_{n}\mid n<\omega\rangle\in M_{\omega}[G]\).
Let \(X\in\bigcap M_{n}[H_{n}]\) be a set of ordinals. For each \(n\), let us pick a \(j_{n}(\mathbb{P}^{*})\)-name \(\dot{\tau}_{n}\) for
\[Y_{n}=\{\alpha\mid j_{n,\omega}(\alpha)\in X\}.\]
So, \(\dot{\tau}_{n}^{H_{n}}=Y_{n}\).
Let \(\sigma_{n}=j_{n,\omega}(\dot{\tau}_{n})\in M_{\omega}\). Since \(M_{\omega}[G]\) is closed under \(\omega\)-sequences, the sequence of names \(\langle\sigma_{n}\mid n<\omega\rangle\) is a member of \(M_{\omega}[G]\).
The problem is that we do not have access to the actual generics \(H_{n}\) from which the names were realized. In order to overcome this, we will need to isolate a relevant version of the Prikry forcing that hides inside our forcing and show that its generic is unique (up to shifts). Here we must diverge from the thesis of the paper and work with a concrete forcing notion. As we would like to avoid cluttering this part of the proof with definitions, we refer the reader to [14, Section 1.2] for the definition of Prikry forcing on trees.
Let \(s\subseteq j_{\omega}(\lambda)\) be a set of size \(\leq\kappa_{\omega}\) in \(M_{\omega}\), such that \(s\) is the intersection with \(j_{\omega}(\lambda)\) of an elementary submodel of \(H^{M_{\omega}}(\chi)\). Let \(U(s)\) be the corresponding measure: \(A\in U(s)\iff\{\langle k(\alpha),\alpha\rangle\mid\alpha\in s\}\in k(A)\) for \(k=j_{j_{\omega}(E)}\).
Let \(\mathbb{Q}_{s}\) be the tree Prikry forcing defined using the measure \(U(s)\). Clearly, \(U(s)\) is the image of some measure of the form \(U(\bar{s})\) on \(M_{n}\) for some \(n<\omega\).
**Lemma 44**.: _There is a sequence \(\vec{t}=\langle t_{n}\mid n<\omega\rangle\) which is generic for \(\mathbb{Q}_{s}\), and a condition \(p\in j_{\omega}(\mathbb{P}^{*})\), such that \(p^{\curvearrow}\langle t_{0},\ldots,t_{n-1}\rangle\) is compatible with \(G\) for all \(n\)._
_Moreover, the sequence \(\vec{t}\) is unique, up to a finite shift._
Proof.: This follows from Merimovich's criterion [23]:
First, in order to get such a sequence, pick any \(p\in\bigcup_{n<\omega}j_{n,\omega}\) " \(H_{n}\) such that \(\operatorname{dom}p\supseteq s\), say \(p=j_{n,\omega}(\bar{p})\) with \(\bar{p}\in H_{n}\). Take \(t_{m}\) to be the added coordinates in step \(n+m\), namely for \(\alpha\in\operatorname{dom}j_{n,n+m}(\bar{p})\) we let \(t_{m}(j_{n+m,\omega}(\alpha))=\alpha\).
Let \(p,p^{\prime}\) be conditions in \(j_{\omega}(\mathbb{P}^{*})\) with domain \(s\), such that there are sequences \(\langle t_{n}\mid n<\omega\rangle\), \(\langle t_{n}^{\prime}\mid n<\omega\rangle\) which are generic for \(\mathbb{Q}_{s}\), and for all \(\alpha\in s\),
\[p(\alpha)^{\curvearrow}\langle t_{n}(\alpha)\mid n<\omega\rangle=p^{\prime} (\alpha)^{\curvearrow}\langle t_{n}^{\prime}(\alpha)\mid n<\omega\rangle=G( \alpha).\]
Let us show that there are \(k,k^{\prime}\) such that for all large \(n\), \(t_{k+n}=t_{k^{\prime}+n}^{\prime}\) and
\[p^{\curvearrow}\langle t_{0},\ldots,t_{k-1}\rangle=p^{\prime}{}^{\curvearrow }\langle t_{0}^{\prime},\ldots,t_{k^{\prime}-1}^{\prime}\rangle.\]
Pick in \(M_{\omega}\) a function \(g\) enumerating \(s\). For all large \(n\), \(g\) " \(t_{n}(\kappa_{\omega})=\operatorname{dom}t_{n}\). In particular, by comparing the values at \(\kappa_{\omega}\), we obtain the possible value of \(k^{\prime}-k\). Moreover, for all large \(n\),
\[\forall\alpha\in\operatorname{dom}t_{n},\,t_{n}(\alpha)>\max(p(\alpha),p^{ \prime}(\alpha))\text{ and }t_{n}(\alpha)<t_{n+1}(\kappa_{\omega}).\]
This is easily obtained by taking the right \(U(s)\)-large tree. Therefore, for such \(n\)-s, the value of \(t_{n}(\alpha)\) is the unique ordinal in \(G(\alpha)\) which is between \(G(\kappa_{\omega})_{n}\) and \(G(\kappa_{\omega})_{n+1}\).
Given a condition \(p\) and a sequence \(\vec{t}\) witnessing the validity of the lemma for \(s\), let \(p_{n}=p^{\curvearrow}\langle t_{0},\ldots,t_{n-1}\rangle\). We call \(\langle p_{n}\mid n<\omega\rangle\) a Prikry sequence for \(s\). The lemma indicates that the Prikry sequence is unique, up to an initial segment. We
will always assume that \(|p_{n}(\kappa_{\omega})|=n\). This makes the Prikry sequences agree up to an initial segment, without a shift.
**Lemma 45**.: _Let \(\langle p_{n}\mid n<\omega\rangle\) be a Prikry sequence for some \(s\). Then, for all large \(n\), \(p_{n}\in j_{n,\omega}(H_{n})\)._
Proof.: Pick \(n\) large enough so that \(s=j_{n,\omega}(\bar{s})\). Let \(q\in H_{n}\) with domain \(\bar{s}\). Then, the canonical sequence of generators, starting with \(j_{n,\omega}(q)\) is a Prikry sequence for \(s\). In particular, letting \(\langle q_{m}\mid m<\omega\rangle\) be the corresponding sequence (adding dummy conditions in the beginning, if needed), we know that for all large \(n\), \(q_{n}=p_{n}\), as we set the shift using the \(\kappa_{\omega}\) coordinate.
Since \(q_{n}\in H_{n}\), once we go past the dummy coordinates, the conclusion follows.
Let us claim that \(X\) is the set of all \(\alpha\) such that there is a condition \(p\) and \(\vec{t}\) compatible with \(G\), and for all large \(n\),
\[p_{n}\Vdash\alpha\in\sigma_{n}\]
Indeed, if \(\alpha\in X\), the existence of such \(p\) is clear. Otherwise, since for all large \(n\), \(p_{n}\) comes from \(H_{n}\), it cannot force contradictory information.
It is interesting to see where the chain condition proof fails. Indeed, any ordinal can still be captured by a set of size \(\kappa\). The problem is both the presence of the generic for \(\mathbb{P}^{*}\) and the fact that there is no elementary embedding from \(V[H]\) to \(M_{\omega}[G]\).
### Interleaved Collapses
In this section, we will describe a situation in which interleaved collapses can be incorporated into the BD setting. Let us discuss first the simple setting of the vanilla Prikry forcing using a normal measure \(U\). Let \(j\colon V\to M\) be the ultrapower map using the normal measure.
The following lemma is well known.
**Lemma 46**.: _If \(2^{\kappa}=\kappa^{+}\), then there is an \(M\)-generic filter for \(\operatorname{Col}(\kappa^{+},<j(\kappa))\) in \(V\)._
Proof.: Let us count the maximal antichains of the forcing \(\operatorname{Col}(\kappa^{+},<j(\kappa))\) in \(M\). By the chain condition and since \(j(\kappa)\) is inaccessible in \(M\), there are \(j(\kappa)\) such antichains in \(M\). In \(V\), \(|j(\kappa)|=|\kappa^{\kappa}|=\kappa^{+}\). Let \(\vec{A}=\langle A_{\alpha}\mid\alpha<\kappa^{+}\rangle\) be an enumeration of all maximal antichains in \(M\), \(\vec{A}\in V\).
The forcing \(\operatorname{Col}(\kappa^{+},<j(\kappa))^{M}\) is \(\kappa^{+}\)-closed in \(M\), and since \(M\) is closed under \(\kappa\)-sequences, it is \(\kappa^{+}\)-closed in \(V\) as well. Let us define in \(V\) a decreasing sequence of conditions, \(\langle p_{\alpha}\mid\alpha<\kappa^{+}\rangle\) with the property that \(p_{\alpha}\leq q_{\alpha}\in A_{\alpha}\). This can be done, using the closure of the forcing (from the point of view of \(V\)) at limit steps.
Let \(K\) be the upwards closure of \(\langle p_{\alpha}\mid\alpha<\kappa^{+}\rangle\). Then \(K\) is an \(M\)-generic filter.
While it seems like the argument relies on the chain condition of the forcing, a similar argument works for the forcing \(\operatorname{Col}(\kappa^{+},j(\kappa))\), as the number of dense open sets is still \(\kappa^{+}\) from the point of view of \(V\). It does not work if \(2^{\kappa}>\kappa^{+}\). Yet, by carefully constructing the model, one can in some cases obtain an \(M\)-generic filter as well.
From this point, we will only assume that there is an \(M\)-generic filter for the collapse \(\operatorname{Col}(\kappa^{+},<j(\kappa))\), but modifying this to other forcing notions that admit a guiding generic does not change the argument.
Let us consider the direct system of ultrapower embeddings, \(j_{n,m}\colon M_{n}\to M_{m}\), \(m\leq\omega\). Let \(K_{n+1}=j_{n}(K)\).
**Lemma 47**.: _For each \(n<m\leq\omega\), \(K_{n+1}\) is \(M_{m}\)-generic for the forcing \(\operatorname{Col}(\kappa_{n}^{+},<\kappa_{n+1})\)._
Proof.: First, by elementarity, \(K_{n+1}\) is \(M_{n+1}\)-generic. Moreover, since \(M_{m}\subseteq M_{n+1}\) and
\[\operatorname{Col}^{M_{n+1}}(\kappa_{n}^{+},<\kappa_{n+1})=\operatorname{Col}^{M_{m}}(\kappa_{n}^{+},<\kappa_{n+1}),\]
we conclude that \(K_{n+1}\) is \(M_{m}\)-generic as well.
**Lemma 48**.: _Let \(K_{0}\) be \(V\)-generic for \(\operatorname{Col}(\omega_{1},<\kappa)\)._
_For each \(n\), \(K_{0}\times K_{1}\times\cdots\times K_{n}\) is \(M_{n}\)-generic for \(\prod_{i\leq n}\operatorname{Col}(\kappa_{i-1}^{+},<\kappa_{i})\)._
Proof.: This follows from Easton's Lemma: if \(H\) is generic for a \(\lambda\)-c.c. forcing and \(G\) is generic for a \(\lambda\)-closed forcing, then \(H\) and \(G\) are mutually generic.
Let \(\vec{K}=\langle K_{n}\mid n<\omega\rangle\).
**Theorem 49**.: \(M_{\omega}[\vec{K}]=\bigcap M_{n}[\vec{K}\upharpoonright n+1]\)_._
Proof.: As in the proof of Theorem 14, we first need to show closure under \(\omega\)-sequences of ordinals, and then use a similar (but simpler) argument to the one in Theorem 42 in order to conclude the full theorem.
Let us begin with the closure under \(\omega\)-sequences.
**Lemma 50**.: \(M_{\omega}[\vec{K}]\) _is closed under \(\omega\)-sequences of ordinals from \(V[K_{0}]\)._
Proof.: Indeed, it is easy to extract \(P=\langle\kappa_{n}\mid n<\omega\rangle\) from \(\vec{K}\). Therefore, \(M_{\omega}[\vec{K}]\supseteq M_{\omega}[P]\) which is closed under countable sequences of ordinals from \(V\), by Lemma 15. But \(V\) and \(V[K_{0}]\) have the same countable sequences, so the conclusion holds.
Let \(X\in\bigcap M_{n}[\vec{K}\upharpoonright n+1]\) be a set of ordinals. Let us define \(Y_{n}\) and the corresponding name \(\dot{\tau}_{n}\) as before:
\[Y_{n}=\{\alpha\mid j_{n,\omega}(\alpha)\in X\},\quad\dot{\tau}_{n}^{\vec{K} \upharpoonright n+1}=Y_{n}.\]
Let us look at \(j_{n,\omega}(\dot{\tau}_{n})\); this is a name with respect to the forcing
\[\mathbb{Q}_{n}^{j}=\left(\prod_{i<n}\operatorname{Col}(\kappa_{i-1}^{+},<\kappa_{i})\right)\times\operatorname{Col}(\kappa_{n-1}^{+},<\kappa_{\omega}).\]
**Claim 51**.: \(\alpha\in X\) _if and only if for all large \(n\), there is a condition \(p\in\vec{K}\upharpoonright n+1\) such that (viewing \(p\) as a condition in \(\mathbb{Q}_{n}^{j}\)), \(p\Vdash_{\mathbb{Q}_{n}^{j}}\alpha\in j_{n,\omega}(\dot{\tau}_{n})\)._
Proof.: Indeed, if \(\alpha\in X\) and there is \(\bar{\alpha}\) with \(j_{n,\omega}(\bar{\alpha})=\alpha\), then \(\bar{\alpha}\in Y_{n}\) and there is some \(p\in\vec{K}\upharpoonright n+1\) forcing \(\bar{\alpha}\in\dot{\tau}_{n}\). Since \(j_{n,\omega}(p)=p\), we conclude that \(p\Vdash_{\mathbb{Q}_{n}^{j}}\alpha\in j_{n,\omega}(\dot{\tau}_{n})\).
On the other hand, if there is \(p\in\vec{K}\upharpoonright n+1\) that forces \(\alpha\in j_{n,\omega}(\dot{\tau}_{n})\) and \(\alpha=j_{n,\omega}(\bar{\alpha})\) then again \(p=j_{n,\omega}(p)\) and therefore \(p\Vdash\bar{\alpha}\in\dot{\tau}_{n}\).
In particular, since \(\langle j_{n,\omega}(\dot{\tau}_{n})\mid n<\omega\rangle\in M_{\omega}[P]\), we conclude that \(X\in M_{\omega}[\vec{K}]\).
From Theorem 49, it is easy to deduce which cardinals are preserved in \(M_{\omega}[\vec{K}]\), and even to show, using Bukovsky's Theorem, that \(M_{\omega}[\vec{K}]\) is a generic extension of \(M_{\omega}\) using a \(\kappa_{\omega}^{+}\)-c.c. forcing notion.
### Extender Based Prikry forcing with interleaved collapses
Let us combine the results of Subsections 3.1 and 3.2.
The following well-known lemma shows that it is possible to obtain a guiding generic filter for the extender ultrapower.
**Lemma 52**.: _Let \(E\) be a \((\kappa,\kappa^{++})\)-extender and let us assume that \(2^{\kappa}=\kappa^{+}\). Let \(M\) be the extender ultrapower by \(E\)._
_Then, there is an \(M\)-generic filter, \(K\) for \(\operatorname{Col}(\kappa^{+++},<j(\kappa))\)._
Proof.: Let \(U\) be the derived normal ultrafilter, and let \(i\colon V\to N\) be the normal ultrapower. By Lemma 46, there is an \(N\)-generic filter \(K_{0}\) for the forcing \(\operatorname{Col}((\kappa^{+++})^{N},<i(\kappa))\).
Let \(k\colon N\to M\) be the map given by \(k([f]_{U})=j(f)(\kappa)\). Let \(K\) be the upwards closure of \(k\) "\(K_{0}\).
**Claim 53**.: \(K\) _is \(M\)-generic filter._
Proof.: For every \(x\in M\), there is \(g\colon(\kappa^{++})^{N}\to N\) such that \(x\in\operatorname{range}k(g)\). This is clear, by noting that if \(x=j(r)(a)\) for some generator \(a\), then without loss of generality, \(a\in\kappa^{++}\) and therefore \(x\in\operatorname{range}k(i(r)\restriction(\kappa^{++})^{N})\).
Given a dense open set \(D\in M\), let \(g\colon(\kappa^{++})^{N}\to N\) cover it, and let \(D^{\prime}=\bigcap\{g(\alpha)\mid g(\alpha)\text{ dense open}\}\). By the distributivity of the forcing in \(N\), \(D^{\prime}\) is dense open and therefore there is a condition in \(K_{0}\) meeting it.
Let us consider now the iteration \(\langle j_{n,m}\colon M_{n}\to M_{m}\mid n\leq m\leq\omega\rangle\) given by iterating the extender embedding. Let \(K_{0}\) be a \(V\)-generic for \(\operatorname{Col}(\omega_{1},<\kappa)\), and let \(H_{0}\subseteq\mathbb{P}^{*}\) be \(V\)-generic.
Let us define \(H_{n}\) and \(G\) in the very same way as in Theorem 42 and \(K_{n}\) in the same way as in Theorem 49.
**Theorem 54**.: \(\bigcap M_{n}[\vec{K}\upharpoonright n+1][H_{n}]=M_{\omega}[\vec{K}][G]\)_._
Proof.: As the ideas of the proof are very similar to the proofs of Theorem 42 and Theorem 49, let us only sketch the proof.
First, we show that the model \(M_{\omega}[\vec{K}][G]\) is closed under \(\omega\)-sequences, using Lemma 43. Then, we obtain a sequence of names \(\langle\sigma_{n}\mid n<\omega\rangle\) as above. Using Lemma 44, we obtain local approximations of the filters \(j_{n,\omega}\) "\(H_{n}\), and by the same argument as in Claim 51 we obtain the relevant conditions for the \(K\)-parts.
## 4. Magidor and Radin forcing
In [20], Magidor introduces a variant of the Prikry forcing that enables one to change the cofinality of a measurable cardinal to be uncountable. This forcing was revised by Mitchell and generalized by Radin. The version that we present here follows Mitchell's definition.3 As in the other parts of the paper, we are not going to define a forcing notion but rather an extension of an iterated ultrapower.
Footnote 3: A variant of this presentation appeared in the unpublished book of Cummings and Woodin.
**Definition 55**.: _Let \(U,U^{\prime}\) be two normal measures on \(\kappa\). Then \(U\) is below \(U^{\prime}\) in the Mitchell order if \(U\in\operatorname{Ult}(V,U^{\prime})\)._
**Definition 56** (Mitchell).: _Let \(o^{\mathcal{U}}\) be a function from \(\kappa+1\) to ordinals._
_A sequence \(\mathcal{U}=\langle U_{\alpha,\beta}\mid\alpha\leq\kappa,\beta<o^{\mathcal{U}}( \alpha)\rangle\) is a coherent sequence if for every \(\alpha\leq\kappa,\beta<o^{\mathcal{U}}(\alpha)\),_
\[j_{U_{\alpha,\beta}}(\mathcal{U})\restriction(\alpha,\beta)=\mathcal{U}\restriction (\alpha,\beta)\]
_where \(\mathcal{U}\restriction(\alpha,\beta)=\langle U_{\gamma,\delta}\mid(\gamma<\alpha\wedge\delta<o^{\mathcal{U}}(\gamma))\vee(\gamma=\alpha\wedge\delta<\beta)\rangle\)._
**Definition 57**.: _Let \(\vec{U}\) be a Mitchell increasing sequence of normal measures on \(\kappa\). We say that \(\vec{U}=\langle U_{\alpha}\mid\alpha<\delta\rangle\) is pre-coherent if there is a function \(c\) such that for every \(\alpha<\delta\), \(j_{U_{\alpha}}(c)(\kappa)=\alpha\)._
Any coherent sequence of normal measures gives rise to a pre-coherent Mitchell increasing sequence of normal measures of the same length, but not necessarily vice versa, see [3].
**Remark 58**.: _Let \(\vec{U}\) be a Mitchell increasing sequence of normal measures. If \(\operatorname{len}\vec{U}<\kappa^{+}\) then \(\vec{U}\) is pre-coherent._
Proof.: If \(\operatorname{len}\vec{U}=0\), there is nothing to prove. So let us assume that it is non-zero.
Since the measures are discrete, there is a sequence of pairwise disjoint sets \(\langle A_{\alpha}\mid\alpha<\operatorname{len}\vec{U}\rangle\) such that \(A_{\alpha}\in U_{\beta}\iff\alpha=\beta\).
Define \(g(\zeta)=\alpha\) if \(\zeta\in A_{\alpha}\), and \(g(\zeta)=0\) otherwise. Let us also fix a surjective function \(r\colon\kappa\to\operatorname{len}\vec{U}\). Now, \(a=\operatorname{range}(j(r)\restriction\kappa)=j\operatorname{"}\operatorname{len}\vec{U}\). In particular, letting \(\pi_{a}\) be the Mostowski collapse of the set \(a\), it is easy to verify that \(\pi_{a}\circ j(g)\in M\) is the desired function.
Finally, let
\[c(\zeta)=\pi_{\operatorname{range}r\restriction\zeta}(g(\zeta)),\]
where this application is defined, and zero otherwise.
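To illustrate the construction in the simplest non-trivial case (an example added for exposition; it is not part of the original proof): suppose \(\operatorname{len}\vec{U}=2\), say \(A_{0}\in U_{0}\setminus U_{1}\) and \(A_{1}=\kappa\setminus A_{0}\). Then \(g\) is the characteristic function of \(A_{1}\), and for every \(\zeta\) large enough that \(\{0,1\}\subseteq\operatorname{range}(r\restriction\zeta)\) we simply get \(c(\zeta)=g(\zeta)\). For \(\beta<2\) we have \(\kappa\in j_{U_{\beta}}(A_{\beta})\), so \(j_{U_{\beta}}(g)(\kappa)=\beta\) and hence \(j_{U_{\beta}}(c)(\kappa)=\beta\), as required. The collapse \(\pi\) only becomes relevant when \(\operatorname{len}\vec{U}>\kappa\), since then \(j(g)(\kappa)\) is of the form \(j(\beta)\), which may differ from \(\beta\).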
Let \(\vec{U}\) be a sequence of normal measures on \(\kappa\), increasing in the Mitchell order, and let \(o^{\vec{U}}(\kappa)=\zeta\). Let us define an iteration as well as a sequence of ordinals \(\langle\gamma_{\xi}\mid\xi<\alpha_{*}\rangle\) as follows:
\(M_{0}=V\), \(j_{0,0}=id_{V}\). Given \(M_{\alpha}\) and maps \(j_{\beta,\alpha}\colon M_{\beta}\to M_{\alpha}\) for all \(\beta<\alpha\), we pick the least index \(\gamma_{\alpha}<\operatorname{len}j_{\alpha}(\vec{U})\) such that the set \(\{\beta<\alpha\mid j_{\beta,\alpha}(\gamma_{\beta})=\gamma_{\alpha}\}\) is bounded in \(\alpha\) (and we let \(\gamma_{0}=0\)).
Let \(M_{\alpha+1}=\operatorname{Ult}(M_{\alpha},j_{\alpha}(\vec{U})(\gamma_{\alpha}))\) and let \(j_{\alpha,\alpha+1}\) be the ultrapower map. Let \(j_{\beta,\alpha+1}=j_{\alpha,\alpha+1}\circ j_{\beta,\alpha}\) for all \(\beta\leq\alpha\).
If \(\gamma_{\alpha}\) is undefined, we halt.
Let \(\kappa_{\alpha}\) be the critical point of \(j_{\alpha,\alpha+1}\) and let \(\alpha_{*}\) be the length of the process.
The following Lemma is a comparison argument due to Mitchell, in disguise.
**Lemma 59**.: _The iteration halts. Moreover, if \(\operatorname{len}\vec{U}=\mu<\kappa\), it halts after \(\omega^{\mu}\) many steps (ordinal exponentiation)._
Proof.: Let us assume towards a contradiction that the iteration continues for \(\lambda=(\operatorname{len}(\vec{U})^{\kappa})^{+}\) many steps (cardinal exponentiation).
Consider \(\gamma_{\alpha}\) for limit ordinals \(\alpha<\lambda\). As we take direct limits at limit steps of the iteration, each such \(\gamma_{\alpha}\) is the image of an ordinal from a previous step in the iteration. More precisely, there are finitely many ordinals \(\beta_{0}<\dots<\beta_{n-1}<\alpha\) and a function \(f\colon\kappa^{n}\to\operatorname{len}\vec{U}\) in \(V\), such that \(j_{\alpha}(f)(\kappa_{\beta_{0}},\dots,\kappa_{\beta_{n-1}})=\gamma_{\alpha}\).
By Fodor's lemma, there is a stationary set \(S\subseteq\lambda\), such that \(f\) and the finite sequence \(\beta_{0},\ldots,\beta_{n-1}\) are fixed. Pick \(\alpha\in S\cap\operatorname{acc}S\). Then, for every \(\beta\in S\cap\alpha\), \(\gamma_{\alpha}=j_{\beta,\alpha}(\gamma_{\beta})\), a contradiction to our choice of \(\gamma_{\alpha}\).
For the moreover part, one can verify by induction that for every non-zero ordinal \(\alpha<\kappa\) of Cantor's normal form \(\alpha=\omega^{\beta_{0}}\cdot n_{0}+\omega^{\beta_{1}}\cdot n_{1}+\cdots+ \omega^{\beta_{m}}\cdot n_{m}\), where \(\beta_{0}>\beta_{1}>\cdots>\beta_{m}\), and \(n_{i}\neq 0\) for all \(i\), \(\gamma_{\alpha}=\beta_{m}\). Since the length of \(\vec{U}\) in this case is below the critical point of \(j_{\alpha}\), the process terminates at step \(\alpha_{*}=\omega^{\mu}\).
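For instance (a worked illustration, not in the original text), take \(\operatorname{len}\vec{U}=2\): at every successor step the relevant set of \(\beta\)'s is trivially bounded, so \(\gamma_{\alpha}=0\); at a limit \(\alpha\) that is divisible by \(\omega\) but not by \(\omega^{2}\), the index \(0\) has already been used unboundedly often below \(\alpha\) while the index \(1\) has not, so \(\gamma_{\alpha}=1\); and at \(\alpha=\omega^{2}\) both indices have been used unboundedly often, \(\gamma_{\omega^{2}}\) is undefined, and the iteration halts, in agreement with \(\alpha_{*}=\omega^{\mu}=\omega^{2}\).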
Let \(P=\langle\kappa_{\alpha}\mid\alpha<\alpha_{*}\rangle\) be the sequence of the critical points. In this case, the model \(M_{\alpha_{*}}[P]\) clearly contains sets which are not in \(M_{\beta}\) for large \(\beta\). The reason is that for every infinite \(\beta\), \(M_{\beta}\) is not closed under countable sequences and in particular will not contain the initial segments of \(P\).
Thus, the correct models for this theorem are \(M_{\alpha}[P\upharpoonright\alpha]\). Note that for \(\alpha\) limit, \(\kappa_{\alpha}\) is (typically) a singular cardinal in this model. Thus, in those cases there is no elementary embedding from \(M_{\alpha}[P\upharpoonright\alpha]\) to \(M_{\beta}[P\upharpoonright\beta]\).
**Lemma 60**.: \(M_{\alpha_{*}}[P]\subseteq\bigcap_{\alpha<\alpha_{*}}M_{\alpha}[P\upharpoonright\alpha]\)_._
Proof.: Working in \(M_{\alpha}[P\upharpoonright\alpha]\) one can compute from the parameter \(\gamma_{\alpha}\) the rest of the models \(M_{\alpha^{\prime}}\) for \(\alpha^{\prime}>\alpha\) and the iteration, and thus \(P\setminus\alpha\).
**Theorem 61**.: _Let us assume that \(\operatorname{len}\vec{U}\leq\kappa^{+}\) and that it is pre-coherent. Then \(M_{\alpha_{*}}[P]\) is closed under \(\kappa\)-sequences._
Proof.: Fix a function \(c\) witnessing the pre-coherency of \(\vec{U}\).
Let \(\alpha\in\operatorname{Ord}\). Then, \(\alpha\) is represented by some function \(f\colon\kappa^{n}\to\operatorname{Ord}\) and a finite sequence \(\zeta_{0}<\cdots<\zeta_{n-1}\) in \(P\), such that:
\[\alpha=j_{\alpha_{*}}(f)(\zeta_{0},\ldots,\zeta_{n-1}).\]
The main challenge is to "describe" which elements from \(P\) we evaluate \(j_{\alpha_{*}}(f)\) in, in order to obtain \(\alpha\), in particular where \(\operatorname{len}\vec{U}\geq\kappa\).
**Lemma 62**.: _Let \(\zeta\in P\), \(\beta=\operatorname{otp}(P\cap\zeta)\)._
_Then, there is a function \(g\in M_{\beta}\) and \(\zeta^{\prime}<\zeta\) in \(P\) such that \(\zeta=\min\{\rho\in P\mid\rho>\zeta^{\prime}\text{ and }j_{\beta,\alpha_{*}}(g)(\rho)=j_{\alpha_{*}}(c)(\rho)\}\)._
_In particular, there are \(\zeta_{0},\ldots,\zeta_{n-1}<\zeta\) in \(P\) and \(h\colon\kappa^{n}\to o(\kappa)\) in \(V\) such that_
\[\zeta=\min\{\rho\in P\mid\rho>\zeta_{n-1}\text{ and }j_{\alpha_{*}}(h)(\zeta_{0},\ldots,\zeta_{n-1},\rho)=j_{\alpha_{*}}(c)(\rho)\}.\]
Proof.: For \(\zeta\in P\) which is not a limit point, the claim is obvious: its Mitchell order under \(\vec{U}\) is simply \(0\) and one can read it from its predecessor.
For \(\zeta\) limit, as \(\gamma_{\beta}<\zeta^{+}\), there is a canonical function representing it, \(g\in M_{\beta}\). As \(\beta\) is limit, \(g=j_{\bar{\beta},\beta}(\bar{g})\) for some \(\bar{g}\in M_{\bar{\beta}}\), which is the canonical function for some ordinal \(\bar{\delta}\). Being canonical, the ordinal \(\bar{\delta}\) is definable from \(\bar{g}=g\upharpoonright\zeta^{\prime}\) for some \(\zeta^{\prime}\) and in general, for every \(\zeta^{\prime}\leq\xi<\zeta\) in \(P\), \(g(\zeta)\) is the unique ordinal which is the height of the canonical function \(g\upharpoonright\xi\). Since \(j_{\delta,\beta}(g\upharpoonright\xi)=g\) for \(\delta=\operatorname{otp}(P\cap\xi)\), \(\xi\in P\), we conclude that if equality holds, then \(j_{\delta,\beta}(\gamma_{\xi})=\gamma_{\beta}\), but this can only hold for boundedly many values in \(P\).
The second part follows from the assumption that we are taking direct limits at limit steps and thus \(g\) is the image of a function from a shorter iteration. Thus, by induction, one can represent \(g\) as \(j_{\beta}(h)(\zeta_{0},\ldots,\zeta_{n-1})\) (recall that \(\beta=\operatorname{otp}(P\cap\zeta)\)), with \(\zeta_{0},\ldots,\zeta_{n-1}<\zeta\) in \(P\) and \(h\colon\kappa^{n}\to o(\kappa)\).
Let
\[j_{\alpha_{*}}(h)^{P}(\zeta_{0},\dots,\zeta_{n-1})=\min\{\rho\in P\mid\rho>\zeta_{n-1}\text{ and }j_{\alpha_{*}}(h)(\zeta_{0},\dots,\zeta_{n-1},\rho)=o^{j_{\alpha_{*}}(\mathcal{U})}(\rho)\}.\]
Given \(\zeta\in P\) we may find an increasing sequence of elements of \(P\), \(\zeta_{0},\dots,\zeta_{n}\) and functions \(g_{0},\dots,g_{n-1}\in V\) such that:
\[\zeta_{k}=j_{\alpha_{*}}(g_{k})^{P}(\zeta_{0},\dots,\zeta_{k-1})\]
This is always possible, using standard arguments, see [15].
So, we conclude that each ordinal \(\alpha\) can be represented using a function \(f\) and finitely many functions \(g_{0},\dots,g_{n-1}\) that allow us to "read" the relevant elements from \(P\).
Let \(\langle\alpha_{i}\mid i<\kappa\rangle\) be a sequence of ordinals. Let \(\langle f^{i},g_{0}^{i},\dots,g_{n_{i}-1}^{i}\mid i<\kappa\rangle\) be a choice of representatives, as above. Namely, take \(\beta_{j}^{i}=j_{\alpha_{*}}(g_{j}^{i})^{P}(\beta_{0}^{i},\dots,\beta_{j-1}^{i})\) and \(\alpha_{i}=j_{\alpha_{*}}(f^{i})(\beta_{0}^{i},\dots,\beta_{n_{i}-1}^{i})\).
Apply \(j_{\alpha_{*}}\) to this sequence of representatives and truncate at \(\kappa\); then compute from \(P\) first the indices \(\beta_{0}^{i},\dots,\beta_{n_{i}-1}^{i}\) and then the ordinals \(\alpha_{i}\).
Let us analyse \(\operatorname{cf}\alpha_{*}\) (which is the cofinality of \(j_{\alpha_{*}}(\kappa)\)), under the assumption that \(\operatorname{len}\vec{U}<\kappa^{+}\).
**Lemma 63**.: _Let us assume that \(\vec{U}\) is pre-coherent and that \(\operatorname{cf}\operatorname{len}\vec{U}\leq\kappa\)._
_Let \(\lambda=\operatorname{cf}^{M_{\alpha_{*}}[P]}(j_{\alpha_{*}}(\kappa))\). Then,_
\[\lambda=\begin{cases}\omega&\operatorname{len}\vec{U}\text{ is a successor ordinal}\\ \operatorname{cf}^{V}(\operatorname{len}\vec{U})&\operatorname{cf}^{V}( \operatorname{len}\vec{U})<\kappa\\ \omega&\operatorname{cf}^{V}(\operatorname{len}\vec{U})=\kappa\end{cases}\]
Proof.: We split into cases.
**Case 0:**\(\operatorname{len}\vec{U}\) is a successor ordinal. In this case \(\operatorname{cf}\alpha_{*}\) is \(\omega\). Indeed, the ordinals \(\alpha\) at which \(\gamma_{\alpha}\) is the index of the last measure in \(j_{\alpha}(\vec{U})\) form an \(\omega\)-sequence \(\langle\alpha_{n}\rangle\) cofinal at \(\alpha_{*}\). Using the pre-coherency of the sequence, \(\gamma_{\alpha}\) can be read from \(P\).
**Case 1:**\(\operatorname{cf}\operatorname{len}\vec{U}=\mu<\kappa\). Pick a sequence \(\langle\rho_{i}\mid i<\mu\rangle\) cofinal at \(\operatorname{len}\vec{U}\). Since \(\mu<\kappa\), which is the critical point, for all \(\alpha\) (and in particular, for \(\alpha_{*}\)), \(\operatorname{cf}^{M_{\alpha}}\operatorname{len}j_{\alpha}(\vec{U})=\mu\).
Consider \(\alpha_{*}\). For each \(i\), there are unboundedly many \(\beta<\alpha_{*}\) such that \(\gamma_{\beta}=j_{\beta}(\rho_{i})\). Let \(\beta_{i}\) be the least such ordinal. Then, the \(\beta_{i}\) are increasing and cofinal at \(\alpha_{*}\). Indeed, below each \(\beta_{i}\), the images of \(\rho_{j}\), for all \(j<i\), as well as all lower ordinals, appear unboundedly. Again, using the pre-coherency, this cofinal sequence at \(j_{\alpha_{*}}(\kappa)\) can be computed in \(M_{\alpha_{*}}[P]\). The cofinality cannot be lower than that, as otherwise the cofinality of \(\mu\) would be smaller in \(M_{\alpha_{*}}[P]\) than in \(V\), which is impossible since \(M_{\alpha_{*}}[P]\subseteq V\).
**Case 2:**\(\operatorname{cf}^{V}\operatorname{len}\vec{U}=\kappa\). In this case we need to show that \(\operatorname{cf}\alpha_{*}=\omega\).
Fix a sequence \(\vec{\delta}=\langle\delta_{\xi}\mid\xi<\kappa\rangle\) cofinal at \(\operatorname{len}\vec{U}\). Let us define by induction a sequence of ordinals cofinal at \(\alpha_{*}\).
Let \(\alpha_{0}=0\).
Let \(n<\omega\). Denote \(\rho_{n}=j_{\alpha_{n}+1}(\vec{\delta})(j_{\alpha_{n}}(\kappa))\). Since \(\rho_{n}<j_{\alpha_{n}+1}(\operatorname{len}\vec{U})\), we may define \(\alpha_{n+1}\) to be the least ordinal such that \(\gamma_{\alpha_{n+1}}=j_{\alpha_{n}+1,\alpha_{n+1}}(\rho_{n})\).
Let us show that \(\alpha_{*}=\sup\alpha_{n}\). Indeed, let \(\alpha_{\omega}=\sup\alpha_{n}\), and let us assume towards a contradiction that \(\alpha_{\omega}\neq\alpha_{*}\), so \(\gamma_{\alpha_{\omega}}\) is an ordinal \(<j_{\alpha_{\omega}}(\operatorname{len}\vec{U})\).
Since \(\alpha_{\omega}\) is a limit ordinal, \(\kappa_{\alpha_{\omega}}=\sup_{n<\omega}\kappa_{\alpha_{n}}\) and thus \(j_{\alpha_{\omega}}(\vec{\delta})\) is cofinal at \(j_{\alpha_{\omega}}(\operatorname{len}\vec{U})\), and there is \(n\) such that the \(\kappa_{\alpha_{n}}\)-th point of this sequence exceeds \(\gamma_{\alpha_{\omega}}\). Without loss of generality, \(\gamma_{\alpha_{\omega}}=j_{\alpha_{n},\alpha_{\omega}}(\bar{\gamma})\).
Now, as for every \(m>n\), \(\rho_{m}>j_{\alpha_{n},\alpha_{m}}(\bar{\gamma})\), there must be an ordinal \(\beta\) between \(\alpha_{m}\) and \(\alpha_{m+1}\) such that \(j_{\beta,\alpha_{\omega}}(\gamma_{\beta})=\gamma_{\alpha_{\omega}}\), contradicting the definition of \(\gamma_{\alpha_{\omega}}\).
We conclude that if \(\operatorname{len}\vec{U}<\kappa^{+}\) then \(\operatorname{cf}\alpha_{*}<\kappa\). Moreover, this can be decoded from \(P\) itself, and thus this cofinality is going to be correctly computed in \(M_{\alpha_{*}}[P]\).
**Lemma 64**.: _If \(\operatorname{cf}^{V}(\operatorname{len}\vec{U})=\kappa^{+}\) then \(M_{\alpha_{*}}[P]\models j_{\alpha_{*}}(\kappa)\) is regular._
Proof.: We will prove the claim for the case \(\operatorname{len}\vec{U}=\kappa^{+}\). The general case is similar.
**Claim 65**.: _For every \(\alpha\), \(M_{\alpha}[P\restriction\alpha]\) is a \(\kappa^{+}_{\alpha}\)-c.c. extension of \(M_{\alpha}\) and in particular \(\kappa^{+}_{\alpha}\) is preserved._
The proof for this claim is very similar to the proof of Lemma 11, with the additional complication that every function in \(M_{\beta}[P\restriction\beta]\) needs to be bounded (using an inductive hypothesis) by a corresponding function from \(M_{\beta}\). 4
Footnote 4: There is a different way to show that the regularity of \(\kappa^{+}_{\alpha}\) is preserved, by showing that \(\kappa_{\alpha}\) must be singular in \(M_{\alpha}[P\restriction\alpha]\) (using an inductive hypothesis and the previous theorem) and thus the cofinality of this cardinal must be below \(\kappa_{\beta}\) for some \(\beta<\alpha\). Similar argument appears ahead.
Let us assume now that \(\operatorname{cf}^{M_{\alpha_{*}}[P]}(j_{\alpha_{*}}(\kappa))<j_{\alpha_{*}}(\kappa)\). So, there is \(\alpha<\alpha_{*}\) such that \(\operatorname{cf}^{M_{\alpha_{*}}[P]}(j_{\alpha_{*}}(\kappa))<\kappa_{\alpha}\) and thus \(M_{\alpha}[P\restriction\alpha]\models\operatorname{cf}\alpha_{*}<\kappa_{\alpha}\).
In this model, \(M_{\alpha}[P\restriction\alpha]\), one can compute the rest of the iteration \(j_{\alpha,\alpha_{*}}\) and in particular the sequence \(\langle\gamma_{\beta}\mid\beta<\alpha_{*}\rangle\). It is clear that this sequence is cofinal at \(j_{\alpha_{*}}(\kappa^{+})=\operatorname{len}j_{\alpha_{*}}(\vec{U})\).
As \(P\) is cofinal at \(j_{\alpha_{*}}(\kappa)\), \(\operatorname{cf}^{M_{\alpha}[P\restriction\alpha]}(\alpha_{*})=\operatorname {cf}^{M_{\alpha}[P\restriction\alpha]}(j_{\alpha_{*}}(\kappa))<\kappa_{\alpha}\).
Since the embedding \(j_{\alpha,\alpha_{*}}\) is continuous at \(j_{\alpha}(\kappa^{+})\), we conclude that the cofinality of \(j_{\alpha_{*}}(\kappa^{+})\) in \(M_{\alpha}[P\restriction\alpha]\) must be the same as the cofinality of \(j_{\alpha}(\kappa^{+})\), which is strictly larger than \(\kappa_{\alpha}\), by the chain condition of the forcing.
But this is a contradiction -- on the one hand the cofinality of \(\alpha_{*}\) must be strictly below \(\kappa_{\alpha}\) and on the other hand it must be \(j_{\alpha}(\kappa^{+})\).
**Lemma 66**.: _Let \(\vec{U}\) be a Mitchell increasing sequence of measures and let us consider the corresponding iteration._
_If \(\alpha<\alpha_{*}\) satisfies that \(\gamma_{\alpha}\) is strictly larger than \(j_{\beta,\alpha}(\gamma_{\beta})\) for all \(\beta<\alpha\), then the embedding \(j_{\alpha+1,\alpha_{*}}\colon M_{\alpha+1}\to M_{\alpha_{*}}\) lifts to an embedding \(\tilde{j}_{\alpha+1,\alpha_{*}}\colon M_{\alpha+1}[P\restriction\alpha]\to M_{ \alpha_{*}}[P\restriction\alpha]\)._
Proof.: Without loss of generality, \(\gamma_{\alpha}\in j^{\prime\prime}_{\alpha}\operatorname{len}\vec{U}\). Otherwise, we will need to repeat the following process finitely many times.
Let \(\bar{\gamma}\) be an ordinal such that \(j_{\alpha}(\bar{\gamma})=\gamma_{\alpha}\).
Let us consider the ultrapower by \(U_{\bar{\gamma}}\), \(k_{0}\colon V\to N\). By definition, in this model, the sequence \(\vec{U}\restriction\bar{\gamma}\) exists. By our choice of \(\gamma_{\alpha}\), and as \(P(\kappa),\vec{U}\restriction\bar{\gamma}\in N\), if we start iterating \(N\) based on \(\vec{U}\restriction\bar{\gamma}\) we will obtain exactly the iteration \(j_{\alpha}\restriction N\) and \(j_{\alpha}(\bar{\gamma})=\gamma_{\alpha}\).
Next, the following diagram commutes:
\(\begin{CD}V@>{k_{0}}>{}>N@>{k_{\alpha+1,\alpha_{*}}}>{}>N_{\alpha_{*}}\\ M_{\alpha}@>{j_{\alpha,\alpha+1}}>{}>M_{\alpha+1}@>{j_{\alpha+1,\alpha_{*}}}>{}>M_{ \alpha_{*}}\end{CD}\)
where \(k_{\alpha+1,\alpha_{*}}\) is the iteration as defined in \(N\) using \(k_{0}(\vec{U})\).
In \(N\), one can compute \(P\upharpoonright\alpha\) and as \(k_{0}(\kappa)\) is an inaccessible much larger than \(\kappa\) in \(N\), this set is bounded below \(k_{0}(\kappa)\). Thus, the embedding \(k_{\alpha+1,\alpha_{*}}\) can be restricted to the class \(M_{\alpha+1}[P\upharpoonright\alpha]\). Moreover, since \(k_{\alpha+1,\alpha_{*}}\upharpoonright M_{\alpha+1}=j_{\alpha+1,\alpha_{*}}\), we conclude that the image of this map is going to be contained in \(M_{\alpha_{*}}[P\upharpoonright\alpha]\).
In order to show elementarity, we recall the definition
\[M_{\alpha+1}[P\upharpoonright\alpha] =\bigcup_{\zeta\in\operatorname{Ord}}L[M_{\alpha+1}\cap V_{\zeta},P\upharpoonright\alpha]\] \[M_{\alpha_{*}}[P\upharpoonright\alpha] =\bigcup_{\zeta\in\operatorname{Ord}}L[M_{\alpha_{*}}\cap V_{ \zeta},P\upharpoonright\alpha]\]
so the restriction of \(k_{\alpha+1,\alpha_{*}}\) to each component is elementary, and thus it is elementary.
**Theorem 67**.: _Let \(\vec{U}\) be a Mitchell increasing sequence of measures and let us assume that \(\operatorname{len}\vec{U}<\kappa^{+}\). Then \(\bigcap M_{\alpha}[P\upharpoonright\alpha]=M_{\alpha_{*}}[P]\)._
Proof.: Let us assume by induction on \(\kappa\) and \(\operatorname{len}\vec{U}\) that the theorem holds, namely that \(\bigcap M_{\alpha}[P\upharpoonright\alpha]=M_{\alpha_{*}}[P]\). Let \(\langle\beta_{i}\rangle\) be the cofinal sequence at \(\alpha_{*}\) defined in the cases above.
**Case 0:** len \(\vec{U}\) is a successor ordinal. In this case, apply the inductive hypothesis for \(\operatorname{len}\vec{U}\) (and elementarity) in the model \(M_{\beta_{n}+1}[P\upharpoonright\beta_{n}]\). We get that \(M_{\beta_{n+1}}[P\upharpoonright\beta_{n+1}]=\bigcap_{\alpha\in[\beta_{n},\beta_{n+1})}M_{\alpha}[P\upharpoonright\beta_{n}][P\upharpoonright[\beta_{n},\alpha)]\). This is true, by Lemma 66, applied finitely many times. So, in order to show that the theorem holds, consider \(X\in\bigcap_{n<\omega}M_{\beta_{n}+1}[P\upharpoonright\beta_{n}]\), and use the elementary embeddings \(\tilde{j}_{\beta_{n}+1,\alpha_{*}}\colon M_{\beta_{n}+1}[P\upharpoonright\beta_{n}]\to M_{\alpha_{*}}[P\upharpoonright\beta_{n}]\), and repeat the argument of Theorem 14.
**Case 1:** cf len \(\vec{U}<\kappa\). This case is the same, with notational differences, using the closure of the models \(M_{\alpha}[P]\) under \(\kappa\)-sequences.
**Case 2:** cf len \(\vec{U}=\kappa\). In this case, cf \(\alpha_{*}=\omega\) and we can repeat the argument of Case 0.
It is worth mentioning that the intersection theorem is quite weak in this case. Indeed, if we applied it to an arbitrary sequence of measures, which might fail to be Mitchell increasing, it would still hold, but it might be quite degenerate. For example, if we look at an iteration of length \(\omega+1\) of the same measure \(U\) and look at the model \(M_{\omega+1}[P]\), then since \(P\upharpoonright\omega\) defines the normal measure \(j_{\omega}(U)\), this model is going to be simply \(M_{\omega}[P]\) (in particular, class many cardinals of \(M_{\omega+1}\) are collapsed in \(M_{\omega+1}[P]\)).
**Claim 68**.: _Let \(\vec{U}\) be a Mitchell increasing sequence of measures of length \(\kappa^{+}\)._
_Then \(\bigcap_{\alpha<\alpha_{*}}M_{\alpha}[P\upharpoonright\alpha]\) is strictly larger than \(M_{\alpha_{*}}[P\upharpoonright\alpha_{*}]\)._
Proof.: Let us show that \(\Gamma=\langle j_{\alpha,\alpha_{*}}(\gamma_{\alpha})\mid\alpha<\alpha_{*}\rangle\) belongs to the intersection model, but not to the generic extension.
First, let us show by induction on \(\beta\) that for every sequence of measures on \(\kappa\) of length \(\beta<\kappa^{+}\), say \(\vec{U}^{\prime}\), the corresponding sequence \(\Gamma^{\prime}\) belongs to \(M^{\prime}_{\beta_{*}}[P^{\prime}]\) (where all those objects are defined using \(\vec{U}^{\prime}\)).
Let us assume that the claim is proved for every \(\beta^{\prime}<\beta\), and let us consider the case of a sequence of length \(\beta\). Since \(M_{\beta_{*}}[P]=\bigcap_{\delta<\beta_{*}}M_{\delta}[P\upharpoonright\delta]\), it is enough to show that for every \(\delta<\beta_{*}\), \(\Gamma\in M_{\delta}[P\upharpoonright\delta]\).
We prove that \(\Gamma\in M_{\delta}[P\upharpoonright\delta]\) using a second level of induction, on \(\delta<\beta_{*}\). Note that since \(\Gamma\upharpoonright[\delta,\beta_{*})\in M_{\delta}\), it is enough to show that \(\Gamma\upharpoonright\delta\in M_{\delta}[P\upharpoonright\delta]\).
Let \(\rho<\delta\) be the last ordinal such that \(j_{\rho,\delta}(\gamma_{\rho})\geq\gamma_{\delta}\), assuming that there is an ordinal \(\rho\) such that \(j_{\rho,\delta}(\gamma_{\rho})\geq\gamma_{\delta}\). If there is such an ordinal then there is a maximal one, since below every \(\rho\) which is large enough so that \(\gamma_{\delta}\) will be in the image of \(j_{\rho,\delta}\), and \(j_{\rho,\delta}(\gamma_{\rho})>\gamma_{\delta}\), there are unboundedly many ordinals \(\rho^{\prime}\) such that \(j_{\rho^{\prime},\delta}(\gamma_{\rho^{\prime}})=\gamma_{\delta}\). Using the definition of \(\gamma_{\delta}\), we know that this set is bounded, and using the definition of \(\gamma_{\rho}\), we know that it is closed.
By the inductive hypothesis, \(\Gamma\upharpoonright\delta\in M_{\rho}[P\upharpoonright\rho]\). In \(M_{\rho+1}\), the iteration up to \(\delta\) is definable using the measure sequence \(j_{0,\rho+1}(\vec{U}\upharpoonright\gamma_{\rho})\), which by the (external) induction hypothesis, pushed forward by \(j_{0,\rho+1}\), satisfies that \(M_{\delta}[P\upharpoonright[\rho,\delta)]\) contains \(\Gamma\upharpoonright[\rho,\delta)\). Combining all together, the result follows.
Next, let us verify that \(\Gamma\notin M_{\alpha_{*}}[P]\). The map \(\kappa_{\alpha}\mapsto\gamma_{\alpha}\) is a surjection on \(j_{\alpha_{*}}(\operatorname{len}\vec{U})\) which is \(j_{\alpha_{*}}(\kappa^{+})\), and in particular \(j_{\alpha_{*}}(\kappa^{+})\) is collapsed. But, this is impossible, by Claim 65.
**Question 69**.: _Is \(M_{\alpha_{*}}[P]\) always closed under \(\kappa\)-sequences?_
**Question 70**.: _Is there a parallel for the intersection theorem for Radin forcing with \(o(\kappa)\geq\kappa^{+}\)?_
|
2306.13845 | Stimulated Emission of Radiation and the Black Hole Information Problem | The quantum theory of black holes has opened up a window to study the
intersection of general relativity and quantum field theory, but perceived
paradoxes concerning the fate of classical information directed at a black hole
horizon, as well as concerning the unitarity of the evaporation process, have
led researchers to question the very foundations of physics. In this
pedagogical review I clarify the ramifications of the fact that black holes not
only emit radiation spontaneously, but also respond to infalling matter and
radiation by emitting approximate clones of those fields in a stimulated
manner. I review early purely statistical arguments based on Einstein's
treatment of black bodies, and then show that the Holevo capacity of the black
hole (the capacity to transmit classical information through a quantum channel)
is always positive. I then show how stimulated emission turns the black hole
into an almost optimal quantum cloning machine, and furthermore discuss the
capacity of black holes to transmit quantum information. Taking advantage of an
analogy between black hole physics and non-linear optics I show that a
calculation of the evolution of a black hole over time, using a discretization
of the black hole $S$-matrix path integral, yields well-behaved Page curves
suggesting that black hole evaporation is unitary. Finally, I speculate about
possible observable consequences of stimulated emission of radiation in black
holes. | Christoph Adami | 2023-06-24T03:05:48Z | http://arxiv.org/abs/2306.13845v1 | # Stimulated Emission of Radiation and the Black Hole Information Problem
###### Abstract
The quantum theory of black holes has opened up a window to study the intersection of general relativity and quantum field theory, but perceived paradoxes concerning the fate of classical information directed at a black hole horizon, as well as concerning the unitarity of the evaporation process, have led researchers to question the very foundations of physics. In this pedagogical review I clarify the ramifications of the fact that black holes not only emit radiation spontaneously, but also respond to infalling matter and radiation by emitting approximate clones of those fields in a _stimulated_ manner. I review early purely statistical arguments based on Einstein's treatment of black bodies, and then show that the Holevo capacity of the black hole (the capacity to transmit classical information through a quantum channel) is always positive. I then show how stimulated emission turns the black hole into an almost optimal quantum cloning machine, and furthermore discuss the capacity of black holes to transmit _quantum_ information. Taking advantage of an analogy between black hole physics and non-linear optics I show that a calculation of the evolution of a black hole over time, using a discretization of the black hole \(S\)-matrix path integral, yields well-behaved Page curves suggesting that black hole evaporation is unitary. Finally, I speculate about possible observable consequences of stimulated emission of radiation in black holes.
## I Introduction
Black-hole quantum physics has been an exciting but frustrating area of research for almost fifty years--ever since Hawking discovered that black holes described in quantum field theory become less black than their name suggests (Hawking, 1975). In classical physics black holes are completely black 1 and do not emit any particles because all classical trajectories must end up in the black hole singularity. According to a semi-classical calculation in curved-space quantum field theory, however, black holes emit radiation through the spontaneous emission of particles near the event-horizon, while the black hole gives up mass in the process. Ultimately, if no new mass is accreted, black holes may even evaporate completely, leaving behind only Hawking's eponymous radiation.
Footnote 1: This holds true only for incident radiation with vanishing impact parameter, as modes with angular momentum can be scattered.
The mere existence of Hawking radiation revealed a daunting problem almost immediately (Hawking, 1976). Particles that accrete onto a black hole can be viewed as carriers of information. For example, a particle's identity (whether it is a photon, electron, or proton), its angular momentum, mass, etc., can encode information that appears to be lost completely and irretrievably behind the event horizon. While it is expected that the Hawking radiation theoretically consists out of all kinds of particles (all those that can be created in pairs via vacuum fluctuations), the _thermal_ nature of the radiation seems to imply that its quantum numbers are completely uncorrelated to those that enter the horizon of an already formed black hole, or even of the matter and radiation that created it. If Hawking radiation was the only energy left over after the evaporation of the black hole, all information about the state of matter that initially created the black hole, as well as the information carried by particles that accreted on to it later, would forever be lost.
This is a more serious problem than is perhaps obvious from the start. Such a loss of information is not merely inconvenient. It would signal the breakdown of some of the most fundamental and cherished laws of nature that we have been able to establish, namely the conservation of probability. Probability conservation is built into quantum mechanics by describing the time-dependence of a wavefunction by unitary operators \(U=e^{-i/\hbar H}\), where \(H\) is a Hermitian Hamiltonian operator. In quantum field theory, unitarity is ensured by the unitarity of the \(S\)-matrix, which is in itself a consequence of a Hermitian interaction Hamiltonian (see, e.g., (Sakurai, 1967)). On the one hand, both quantum mechanics and in particular quantum field theory have been exceedingly accurate in their prediction of the microscopic properties of matter and light, to such an extent that it would be shocking if unitarity would be violated by such macroscopic objects as black holes.
On the other hand, we do not have a consistent theory of quantum gravity, in particular not one that is expected to accurately describe the late stage of black-hole evaporation, where gravitational fields are expected to be very strong. One could therefore not dismiss out of hand that our current theories simply break down under such extreme circumstances, and that a consistent theory of quantum gravity would remedy the dilemma. But the following reasoning suggests that a full-fledged theory of quantum gravity cannot be necessary to solve the problem of probability conservation in black hole evaporation. Classical information, as mentioned above, is carried by ordinary degrees of freedom such as spin, polarization, momentum etc., all of which are adequately described in a semiclassical theory of gravity. The approximations involved in the semiclassical theory concern the treatment of the space-time metric: it is left unquantized, meaning that it is treated as a background field. While a consistent theory of quantum gravity should treat the space-time metric as a quantum mechanical variable that can be entangled with other degrees of freedom, it is not plausible that the uncertainty associated with the decoherence of the metric will have a significant impact on black hole dynamics until perhaps the black hole is of Planck mass (Wald, 1994). Yet, during the period where quantum effects of the metric are small (large, massive black holes), predictability would _already_ be lost because the incident quantum numbers are inaccessible behind the horizon. Indeed, the principle of microscopic time reversibility relies on predictability at _all_ times. Thus, it is not necessary to wait for the complete evaporation of the black hole in order to see a problem with the standard description of black holes. Moreover, at the point where quantum effects on the metric are expected to be significant, it is implausible that information can be recovered from the evaporation of a Planck-size black hole because it is unclear how it could store that much entropy. Thus, we are encouraged to look for a consistent treatment of black hole dynamics that allows for predictability _at all times_, and explains the apparent lack of coherence in a completely unitary manner.
Before discussing possible solutions to the "black hole information problem", I'd like to emphasize that there are really _two_ such problems. The one that gets most of the attention is the problem of whether black holes can turn pure states into mixed states, as Hawking claimed they do (Hawking, 1976). Even though this question is often couched in language that suggests that it is a problem of information conservation ("what happens to the information about the initial state of the matter and radiation that formed the black hole"?) it is strictly speaking not a problem of
information transmission, but rather a problem in showing that the quantum state after the evaporation of the black holes has returned to the pure state it started out as, before the formation of the black hole. Quite generally, the question should be: "Is black hole formation and evaporation described by unitary dynamics?"
The literature usually distinguishes four standard alternatives to deal with the apparent "information loss". I briefly summarize them here, but refer to (Fabbri and Navarro-Salas, 2005; Preskill, 1993; Unruh and Wald, 2017) for a more thorough exposition. The first alternative is the most conservative: it claims that information is released with the Hawking radiation after all, or to put it more concisely, that the state after evaporation of the black hole somehow returns to purity. This type of scenario has been advocated by Page and by Bekenstein (Bekenstein, 1993; Page, 1993), and also has more modern support from theories in which quantum gravity is coupled to dilatons (Almheiri _et al._, 2020; Callan Jr. _et al._, 1992). One of the most common objections to this scenario is that the thermal nature of the Hawking radiation precludes it from carrying any information. We will see below that there is in fact no basis to this objection, as information can be encrypted in maximum entropy states.
The second alternative posits that, conceivably, the information remains locked inside of the black hole, but the black hole does not disappear but rather becomes a stable remnant. This explanation suffers from several problems, such as the necessity of requiring almost infinitely-degenerate Planck-sized black holes that should be observable via pair formation. Moreover, it does not address how those remnants would constitute a pure state. The third general category of explanations claims that it is possible that all the accumulated information "comes out at the very end", that is, after most (or all) of the mass of the black hole was radiated away. This suggestion runs into the problem that an arbitrary large amount of information cannot be radiated away in a finite amount of time. Finally, a fourth alternative suggested that the information is not lost, but rather is sealed away into one or more nucleating baby universes. This type of explanation has now even fewer defenders than it had at its inception.
The second "information problem" concerns the fate of information that interacts with an already-formed black hole. While sometimes discussed at the same time as the problem of the unitarity of the black-hole evaporation process, these two problems are actually quite separate. While Hawking radiation (according to the first alternative to information loss discussed above) might carry the imprint of the formation of the black hole, it is very unlikely that it carries information about the identity of particles that interact with the black hole horizon at late times. This problem turns out to be a problem in quantum channel theory: can information that is absorbed at the black hole event horizon be reconstituted by an inertial observer that can only observe radiation emanating from the black hole? Because the black hole is treated as a static quantity here (the "communication channel"), statements about the capacity of this channel do not address the unitarity of the evaporation process.
In this pedagogical review, I will address both problems using modern methods of quantum information theory. The problem of communicating via a black hole is solved by realizing that information is not lost within a black hole because a perfect copy of the information is always maintained outside of the event horizon, thanks to a well-known physical mechanism: the stimulated emission of particles at the event horizon that must accompany the spontaneous emission (i.e., Hawking radiation) in any consistent theory of black body radiation. It turns out that particles emitted via stimulated emission provide the "quantum hair" (Coleman _et al._, 1992) necessary for a conceptual understanding of macroscopic black holes.
I will first present a simple statistical argument, which appeared in the same year as Hawking's paper, pointing out that the process of stimulated emission was missing from the latter's discussion. I then discuss the curved-space quantum field theory derivation of stimulated emission in response to early- and late-time modes, and go on to show how to calculate the classical and quantum information transmission capacity of black holes. We will see that the capacity to transmit classical information with arbitrary accuracy over a quantum channel (the so-called "Holevo capacity") is always positive, meaning that information is never lost in black holes. The capacity to transmit quantum information turns out to vanish in some cases, but we'll see that this ensures that the laws of physics are _not_ broken.
I then present an approach that allows us to move beyond the semi-classical description of black holes to illustrate how black holes might evaporate in a unitary manner, giving rise to the well-known Page curves. This approach is speculative in the sense that I have to assume a particular form for the interaction Hamiltonian of black holes with radiation modes, but is arguably less speculative than approaches that rely on coupling gravity to dilaton fields or conformal field theories. Moreover, because the coupling between black-hole and Hawking/partner modes is exactly equivalent to the coupling of modes that governs optical parametric amplifiers, there is a chance that laboratory experiments can shed light on black-hole evaporation dynamics in a manner similar to black-hole analogue experiments. I close with speculations about observable consequences of stimulated emission in black holes.
## II Statistical black hole thermodynamics
To set the stage (and my notation), I will repeat a simple "maximum entropy" argument due to Bekenstein (Bekenstein, 1975) to obtain the probability distribution \(p(n)\) of the number of quanta \(n\) in any given outgoing mode emitted by a black hole via spontaneous emission (the distribution of Hawking radiation), using only Hawking's result that the mean number of outgoing quanta is (Hawking, 1975)
\[\langle n\rangle=\frac{\Gamma}{e^{x}-1}\;.\] (II.1)
Here, \(\Gamma\) is the absorption coefficient of the black hole (the "gray-body factor", so that \(1-\Gamma\) is the black hole's reflectivity), and \(x=\hbar\omega/T_{\rm BH}\)2, where \(\omega\) is the mode's frequency, and \(T_{\rm BH}\) the black hole temperature. Bekenstein derived the result (for a single massless scalar bosonic mode with occupation number \(n\))
\[p(n)=(1-e^{-\lambda})e^{-n\lambda}\] (II.2)
simply by demanding that the entropy of the outgoing radiation
\[S=-\sum_{n}p(n)\log p(n)\] (II.3)
is maximal, with the constraints \(\sum_{n}p(n)=1\) and \(\sum_{n}np(n)=\frac{\Gamma}{e^{x}-1}\) implemented via Lagrange multipliers. In Eq. (II.2), the Lagrange multiplier \(\lambda\) is related to black hole parameters via
\[e^{-\lambda}=\frac{\Gamma}{e^{x}-1+\Gamma}\;.\] (II.4)
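As a quick consistency check (not spelled out in the original text): extremizing \(S-\lambda_{0}\left(\sum_{n}p(n)-1\right)-\lambda\left(\sum_{n}np(n)-\langle n\rangle\right)\) with respect to \(p(n)\), where \(\lambda_{0}\) is the multiplier enforcing normalization, yields \(p(n)\propto e^{-n\lambda}\), which is the geometric form (II.2), and inserting the value (II.4) of \(\lambda\) indeed reproduces Hawking's mean (II.1),
\[\sum_{n=0}^{\infty}n\,p(n)=\frac{e^{-\lambda}}{1-e^{-\lambda}}=\frac{\Gamma/(e^{x}-1+\Gamma)}{(e^{x}-1)/(e^{x}-1+\Gamma)}=\frac{\Gamma}{e^{x}-1}\;.\]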
It should be noted that distribution (II.2) (which was not given by Hawking) was also derived independently using full-fledged curved-space quantum field theory by Wald (Wald, 1975).
Bekenstein now considered what would happen if a black hole that produces outgoing radiation with distribution \(p(n)\) is immersed in a heat bath with temperature3\(T\). Together with his graduate student Amnon Meisels, Bekenstein found that the distribution for the ensuing outgoing radiation \(p_{o}(n)\) was _not_ compatible with the assumption that this radiation was composed only of spontaneous emission and radiation scattered from the surface of the black hole with a reflectivity \(1-\Gamma_{0}\) (so that \(\Gamma_{0}\) is the probability that a single quantum is absorbed by the black hole). Instead, they found that
Footnote 2: In general, for a charged and rotating black hole, \(x=\hbar\omega/T_{\rm BH}-\hbar m\Omega-\epsilon\Phi\), with \(m\) the azimuthal quantum number, \(\epsilon\) the electric charge, \(\Omega\) the rotational frequency, and \(\Phi\) the electrical potential. To keep matters simple, I will only treat uncharged non-rotating black holes here.
Footnote 3: This radiation has the distribution \(p_{\star}(n)=(1-e^{-y})e^{-ny}\) where \(y=\hbar\omega/T\) and \(T\) the temperature of the heat bath.
\[p_{o}(n)=\sum_{m=0}^{\infty}p(n|m)p_{\star}(m)\;,\] (II.5)
where \(p(n|m)\) is the probability distribution to observe \(n\) outgoing particles _given_ that \(m\) such particles were incident on the black hole. This distribution is independent of the environment that the black hole finds itself in, and respects the detailed balance condition
\[e^{-xm}p(n|m)=e^{-xn}p(m|n)\;.\] (II.6)
This condition implies microscopic reversibility, and allows us to see that the mean number of outgoing particles \(\langle n\rangle\) is given by the sum of Hawking's spontaneous emission term (II.1) _and_ an average number \(m(1-\Gamma)\) of the \(m\) incident quanta returned outward
\[\langle n\rangle=\frac{\Gamma}{e^{x}-1}+m(1-\Gamma)\;.\] (II.7)
By using the result that the number returned by pure scattering has to be \(m(1-\Gamma_{0})\) (where \(1-\Gamma_{0}\) is the pure scattering reflectivity introduced earlier), Bekenstein and Meisels could show that
\[\Gamma=\Gamma_{0}(1-e^{-x})\;.\] (II.8)
This result shows that \(1-\Gamma\) is in fact the _effective_ reflectivity of the black hole, so that the black hole absorptivity \(\Gamma\) is the sum of \(\Gamma_{0}\) and a _negative_ contribution due to stimulated emission \(-\Gamma_{0}e^{-x}\), which is present in _all_ modes. Specifically, (II.7) can be rewritten to read
\[\langle n\rangle=(1-\Gamma_{0})m+(m+1)\frac{\Gamma}{e^{x}-1}\,\] (II.9)
where the \(m\) in the second term on the right hand side of Eq. (II.9) refers to the stimulated response to \(m\) incoming particles, and the "1" to the spontaneously emitted particles (Hawking radiation). These different terms are indicated in Fig. 2.1, which also shows the number of anti-particles generated inside of the black hole via spontaneous and stimulated emission (in order to conserve particle numbers, both spontaneous and stimulated emission occur via pair formation, as we will see later).
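As a quick arithmetic check (my own, not part of the original derivation), one can confirm numerically that (II.7) and (II.9) agree once the relation (II.8) is inserted; the parameter values below are arbitrary.

```python
# Consistency check: Eq. (II.7) and Eq. (II.9) coincide when
# Gamma = Gamma_0 (1 - e^{-x}) from Eq. (II.8) is used.
import numpy as np

Gamma0, x, m = 0.8, 1.2, 3               # arbitrary test values
Gamma = Gamma0 * (1 - np.exp(-x))        # Eq. (II.8)

n_II7 = Gamma / (np.exp(x) - 1) + m * (1 - Gamma)            # Eq. (II.7)
n_II9 = (1 - Gamma0) * m + (m + 1) * Gamma / (np.exp(x) - 1) # Eq. (II.9)
print(np.isclose(n_II7, n_II9))          # True
```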
The exact form of the distribution \(p(n|m)\) is complicated, and had to be inferred from Eq. (II.5) by a power series expansion. However, it was later shown to be _exactly equal_ to the result obtained with a full treatment in curved-space quantum field theory by Panangaden and Wald (Panangaden and Wald, 1977). The result was later confirmed by Audretsch and Muller (Audretsch and Muller, 1992), who redid the calculation using wave packets, taking into account the red shift, and studying the effect of both incoming particles and anti-particles.
## III Curved-space semi-classical quantum field theory
The results of Bekenstein and Meisels, Panangaden and Wald, and Audretsch and Muller did not convince everybody that information in black holes was preserved even though that work demonstrated microscopic reversibility. Schiffer, for example, argued that thermal radiation still overpowers stimulated emission for the vast majority of modes (Schiffer, 1993). Furthermore, taking the red shift into account revealed that outgoing modes at observed frequencies \(\sim 1/8\pi M_{\rm BH}\) (where \(M_{\rm BH}\) is the mass of the black hole in units where \(G=c=k=\hbar=1\) as usual) ought to be due to incoming modes that were present just as the black hole was forming, and therefore must have been enormously blue-shifted with respect to the outgoing late-time radiation (Jacobson, 1991). Thus, it was not clear how those particular calculations confront the question of what happens to late-time particles absorbed by an already-formed black hole. I will address this last point in this section by introducing Sorkin's treatment of early- and late-time incoming modes, and then answer the first question (does stimulated emission really conserve information?) by explicitly calculating the capacity of the black hole to transmit information encoded in both early-time and late-time modes in the section that follows, using the standard methods of quantum information theory.
To establish notation, I will first treat the simplest case: a perfectly absorbing black hole (\(\Gamma=1\)) and only early-time complex massless modes of energy \(\omega_{k}\) (\(\omega_{-k}\) for anti-particles). Even though we will later see that such a choice of absorptivity is inconsistent (and we will understand why), it is instructive to do this calculation first as it is simpler than the general case.
To introduce complex fields (which allow us to describe both particles and anti-particles) might at first glance seem like an unnecessary complication. Indeed, it is simpler (and still instructive) to study Hawking radiation using scalar fields only. However, because of the crucial role that negative-frequency modes play in this discussion, ignoring anti-particles (which are equivalent to particles traveling backwards in time) obscures some fundamental aspects of black hole physics, in which time-reversal invariance is key.
After the initial exposition of known (and therefore canonical, if not classical) results, I will introduce particles to the in-vacuum to understand how they stimulate the emission of particles in the out-vacuum outside of the horizon (and the emission of anti-particles beyond the horizon), and then introduce late-time modes using Sorkin's trick, allowing me to recover the gray-body absorptivity (and alleviate any worries about transplanckian frequencies at past infinity).
Consider the Penrose diagram in Fig. 3.1.
The relation between the operators annihilating the incoming modes \(a_{k}\) and \(b_{k}\) and the outgoing mode \(A_{k}\) is given by a Bogoliubov transformation (the annihilation and creation operators satisfy the commutation relations \([a_{k},a_{k^{\prime}}^{\dagger}]=[b_{k},b_{k^{\prime}}^{\dagger}]=\delta_{k,k^ {\prime}}\) and \([a_{k},a_{k^{\prime}}]=[b_{k},b_{k^{\prime}}]=[a_{k},b_{k^{\prime}}^{\dagger}]=0\))
\[A_{k}=e^{-iH}a_{k}e^{iH}=\alpha_{k}a_{k}-\beta_{k}b_{-k}^{\dagger}\] (III.1)
so that the unitary operator \(U=e^{-iH}\) (I set \(\hbar=1\)) maps the in-vacuum to the out-vacuum:
\[|0\rangle_{\rm out}=e^{-iH}|0\rangle_{\rm in}\.\] (III.2)
It is not an accident that I chose the letter \(H\) for the Hermitian operator. In general, we can write the time-dependent mapping from in-states to out-states in terms of the time evolution operator
\[U(t_{2},t_{1})={\sf T}e^{-i\int_{t_{1}}^{t_{2}}H(t^{\prime})dt^{\prime}}\,\] (III.3)
where \({\sf T}\) stands for Dyson's time-ordering operator and \(H(t)\) is the Hamiltonian describing the unitary evolution of the quantum state. This time evolution operator can be approximated using \(N\) small time slices \(\Delta t\) so that with \(t=N\Delta t\)
\[U(t)={\sf T}e^{-i\int_{0}^{t}H(t^{\prime})dt^{\prime}}\approx\prod_{i=1}^{N}e^ {-i\Delta tH_{i}},\] (III.4)
where \(H_{i}\) is the \(i\)-th time-slice Hamiltonian. In the static path approximation (one time slice) and absorbing \(\Delta t\) into the interaction strength, the operator (III.4) becomes the one implementing (III.2) with the Hamiltonian
\[H=i\sum_{k=-\infty}^{\infty}g_{k}\big{(}a_{k}^{\dagger}b_{-k}^{\dagger}-a_{k}b_{-k}\big{)}\;,\] (III.5)
where \(g_{k}\) is an "interaction strength" that, as we will see, sets the black hole temperature.
Figure 3.1: Penrose diagram showing the early-time modes \(a_{k}\) and \(b_{k}\), created at past infinity \(\mathscr{I}^{-}\) (just as the black hole formed) and traveling just outside and just inside of the event horizon towards future infinity \(\mathscr{I}^{+}\). The outgoing mode at future infinity is annihilated by \(A_{k}\).
Using the Baker-Campbell-Hausdorff theorem we can relate the coefficients \(\alpha_{k}\) and \(\beta_{k}\) in (III.1) to \(g_{k}\) via
\[\alpha_{k}^{2}=\cosh^{2}g_{k}\;,\qquad\beta_{k}^{2}=\sinh^{2}g_{k}\] (III.6)
and we have \(\alpha_{k}^{2}-\beta_{k}^{2}=1\). The standard arguments of Hawking (Hawking, 1975) enforcing analyticity on the solutions to the free field equations4 allow us to deduce that
Footnote 4: Despite the appearance of an interaction strength \(g_{k}\), \(H\) is a free-field Hamiltonian.
\[\alpha_{k}^{2}=e^{\omega_{k}/T_{\rm BH}}\beta_{k}^{2}\:,\] (III.7)
and relate \(g_{k}\) to \(T_{\rm BH}\) since \(\frac{\omega_{k}}{T_{\rm BH}}\approx\log(g_{k})+{\cal O}(g_{k}^{2})\). With these definitions out of the way, we can write down the out-vacuum state as
\[|0\rangle_{\rm out}=\prod_{k=-\infty}^{\infty}e^{g_{k}\left(a_{k}^{\dagger}b_ {-k}^{\dagger}-a_{k}b_{-k}\right)}|0\rangle_{\rm in}\;.\] (III.8)
Using the disentangling theorem for SU(1,1), we can evaluate (III.8) to become (writing the in-vacuum as the product state \(|0\rangle_{\rm in}=|0\rangle_{a}|0\rangle_{b}\))
\[|0\rangle_{\rm out}=\prod_{k=-\infty}^{\infty}\frac{1}{\cosh^{2}g_{k}}\sum_{ n_{k},n_{k}^{\prime}}e^{-(n_{k}+n_{-k}^{\prime})\omega_{k}/2T_{\rm BH}}|n_{k},n_{-k }^{\prime}\rangle_{a}|n_{k}^{\prime},n_{-k}\rangle_{b}\;.\] (III.9)
This is enough for us to recover the probability distribution of outgoing particles (II.2) (albeit for the case \(\Gamma=1\) as we do not treat reflection here), by calculating the density matrix of outgoing radiation (the radiation in "region I", see Fig 3.1) via tracing out the interior of the black hole (region II)
\[\rho_{I}={\rm Tr}_{\rm II}|0\rangle_{\rm out}\langle 0|=\prod_{k}\rho_{k} \otimes\rho_{-k}\;.\] (III.10)
As expected, the density matrix factorizes into a particle term and an anti-particle term with
\[\rho_{k}=\frac{1}{1+\beta_{k}^{2}}\sum_{n=0}^{\infty}\biggl{(}\frac{\beta_{k }^{2}}{1+\beta_{k}^{2}}\biggr{)}^{n}|n_{k}\rangle\langle n_{k}|=(1-e^{-\omega _{k}/T_{\rm BH}})\sum_{n_{k}=0}^{\infty}e^{-n_{k}\omega_{k}/T_{\rm BH}}|n_{k} \rangle\langle n_{k}|\;.\] (III.11)
This expression implies Bekenstein's (and therefore Hawking's) result for the single mode spontaneous emission probability Eq. (II.2), as \(p(n)=\langle n|\rho_{k}|n\rangle\) (note that \(e^{-\lambda}=e^{-x}\) for total absorption). The mean number of outgoing particles becomes
\[\sum_{k=-\infty}^{\infty}{\rm out}\langle 0|a_{k}^{\dagger}a_{k}|0\rangle_{\rm out }=\sum_{k=-\infty}^{\infty}\beta_{k}^{2}\;,\] (III.12)
which is the celebrated Planck distribution of Hawking radiation since
\[\beta_{k}^{2}=\frac{e^{-\omega_{k}/T_{\rm BH}}}{1-e^{-\omega_{k}/T_{\rm BH}}}\;.\] (III.13)
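The mapping above can also be checked by brute force on a truncated Fock space. The sketch below is my own illustration (it assumes numpy is available, keeps a single \(k\), and truncates the Fock space at \(D=20\)): it builds the single-mode term of the Hamiltonian (III.5), applies \(e^{-iH}\) to the in-vacuum, and confirms that the reduced state of the outside mode is thermal with mean occupation \(\beta_{k}^{2}=\sinh^{2}g_{k}\).

```python
import numpy as np

D = 20                                       # Fock-space truncation
a_op = np.diag(np.sqrt(np.arange(1, D)), 1)  # single-mode annihilation operator
I = np.eye(D)

g = 0.5                                      # coupling g_k (arbitrary illustrative value)
a = np.kron(a_op, I)                         # outside mode a_k
b = np.kron(I, a_op)                         # inside mode b_{-k}
H = 1j * g * (a.conj().T @ b.conj().T - a @ b)   # single-k term of Eq. (III.5)

# e^{-iH} via eigendecomposition of the Hermitian matrix H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

vac = np.zeros(D * D, dtype=complex); vac[0] = 1.0
out = U @ vac                                # |0>_out = e^{-iH}|0>_in, cf. Eq. (III.2)

rho = np.outer(out, out.conj()).reshape(D, D, D, D)
rho_a = np.einsum('ikjk->ij', rho)           # partial trace over the inside mode

n_mean = np.real(np.trace(np.diag(np.arange(D)) @ rho_a))
print(n_mean, np.sinh(g)**2)                 # agree up to truncation error
```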
Before considering the impact of particles entering the in-vacuum, let us take a closer look at the Hamiltonian (III.5) that we used to map past-infinity states to future-infinity states. This Hamiltonian is, as a matter of fact, very common in quantum optics, where it is known as the "squeezing" Hamiltonian that describes optical parametric amplification (generally, all quantum amplification processes can be described by Bogoliubov transformations (Leonhardt, 2010)). Quantum amplification is inherently a nonlinear process. In the simplest system, a _pump_ photon with frequency \(\omega_{p}\) is converted into two photons, called the _signal_ and _idler_, with frequencies \(\omega_{s}\) and \(\omega_{i}\), where \(\omega_{p}=\omega_{s}+\omega_{i}\). This process, called parametric downconversion, creates entangled photon pairs via the Hamiltonian (for a single mode)
\[H_{\rm OPA}=i\eta(a_{s}^{\dagger}b_{i}^{\dagger}-a_{s}b_{i})\;,\] (III.14)
where \(\eta\) is the coupling strength (which depends on the pump amplitude) and \(a_{s}^{\dagger}\) and \(b_{i}^{\dagger}\) are the creation operators for the signal and idler modes, respectively ("OPA" stands for "optical parametric amplification"). If we compare Hamiltonian (III.14) to Eq. (III.5), we see that the role of positive and negative frequency modes of Hawking radiation are here played by the signal and idler modes, and indeed the wavefunction of the signal-idler pair is simply (Nation _et al._, 2012)
\[|\Psi(t)\rangle=\frac{1}{\cosh\eta t}\sum_{n=0}^{\infty}e^{-\eta tn/2}|n_{i} \rangle|n_{s}\rangle\;,\] (III.15)
which we can compare to Eq. (III.9). In contrast to (III.9) that has both particles and anti-particles, the wave function for the signal/idler mode is that of a real (rather than a complex) scalar field and is explicitly time-dependent. Other quantum amplification processes that can be described by Bogoliubov transformations are the Unruh effect (Unruh, 1976) (particle creation by an accelerated observer) and the dynamical Casimir effect (Moore, 1970) (particle creation by oscillating mirrors), but only in the Unruh effect and Hawking radiation are the "signal" and "idler" modes causally disconnected. So, while the mathematics of mapping "in" to "out" states is similar, the physics of the processes can be quite different.
Bekenstein and Meisels found that (just as Einstein had derived (Einstein, 1917)) not only does the vacuum emit radiation spontaneously, it can also be stimulated to do so. To see this effect in curved-space quantum field theory, we need to consider \(m\) particles in the initial state
\[|\psi\rangle_{\rm out}=e^{-iH}|m\rangle_{a}|0\rangle_{b}=e^{-iH}\frac{1}{\sqrt {m!}}(a_{k}^{\dagger})^{m}|0\rangle_{\rm in}=\frac{1}{\sqrt{m!}}(A_{k}^{\dagger })^{m}e^{-iH}|0\rangle_{\rm in}\;.\] (III.16)
with \(H\) from Eq. (III.5) and using the Bogoliubov transformation (III.1). Carrying out this calculation for \(m_{k}\) particles in a single mode \(k\) incident on the black hole leads to an outgoing density matrix in region I (outside the black hole horizon) (Adami and Ver Steeg, 2014)
\[\rho_{\rm I}={\rm Tr}_{\rm II}|\psi\rangle_{\rm out}\langle\psi|=\rho_{k|m} \otimes\rho_{-k|0}\;.\] (III.17)
Here, the density matrix \(\rho_{k|m}\) of outgoing particles given that \(m\) particles were incident in mode \(k\) is (the density matrix of outgoing anti-particles \(\rho_{-k|0}\) looks just like the particle matrix with no incoming particles, shown in (III.11))
\[\rho_{k|m}=\frac{1}{(1+\beta_{k}^{2})^{m+1}}\sum_{n=0}^{\infty}\left(\frac{ \beta_{k}^{2}}{1+\beta_{k}^{2}}\right)^{n}{m+n\choose n}|m+n\rangle\langle m+ n|\;,\] (III.18)
which can be rewritten as
\[\rho_{k|m}=\sum_{n=0}^{\infty}p(n|m)|n\rangle\langle n|\] (III.19)
with the conditional probability (here \(x=\frac{\omega_{k}}{T_{\rm BH}}\) as before)
\[p(n|m)=(1-e^{-x})^{m+1}{m+n\choose n}e^{-nx}\;.\] (III.20)
If there are no particles entering the black hole (\(m=0\)), Eq. (III.20) reduces to \(p(n)=(1-e^{-x})e^{-nx}\), which is the \(\Gamma\to 1\) limit of (II.2). However, it is easily checked that the detailed balance condition (II.6) does not hold for the conditional probability (III.20). We will now see that this is due to our setting \(\Gamma=1\), which will turn out to be inconsistent: if there is stimulated emission, then the black hole cannot be fully absorbing: it _must_ return something to the outside world.
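A two-line numerical check (again my own illustration, with arbitrary values of \(x\), \(n\), and \(m\)) makes the violation of detailed balance explicit:

```python
# Check that the Gamma = 1 conditional distribution (III.20) violates
# the detailed-balance condition (II.6).
import numpy as np
from math import comb

def p_cond(n, m, x):                      # Eq. (III.20)
    return (1 - np.exp(-x))**(m + 1) * comb(m + n, n) * np.exp(-n * x)

x, n, m = 1.0, 2, 5
lhs = np.exp(-x * m) * p_cond(n, m, x)
rhs = np.exp(-x * n) * p_cond(m, n, x)
print(lhs, rhs)                           # the two sides differ
```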
In order to correctly describe scattering off of the black hole horizon in curved-space quantum field theory, I will use a trick due to Sorkin (Sorkin, 1987), which will ultimately allow us to recover Bekenstein and Meisels' \(p(n|m)\) that observes detailed balance. Sorkin's insight comes from the observation that when we discuss the capacity of a black hole to transmit information, we are not really interested in the information that was encoded in the particles that were present during the formation of the black hole (the modes \(a_{k}\) and \(b_{-k}\)). After all, whether a collapsing star is going to form a black hole in the future is uncertain, and choosing the timing of informational modes in such a way that they travel just outside of the black hole would be rather difficult. Besides, we know that such modes will
be exponentially redshifted. Sorkin instead introduces _late-time_ modes \(c_{k}\) that at future infinity are exponentially blue-shifted5 with respect to the early-time modes \(a_{k}\) and \(b_{-k}\), and therefore commute with them. Sorkin's late-time mode along with the early-time modes are shown in Fig. 3.2.
Footnote 5: Note that in order to keep with the previous definition of \(a_{k}\) and \(b_{-k}\) modes, I have changed the nomenclature of (Sorkin, 1987).
The Bogoliubov transformation that connects the outgoing mode \(A_{k}\) to \(a_{k}\) and the late-time mode \(c_{k}\) can now be written as (Sorkin, 1987):
\[A_{k}\ =\ e^{-iH}a_{k}e^{iH}=\alpha_{k}a_{k}-\beta_{k}b_{-k}^{\dagger}+\gamma_{k} c_{k}\;,\] (III.21)
with \(\alpha_{k}^{2}-\beta_{k}^{2}+\gamma_{k}^{2}=1\) to ensure unitarity. What is the Hamiltonian \(H\) that gives rise to this transformation? It turns out that it is given by the sum of the term we already had (which turned out to be formally equivalent to the Hamiltonian of an active optical element) and the Hamiltonian of a _passive_ optical element: a beam splitter (Leonhardt, 2010):
\[H=\sum_{k=-\infty}^{\infty}ig_{k}(a_{k}^{\dagger}b_{-k}^{\dagger}-a_{k}b_{-k} )+ig^{\prime}_{k}(a_{k}^{\dagger}c_{k}-a_{k}c_{k}^{\dagger})\;.\] (III.22)
The beam-splitter term in (III.22) describes the interaction of late-time modes with the black hole horizon using an interaction strength \(g^{\prime}_{k}\), which we will be able to relate to the black hole's reflectivity \({\cal R}\).
The Bogoliubov coefficients \(\alpha_{k}\), \(\beta_{k}\), and \(\gamma_{k}\) can be written in terms of \(g_{k}\) and \(g^{\prime}_{k}\) by using (III.22) in (III.21),
\[\alpha_{k} = \cos(g^{\prime}_{k}w)\;,\] (III.23) \[\beta_{k} = \frac{g_{k}}{g^{\prime}_{k}}\frac{\sin(g^{\prime}_{k}w)}{w}\;\;,\] (III.24) \[\gamma_{k} = -\frac{\sin(g^{\prime}_{k}w)}{w}\;,\] (III.25)
where \(w=\sqrt{1-(g_{k}/g^{\prime}_{k})^{2}}\).
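It is a one-line exercise to confirm that this parametrization respects the unitarity condition; a minimal numerical check (my own, with arbitrary values satisfying \(g_{k}<g^{\prime}_{k}\)) is:

```python
# Sanity check that the coefficients (III.23)-(III.25) satisfy
# alpha^2 - beta^2 + gamma^2 = 1.
import numpy as np

g, gp = 0.3, 0.9                          # g_k < g'_k, arbitrary illustrative values
w = np.sqrt(1 - (g / gp)**2)

alpha = np.cos(gp * w)
beta = (g / gp) * np.sin(gp * w) / w
gamma = -np.sin(gp * w) / w
print(alpha**2 - beta**2 + gamma**2)      # 1.0
```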
We can now proceed as before and construct the out-state using this amended Hamiltonian
\[|0\rangle_{\rm out}=e^{-iH}|0\rangle_{a}|0\rangle_{b}|0\rangle_{c}\;,\] (III.26)
now acting on a product state of early-and late-time modes (as we have assumed these modes commute). We can calculate the density matrix of radiation outside the black hole horizon by tracing the full density matrix over region II (the inside of the black hole that contains both modes \(b_{-k}\) and \(c_{k}\))
\[\rho_{\rm I}={\rm Tr}_{\rm II}|0\rangle_{\rm out}\langle 0|\;.\] (III.27)
Figure 3.2: Penrose diagram showing early-time modes (\(a_{k}\) and \(b_{-k}\)) and late-time modes \(c_{k}\). Late-time modes are scattered at the horizon with probability \({\cal R}\) (the black hole reflectivity). A perfectly absorbing black hole has \({\cal R}=0\).
Just as before, the anti-particle density matrix \(\rho_{-k|0}\) factorizes, and we find
\[\rho_{k|0}=\frac{1}{1+\beta_{k}^{2}}\sum_{n_{k}=0}^{\infty}\left(\frac{\beta_{k}^ {2}}{1+\beta_{k}^{2}}\right)^{n}|n_{k}\rangle\langle n_{k}|\;.\] (III.28)
This expression is formally identical to expression (III.11) when written in terms of \(\beta_{k}^{2}\), except that due to the term \(\gamma_{k}^{2}\) in the unitarity relation \(\alpha_{k}^{2}-\beta_{k}^{2}+\gamma_{k}^{2}=1\) we now have
\[\beta_{k}^{2}=\frac{\Gamma}{e^{\omega_{k}/T}-1}\;,\] (III.29)
where \(\Gamma\) is the absorptivity of the black hole (for this particular mode), with \(\Gamma=1-\gamma_{k}^{2}\). Thus, (III.28) is just the standard density matrix of spontaneous emission (a.k.a. Hawking radiation) including gray-body factors. It reduces to (III.11) in the unphysical limit \(\Gamma\to 1\).
Now that we have seen how we can use Sorkin's trick along with a simple beam-splitter term to recover the gray-body factor in Hawking's radiation (note that Hawking obtained this factor in a very different manner, by following particles from inside the black hole backwards in time into region I), we can study how the black hole reacts to particles that fall into the black hole at late times, in mode \(c_{k}\). Note that because these modes will _not_ be significantly red-shifted by the black hole, the corresponding Hawking radiation will be fully commensurate with the size of the black hole and a "transplanckian" problem is avoided.
I will now construct the outgoing state \(|\psi\rangle_{\rm out}\) when \(m_{k}\) late-time particles are directed at the horizon (all in the same mode \(k\)),
\[|\psi\rangle_{\rm out}=e^{-iH}|m_{k}\rangle_{\rm in}\;,\] (III.30)
with \(H\) from (III.22) and where \(|m_{k}\rangle_{\rm in}=|0\rangle_{a}|0\rangle_{b}|m_{k}\rangle_{c}\), i.e., the state with \(m_{k}\) incoming particles in mode \(c_{k}\) on \(\mathscr{I}^{-}\). Using Eq. (III.21), we can immediately calculate the number of particles emitted into mode \(A_{k}\) if \(m_{k}\) were incident in mode \(c_{k}\), since
\[{}_{\rm in}\langle\psi|A_{k}^{\dagger}A_{k}|\psi\rangle_{\rm in}=\beta_{k}^{2 }+m_{k}\gamma_{k}^{2}\;.\] (III.31)
Using \(\gamma_{k}^{2}=1-\alpha_{k}^{2}+\beta_{k}^{2}\), we can see that we have recovered Bekenstein and Meisels' result (II.7), using (III.29) and identifying \(\gamma_{k}^{2}=1-\Gamma\). Furthermore, we can also recover
\[\Gamma=\Gamma_{0}(1-e^{-\omega_{k}/T_{\rm BH}})\;,\] (III.32)
by noting that \(\alpha_{k}^{2}=\Gamma_{0}\) actually sets the reflectivity of the black hole, that is, \(\mathcal{R}\) in Fig. 3.2 is just \(1-\alpha_{k}^{2}\). We can also see that on account of Eqs. (III.23-III.25), we have
\[\left(\frac{g_{k}^{\prime}}{g_{k}}\right)^{2}=\frac{\gamma_{k}^{2}}{\beta_{k}^ {2}}=1+\frac{1-\alpha_{k}^{2}}{\alpha_{k}^{2}}e^{\omega_{k}/T_{\rm BH}}\;,\] (III.33)
which implies that because the reflectivity \(\mathcal{R}=1-\alpha_{k}^{2}=\Gamma_{0}\leq 1\), the parameter \(g_{k}^{\prime}\) that sets the scattering rate for modes of energy \(\omega_{k}\) is bounded from below by \(g_{k}\), which is itself bounded from above by \(1\) since \(g_{k}\sim e^{-\omega_{k}/T_{\rm BH}}\).
This explains why setting \(\Gamma=1\) could never be consistent. As previously discovered by Bekenstein and Meisels using maximum entropy arguments only (Bekenstein and Meisels, 1977), the total absorptivity of the black hole \(\Gamma\) is the product of the "bare" absorptivity \(\Gamma_{0}\) times the factor \(1-e^{-\omega_{k}/T_{\rm BH}}\) due to stimulated emission, so that \(\Gamma<1\) always. A "classical" black hole has \(\Gamma_{0}=1\) (for incoming \(s\)-waves) but a quantum black hole can never be totally black, and as a consequence information is never lost in black hole dynamics. To make this statement quantitative, we should proceed to calculate the capacity of quantum black holes to transmit classical information, using the tools of quantum information theory.
## IV Classical information capacity of quantum black holes
The primary application of the classical theory of information due to Shannon (Shannon, 1948) was to quantify the capacity of channels to transmit information. One of the surprising results of that theory is that it is possible to send information through a channel with perfect accuracy even when there is substantial noise that affects the transmitted
message. For example, we might imagine the problem of communicating by throwing copies of two different books into a fire repeatedly: books that have exactly the same number of words, and weigh the same6. The two different books allow us to encode information into a sequence of zeros and ones (denoting which of the books we choose to incinerate). But how could an individual that can only observe the flames extract the information encoded in the series of books? The answer is that the information is not lost in the flames: the two different books when burning give rise to slightly different ways in which the flames and smoke behave when turning the pages to ashes (due to the different ways in which words are arranged on the pages). While we certainly do not have the technology to detect these differences among the much more pronounced variation they are embedded in, this information is in principle accessible. And as a consequence, error correction techniques will allow us to retrieve this information with perfect accuracy. One way to do this is to coat the pages of the books with different pyrotechnic colorants for each, making information retrieval trivial.
Footnote 6: I choose books that differ only in their words but not in their mass to connect with a paradox where, throwing these books into a black hole, the identity of the books would be lost because the increase in the mass of the black hole is the same for either book.
Classical black holes, however, are very different from fire. If no marker carrying the information is available to the observer (as it is in the case of communication through fire), no amount of error correction can recover the information. Because Schwarzschild black holes (classical, non-spinning, neutral black holes) are only characterized by their mass, absorbing two books with different writing but equal mass would give rise to the same exact final state. Such dynamics would dictate that two separate phase-space trajectories merge into one, which is a direct violation of microscopic reversibility: an abomination.
In 1973, Bekenstein painted the picture of information loss that is still being discussed today (Bekenstein, 1973):
"We imagine a particle goes down a (...) black hole. As it disappears some information is lost with it".
In fact, Bekenstein estimated that the information loss must be at least one bit, which is the amount of uncertainty created by not knowing whether the particle still existed behind the horizon or not. It is now clear that this line of thinking is fundamentally rooted in a misunderstanding of the concept of information in physics, namely, that information is necessarily attached to the object that encodes it. Information, rather, is a relative state between an observer and a system (Adami, 2016), and more importantly, is _not_ tied to the encoding body. In the case of the fiery communication described earlier, the information was first encoded in ones and zeros, then translated to two kinds of books, and then retrieved into a list of ones and zeros after the color of the flames was decoded. In communication through black holes, information is first encoded in particles (for example, the polarization of photons, or particle/anti-particle identity). The particles are absorbed at the event horizon, which stimulates the emission of _exact copies_ of those particles outside (and inside) the horizon (I will discuss how such a process complies with the no-cloning theorem in section IV.2). The process of stimulated emission copies the information from the absorbed particle and _transfers_ it to other carriers outside the black hole, where they are accessible to an observer so that separate phase-space trajectories remain separate.
### Classical late-time capacity
We can follow the fate of information interacting with a black hole by using a _preparer_ to encode information into late-time quantum states that are then sent into the event horizon. For example, we can imagine a preparer \(X\) who sends packets of \(n\) particles with probability \(p(n)\). The internal state of the preparer can be described by the density matrix \(\rho_{X}=\sum_{n}p(n)|n\rangle\langle n|\), with entropy \(S(\rho_{X})=H[p]=-\sum_{n}p(n)\log p(n)\). After the particles interact with the black hole, our preparer is now correlated with it because the final state is now the density matrix
\[\rho_{\text{I,II},X}=\sum_{n}p(n)|\psi_{n}\rangle\langle\psi_{n}|\otimes|n \rangle_{X}\langle n|\;,\] (IV.1)
where \(|\psi_{n}\rangle=e^{-iH}|0,0,n\rangle_{abc}\). Tracing over the black hole interior (region II), we obtain the joint density matrix of the radiation field in region I and the preparer \(X\):
\[\rho_{\text{I},X}\ =\ \sum_{n}p(n)\rho_{k|n}\otimes|n\rangle_{X}\langle n|\] (IV.2)
with entropy
\[S(\rho_{\mathrm{I},X})=H[p]+\sum_{n}p(n)S(\rho_{k|n})\] (IV.3)
owing to the block-diagonal form of Eq. (IV.2). The mutual entropy between radiation field and preparer is then simply given by
\[H(X;\mathrm{I}) = S(\rho_{\mathrm{I}})+S(\rho_{X})-S(\rho_{\mathrm{I},X})\] (IV.4) \[= S\left(\sum_{n}p(n)\rho_{k|n}\right)-\sum_{n}p(n)S(\rho_{k|n})\;,\]
which turns out to be the Holevo bound (Holevo, 1973). The latter constitutes the maximum amount of classical information that can be extracted from a quantum measurement, and its maximum (over the probability distribution of signal states) is the capacity of a quantum channel to transmit classical information (Holevo, 1998). In other words, the black hole channel capacity _is_ the Holevo capacity, with Hawking radiation playing the role of channel noise.
As a simple example, consider a binary channel where the preparer either sends no particle (with probability \(1-p\)) or one particle (with probability \(p\)) into the black hole. As Hawking radiation does not depend on this decision, the information would be lost if stimulated and scattered particles were not present in the total radiation field outside the horizon. In order for information to be recovered, an outside observer must be able to make measurements on the radiation field that betray the preparer's decision. If we send in \(n\) particles in mode \(k\) (to signal a logical '1') then we obtain \(\rho_{k|n}\) in region I, which is diagonal in the number basis (its general expression is given in the Appendix). For a single incident particle,
\[\rho_{k|1} = \frac{\alpha_{k}^{2}}{(1+\beta_{k}^{2})^{2}}\sum_{n_{k}=0}^{ \infty}\bigg{(}\frac{\beta_{k}^{2}}{1+\beta_{k}^{2}}\bigg{)}^{n_{k}}\!(1+n_{k }\frac{\gamma_{k}^{2}}{\alpha_{k}^{2}\beta_{k}^{2}})|n_{k}\rangle\langle n_{k }|\;.\] (IV.5)
This density matrix is clearly non-thermal except in the unphysical limit \(\gamma_{k}=0\) (full absorption, \(\Gamma=1\)), which would require the probability \(\Gamma_{0}\) of absorbing a single quantum to exceed 1.
Let us calculate the capacity \(\chi=\max_{p}H(X;\mathrm{I})\) for the worst-case scenario: a perfectly black hole (no reflection, i.e., \(\alpha_{k}^{2}=1\)). This is the worst-case scenario because any radiation reflected at the horizon would allow us to recover the information sent in even if the vast majority of particles disappear behind the horizon. That observation illustrates the true magic of Shannon's theorem: information is only lost if absolutely _no_ trace of the information is left for us to decode.
For the non-reflecting black hole, the mutual entropy \(H(X;\mathrm{I})\) is maximized at \(p=1/2\), and the capacity can be written in terms of the parameter \(z=e^{-\omega_{k}/T_{\mathrm{BH}}}\) as (Adami and Ver Steeg, 2014)
\[\chi=1-\frac{1}{2(1+z)^{3}}\sum_{m=0}^{\infty}\!\left(\frac{z}{1+z}\right)^{m }\!(m+1)(m-2z)\log(m+1)\;.\] (IV.6)
The capacity \(\chi\) is positive for all values \(0\leq z\leq 1\), which implies that the channel's capacity never vanishes, and information can always be recovered with perfect accuracy (see Fig. 4.1). As the black hole shrinks (and the Hawking temperature increases), the capacity to transmit information (using particles with a given energy \(\omega_{k}\)) decreases. But a decreased capacity does not indicate information loss. Instead, it tells us what the maximum rate of _perfectly accurate_ information transmission is. Thus, a capacity of 0.8 bits, for example, indicates that if we are sending in information at 1 bit per second, we can only retrieve perfectly accurate information at the rate of at most 0.8 bits per second, on account of the error correction necessary to protect the information. Non-optimal schemes of error correction will yield smaller rates of error-free information transmission.
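The capacity (IV.6) is easy to evaluate numerically. The sketch below is my own illustration: I truncate the series at a finite \(m\) and interpret the logarithm as base 2, since the capacity is quoted in bits. It reproduces the behavior of the solid curve in Fig. 4.1 and confirms that \(\chi\) remains positive throughout \(0<z\leq 1\).

```python
# Numerical evaluation of the late-time capacity (IV.6); the infinite sum is
# truncated at m_max, and log is taken as log base 2 (capacity in bits).
import numpy as np

def chi(z, m_max=2000):
    m = np.arange(m_max)
    series = (z / (1 + z))**m * (m + 1) * (m - 2 * z) * np.log2(m + 1)
    return 1 - series.sum() / (2 * (1 + z)**3)

for z in (0.1, 0.5, 0.9, 0.99):
    print(z, chi(z))                      # all values lie strictly between 0 and 1
```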
Incidentally, using the conditional probability for perfect absorption of _early-time_ modes in Eq. (III.20) still leads to a non-vanishing capacity, though significantly reduced (see Fig. 4.1). It turns out that this early-time capacity was previously (and independently) derived for the Unruh channel (the case of accelerated observers, which is another case of quantum amplification). When replacing the black hole interaction strength \(g_{k}\) with the acceleration \(r\) (and keeping \(g_{k}^{\prime}=g_{k}\) to ensure zero reflectivity), the black hole and the Unruh capacities are identical (Bradler, 2011). In both cases, using the probability distribution \(p(n|0)\) for spontaneous emission (pure Hawking radiation) yields a vanishing capacity. This quantifies what has been conjectured many times before: Hawking radiation carries no signal, no information. It is only noise.
The capacity of the binary channel using an encoding of \(n\) particles as the logical '1', and either 0 particles or \(n\) anti-particles as the logical '0' exceeds the one shown in Fig. 4.1 (as the adventurous reader can confirm using the expressions in the Appendix) because these methods of encoding are significantly more robust to noise, that is, to the interference of the stimulated emission signal with the spontaneously emitted Hawking radiation. We will use such an encoding in the following section when discussing in which way a black hole can be understood in terms of quantum cloning machines.
### Black holes are quantum cloning machines
I wrote earlier that the stimulated emission process essentially "clones" the incoming particles so that these copies are available outside of the event horizon and information is not lost. Perfect cloning is, of course, forbidden in quantum mechanics: the "quantum no-cloning theorem" is a direct consequence of the linearity of quantum mechanics (see, for example, [22, 23]). The proof of this theorem is simple, and worth repeating here. Suppose we define a cloning operator \(U_{C}\) in quantum physics so that it will copy arbitrary quantum states \(|\phi\rangle\)
\[U_{C}|\phi\rangle|0\rangle=|\phi\rangle|\phi\rangle\] (IV.7)
onto a prepared ancilla state \(|0\rangle\). After the copying operation, the original quantum state is in a product state with its copy, as desired. The action of this operator on a quantum superposition \(\sigma|\phi\rangle+\tau|\psi\rangle\) (with complex \(\sigma\) and \(\tau\) that satisfy \(|\sigma|^{2}+|\tau|^{2}=1\)) does not produce a product state of that superposition, however, as
\[U_{C}(\sigma|\phi\rangle+\tau|\psi\rangle)|0\rangle=\sigma|\phi\rangle|\phi \rangle\ +\tau|\psi\rangle|\psi\rangle\neq(\sigma|\phi\rangle+\tau|\psi\rangle)(\sigma| \phi\rangle+\tau|\psi\rangle)\] (IV.8)
instead. In fact, this hypothetical "cloning operator" \(U_{C}\) in (IV.7) turns out to be an "entanglement operator", and is typically (in the space of qubits) the "controlled NOT" (CNOT) operator \(U_{C}=P_{0}\otimes 1+P_{1}\otimes\sigma_{x}\), where \(P_{0}\) and \(P_{1}\) project on the respective states, and the Pauli matrix \(\sigma_{x}\) flips a bit. In the following I discuss the cloning of binary states (quantum bits, or qubits), but the formalism can be extended to quantum states of arbitrary dimension [11].
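The argument is easily illustrated with two qubits. The following small example (standard textbook material, my own code) shows that the CNOT "cloner" copies the basis states but maps the superposition \((|0\rangle+|1\rangle)/\sqrt{2}\) onto an entangled state rather than onto two copies:

```python
# The CNOT "cloning" operator copies |0> and |1>, but not their superposition.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero, one = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

def copy(state):                          # apply U_C to |state>|0>
    return CNOT @ np.kron(state, zero)

print(np.allclose(copy(zero), np.kron(zero, zero)))   # True: |0> is cloned
print(np.allclose(copy(one), np.kron(one, one)))      # True: |1> is cloned
print(np.allclose(copy(plus), np.kron(plus, plus)))   # False: superposition is not
```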
While perfect cloning of arbitrary states is not possible, it is of course possible to clone "known" (that is, prepared) states, with the help of an operator as described above that uses projectors onto those known basis states. It is, however, also possible to make "approximate" copies of arbitrary quantum states, using unitary operators that are referred to as "quantum cloning machines". A quantum cloning machine is designed to maximize the probability that a quantum state \(|\psi\rangle\) (or, in general, \(N\) identically prepared states \(|\psi\rangle^{\otimes N}\)) is cloned into a state \(|\psi_{\rm out}\rangle=U|\psi_{\rm in}\rangle|0\rangle|R\rangle\) (or \(M\) copies of that state, with \(M>N\)) with a unitary transformation \(U\) acting on the input state \(|\psi_{\rm in}\rangle\), a "blank" state \(|0\rangle\) that will ultimately hold those multiple copies of the cloned state, and an ancilla state \(|R\rangle\). The formalism of quantum cloning machines was introduced by Buzek and Hillery [12], and has given rise to a large body of work (see, e.g., the reviews (Cerf and Fiurasek, 2006; Scarani _et al._, 2005)).
Figure 4.1: Capacity \(\chi\) for a binary non-reflecting black hole channel as a function of the parameter \(z=e^{-\omega_{k}/T_{\rm BH}}\). \(z=0\) corresponds to an infinitely massive (cold) black hole, while \(z\to 1\) as the mass of the black hole tends to zero. The solid line represents the late-time capacity (IV.6). The dashed line is the capacity for early-time modes, which is obtained from the late-time capacity by rescaling \(z\rightarrow\frac{z}{1-z}\)[1]. Note that since \(g_{k}\approx z\), we can see this plot also as depicting the dependence of the information transmission capacity on the mode coupling strength in the black-hole Hamiltonian (III.5).
For the general case of an \(N\to M\) cloning machine (see Fig. 4.2), the accuracy of the cloning process is determined by a _fidelity_ measure, defined as the expectation value of one copy of the \(M\) cloned states \(\rho_{\rm out}^{j}\) evaluated in the basis of the input state
\[F_{j}={}_{\rm in}\langle\psi|\rho_{\rm out}^{j}|\psi\rangle_{\rm in}\;.\] (IV.9)
The largest possible \(N\to M\) cloning fidelity of universal quantum cloning machines (devices that copy any input state \(|\psi\rangle\) with the same fidelity) is (Bruss _et al._, 1998; Gisin and Massar, 1997)
\[F_{\rm opt}=\frac{M(N+1)+N}{M(N+2)}\;.\] (IV.10)
For example, the optimal (and therefore maximal) fidelity of a \(1\to 2\) approximate cloning machine is \(F_{1\to 2}=5/6\). In the limit \(M\to\infty\), the cloning fidelity approaches \(\frac{N+1}{N+2}\), which happens to be the fidelity of the best possible state preparation via state estimation (using classical information only) from a finite quantum ensemble (Massar and Popescu, 1995). Quantum cloning machines that reach the cloning limit are called "optimal cloners", and can be constructed using simple quantum optical elements (parametric amplifiers and beam splitters) for discrete states such as polarization (Simon _et al._, 2000), but also for continuous variables (Braunstein _et al._, 2001; Fiurasek, 2001).
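A trivial check of (IV.10) and of its limits (my own; the parameter choices are arbitrary):

```python
# Verify the 1->2 value 5/6 and the M -> infinity limit (N+1)/(N+2) of Eq. (IV.10).
from fractions import Fraction

def F_opt(N, M):
    return Fraction(M * (N + 1) + N, M * (N + 2))

print(F_opt(1, 2))                        # 5/6
print(float(F_opt(3, 10**6)), 4 / 5)      # approaches (N+1)/(N+2) for N = 3
```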
Because black holes stimulate the emission of copies of accreting particles, we can seek to apply the formalism of quantum cloning machines to black holes to answer the question: How well do black holes clone quantum states? We will see that the answer is "almost perfectly".
Since the cloning transformation that takes \(|\psi\rangle_{\rm in}\) into \(|\psi\rangle_{\rm out}\) is a unitary transformation, our candidate for the cloning operator is \(U=e^{-iH}\), which transforms late-time incoming particles into outgoing radiation with Eq. (III.22) as the Hamiltonian. But before we do this, let us first study a much simpler case: the fully absorbing (\(\Gamma=1\)) black hole with only early-time particles, described by Eq. (III.5). Understanding this case will set important limits to allow us to better understand the general cloner based on (III.22).
Let us apply \(U=e^{-iH}\) to an arbitrary incoming quantum state \(|\psi_{\rm in}\rangle\) (a state that includes the blank and ancilla states), where \(|\psi\rangle\) encodes information in the "particle-antiparticle" basis using a so-called "dual rail" encoding. In a dual rail encoding, the logical one \(|1\rangle_{L}\) is encoded with \(N\) particles and zero anti-particles in mode \(a\) (in region I outside the horizon), while for the logical zero particles and anti-particles are interchanged, i.e.,
\[|1\rangle_{L} = |N,0\rangle_{a}|0,0\rangle_{b}\] (IV.11) \[|0\rangle_{L} = |0,N\rangle_{a}|0,0\rangle_{b}\;.\] (IV.12)
Naturally, sending information to future infinity using particles or anti-particles that are just outside of the black hole horizon after the formation of the black hole is not feasible as we remarked above before introducing late-time modes: this is purely an exercise to set some limits. We will even consider the limit where information is encoded in modes just inside the black hole horizon (the \(b\) modes) for the same reason.
We can construct an arbitrary quantum state from the logical states (IV.11-IV.12) by writing
\[|\psi\rangle_{\rm in}=\sigma|1\rangle_{L}+\tau|0\rangle_{L}\;.\] (IV.13)
Figure 4.2: Schematics of an \(N\to M\) quantum cloning machine (\(M>N\)). The machine receives as input the product of \(N\) identical copies of the quantum state, supplemented by an internal state \(R\) and \(M-N\) states in a prepared blank state \(|0\rangle\). The \(M\) approximate clones are mixed states \(|\tilde{\psi}\rangle\langle\tilde{\psi}|\), while the \(M-N\) anti-clones are the output of an approximate logical NOT transformation of the clones, indicated by \(\overline{|\tilde{\psi}\rangle\langle\tilde{\psi}|}\).
However, because the quantum cloner defined by the unitary mapping \(U=e^{-iH}\) using Hamiltonian (III.5) is "rotationally invariant" (the action of the cloning machine does not depend on the state), we can simply choose \(\sigma=1\) and \(\tau=0\), that is, we will attempt to clone \(N\) particles.
Writing \(|\psi_{\rm in}\rangle=|N,0\rangle_{a}|0,0\rangle_{b}=|1\rangle_{L}^{\otimes N}\) we obtain using (III.5)
\[|\psi_{\rm out}\rangle=e^{-iH}|\psi_{\rm in}\rangle=\frac{1}{\alpha_{k}^{2+N}}\sum_{j,j^{\prime}=0}^{\infty}e^{-(j+j^{\prime})\frac{\omega}{2T_{\rm BH}}}\sqrt{{j+N\choose N}}|j+N,j^{\prime}\rangle_{a}|j^{\prime},j\rangle_{b}\;.\] (IV.14)
In order to describe an \(N\to M\) cloning machine and calculate its fidelity, we need to fix the number of particles and antiparticles in region I to \(M\). In quantum optics, this "postselection" to a fixed number of clones is achieved by entangling the quantum state with a trigger signal and then measuring the trigger. If \(M\) particles are detected in the trigger, then we know that only the component of (IV.14) with \(M\) clones survives. For black holes, we can perform the same operation, but must send the trigger towards future infinity (but away from the black hole horizon). Even though the quantum state reconstruction is conditioned on the trigger, it only uses the quantum states coming from the black hole.
Fixing the number of output clones to \(M\) reduces the state (IV.14) to
\[|\psi\rangle_{M}\sim\sum_{j=0}^{M-N}\sqrt{{M-j\choose N}}|M-j,j\rangle_{a}|j, M-N-j\rangle_{b}\;,\] (IV.15)
which, it turns out, is (up to normalization) _identical_ to the wavefunction emanating from the optical quantum cloner of Simon et al. (Simon _et al._, 2000) that achieves the optimal fidelity (IV.10). Thus, for quantum states sent into a black hole at early times so that they remain just outside the horizon, the black hole behaves as a universal, optimal, quantum cloning machine.
Let us now study what happens to quantum states _behind_ the horizon (modes \(b\)). Just as in the optical realization of the optimal universal cloner, the quantum state behind the horizon contains \(M-N\) _anticlones_ of the initial state \(|N,0\rangle_{a}\), that is, "inverted" states obtained from the initial states by the application of the optimal universal NOT gate (Buzek _et al._, 1999; Gisin and Popescu, 1999). The fidelity of these anticlones is
\[F_{\rm anti}=\frac{N+1}{N+2}\] (IV.16)
for each anticlone, which as we already saw is the fidelity of the best possible state preparation via state estimation (Massar and Popescu, 1995). This implies that the black hole has induced the _maximal_ disturbance on the states behind the horizon.
Consider now instead the fidelity of cloning when \(N\)_anti_-particles are sent into mode \(b\). We are now following anti-particles that are traveling just _inside_ the horizon: \(|\psi\rangle_{\rm in}=|0,0\rangle_{a}|0,N\rangle_{b}\). Thinking of these modes as impinging on the black hole horizon from _inside_ the black hole (a horizon that looks to those modes as a mirror, that is, a _white_ hole with \(\Gamma=0\)), we expect those modes to stimulate clones of these anti-particles inside the black hole, but also anticlones on the outside.
Let us calculate the fidelity of those anticlones generated on the outside region. The wavefunction is
\[|\psi\rangle_{M}\sim\sum_{j=0}^{M}\sqrt{{j+N\choose N}}|j,M-j\rangle_{a}|M-j,j +N\rangle_{b}\;,\] (IV.17)
which implies the probability to observe \(j\) particles and \(M-j\) antiparticles outside the horizon
\[p(j,M-j|N)=\frac{{j+N\choose N}}{\sum_{j=0}^{M}{j+N\choose N}}\;.\] (IV.18)
As can easily be checked using (IV.9), the fidelity of these anticlones is also \((N+1)/(N+2)\), that is, it is equal to the fidelity of the anticlones when sending in \(|N,0\rangle_{a}\). But the anticlones of the antiparticle states are, of course, clones of the particle states!
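This identity is easy to confirm numerically. In the sketch below (my own; I read the single-copy fidelity (IV.9) for the dual-rail encoding as the mean fraction of particle-type quanta among the \(M\) outputs), the distribution (IV.18) indeed yields \((N+1)/(N+2)\) exactly:

```python
# The distribution (IV.18) gives a single-copy fidelity of exactly (N+1)/(N+2).
from math import comb
from fractions import Fraction

def fidelity(N, M):
    weights = [comb(j + N, N) for j in range(M + 1)]      # Eq. (IV.18), unnormalized
    total = sum(weights)
    return sum(Fraction(j, M) * Fraction(w, total) for j, w in enumerate(weights))

for N, M in [(1, 5), (2, 7), (3, 11)]:
    print(N, M, fidelity(N, M), Fraction(N + 1, N + 2))   # identical fractions
```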
In summary, sending in \(N\) particles in mode \(a\) creates particle clones in region I with _optimal fidelity_. These clones could in principle be entangled with particles that the \(N\) incoming particles were entangled with before they were sent into the black hole, very much like in the quantum optical case, where the initial particles are entangled with
a trigger (Simon _et al._, 2000) (we will exploit this later when calculating the quantum capacity of black holes in section V). Sending in \(N\) antiparticles in mode \(b\) towards the horizon (which, to those modes, looks like a perfectly reflecting mirror) gives rise to _classical_ particle clones in region I instead. They are classical because they cannot share entanglement with any particles that the \(N\) antiparticles inside the black hole could have been entangled with7. We note that those anti-particles traveling towards a white hole horizon can be seen as particles moving away from the horizon (backward in time) after absorption on a non-reflecting horizon. Thus, it is plausible that white holes are just time-reversed black holes. Box 1 makes this analogy more clear, and shows how stimulated emission saves microscopic time-reversal invariance.
Footnote 7: The classical roots of the probability \(\frac{N+1}{N+2}\) are also apparent by noting that this is Laplaceβs _rule of succession_: the likelihood that an event takes place given that we have \(N\) successive observations of it.
**Box 1: Microscopic Time-Reversal Invariance in the Presence of Black Holes**
Newtonian gravity is invariant under time-reversal at the micro-level: every particle trajectory in a gravitational field, when time-reversed, gives rise to another plausible particle trajectory. This is illustrated schematically in Fig. 4.3(a) (left panel), where for the sake of simplicity the effect of the gravitational field on a particle is shown as if a perfect mirror reflected the particle. Under time reversal, the trajectory simply reverses (middle panel). If we use charge-conjugation symmetry, we see that this trajectory is equivalent to that of an anti-particle moving backwards in time (anti-particles are depicted as dashed lines). Comparing the first and the last panel, we note that time reversal is equivalent to CP symmetry, thus embodying the celebrated CPT theorem.
However, classical black holes violate this theorem. If instead of a perfect mirror the particle encounters a perfectly absorbing black hole horizon (Fig. 4.3(b), left panel) reversing the arrow of time (middle panel) does not produce a time-reversed picture of the left panel, since a particle from inside of the black hole cannot escape to the outside. Instead, it is "reflected" at the horizon, that is, seen from inside the black hole the horizon must act like the perfect mirror in Fig. 4.3(a). Replacing particles by anti-particles moving backwards in time produces the right panel, which clearly is not the anti-particle version of the left panel, thus breaking CPT invariance. Adding Hawking radiation (spontaneous emission of particles) to this picture does not restore CPT invariance. However, the stimulated emission of radiation does. Fig. 4.3(c) shows the stimulated particle/anti-particle pair in red (for illustrative purposes I disregard factors of \(\beta^{2}\) here) that must accompany the absorption process (the case \(\Gamma=1\) is shown here). Time-reversing this trajectory produces the middle panel, where the particle from inside of the black hole indeed reflects at the horizon, but it also stimulates a particle/anti-particle pair outside of the horizon (shown in red). In fact, this is the white-hole stimulated emission process described in the main text (\(\Gamma=0\)). Re-interpreting particles moving backwards in time as anti-particles moving forwards in time produces the right panel in Fig. 4.3(c), which indeed is the same process as in the left panel, only with particles replaced with anti-particles. Thus, stimulated emission of pairs restores CPT invariance. The spontaneous emission of pairs (Hawking radiation) is not shown in these diagrams as it has no influence on CPT invariance: it only serves to safeguard the no-cloning theorem.
Figure 4.3: Schematic representation of particle and anti-particle trajectories in the presence of white- and black hole horizons. Stimulated particles/anti-particles are rendered in red, solid lines denote particle trajectories, while dashed lines illustrate anti-particles. (a) Classical trajectory reflected at a mirror (or falling back under the influence of gravity), its time-reversed trajectory, and the equivalent trajectory in which particles moving forward in time are replaced by anti-particles moving backwards in time. (b) Classical trajectory of a particle absorbed by a black hole horizon, its time-reversed trajectory, and the equivalent process. (c) Quantum trajectories including stimulated emission effects.
Let us now analyze the more realistic cloning scenario where we send quantum states into the already-formed black hole at late times, using modes \(c\) (see Fig. 3.2) that are strongly blue-shifted with respect to the early-time modes \(a\) and \(b\). Because we also use the beam-splitter term with strength \(g_{k}^{\prime}\geq g_{k}\), we are studying "gray holes" with arbitrary reflectivity.
We first discuss \(1\to M\) cloning. Because the full Hamiltonian (III.22) is also rotationally invariant, we can again restrict ourselves to study cloning of one particular state. To clone the state \(|1\rangle_{L}=|0,0\rangle_{a}|0,0\rangle_{b}|1,0\rangle_{c}\), for example, we obtain (I omit the subscript \(k\) in the particle numbers and coefficients in the following, as we send in modes of only one particular frequency):
\[\rho_{a}=\text{Tr}_{bc}\left(U|1\rangle_{L}\langle 1|U^{\dagger}\right)=\rho_{ k|1}\otimes\rho_{-k|0}\;,\] (IV.19)
where \(\rho_{k|1}\) is given by Eq. (IV.5) derived earlier and \(\rho_{-k|0}\) is given by the standard result (III.28).
With (III.28) and (IV.5), the \(1\to M\) cloning fidelity can be calculated as before (with \(\xi=\frac{\gamma^{2}}{\alpha^{2}\beta^{2}}\))
\[F_{1\to M}=\frac{\sum_{j=0}^{M}\frac{M-j}{M}p(M-j|1)p(j|0)}{\sum_{j=0}^{M}p(M- j|1)p(j|0)}=\frac{3+\xi+2\xi M}{3(2+\xi M)}\;.\] (IV.20)
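One can verify the closed form (IV.20) directly against the defining sum built from (III.28) and (IV.5). The snippet below is my own illustration, with \(\beta_{k}^{2}\) and \(\gamma_{k}^{2}\) chosen arbitrarily subject to the unitarity condition:

```python
# Compare the explicit sum over p(M-j|1) p(j|0) with the closed form (IV.20).
import numpy as np

beta2, gamma2 = 0.3, 0.5
alpha2 = 1 + beta2 - gamma2               # unitarity: alpha^2 - beta^2 + gamma^2 = 1
xi = gamma2 / (alpha2 * beta2)
q = beta2 / (1 + beta2)

def p0(n):                                # spontaneous emission, Eq. (III.28)
    return (1 - q) * q**n

def p1(n):                                # one incident particle, Eq. (IV.5)
    return alpha2 / (1 + beta2)**2 * q**n * (1 + n * xi)

M = 7
num = sum((M - j) / M * p1(M - j) * p0(j) for j in range(M + 1))
den = sum(p1(M - j) * p0(j) for j in range(M + 1))
closed = (3 + xi + 2 * xi * M) / (3 * (2 + xi * M))
print(num / den, closed)                  # the two agree
```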
Let us investigate this result in a number of physical limits. As the black hole becomes more and more reflective, \(\Gamma_{0}=\alpha^{2}\to 0\), which implies \(\xi=\frac{e^{\omega/T_{\text{BH}}}(1-\Gamma)}{\Gamma_{0}^{2}}\to\infty\). In this case, the fidelity (IV.20) approaches the optimal value
\[\lim_{\Gamma_{0}\to 0}F_{1\to M}=\frac{2}{3}+\frac{1}{3M}\;,\] (IV.21)
as is seen in Fig. 4.4. For arbitrary \(N\), the limit is exactly equal to the Gisin-Massar optimal fidelity (IV.10) of an \(N\to M\) cloning machine. We can recognize this result as the special case we treated earlier: If the black hole perfectly reflects incoming states, the black hole behaves just as if early-time modes (\(a\)-modes) were traveling just outside the horizon (except for the redshift). Because those modes (by definition) never enter the black hole, this is akin to a perfectly reflecting black hole.
Another limit of note is that of full absorption: \(\Gamma_{0}\to 1\). In that case \(\xi\to 1\) and \(F_{1\to M}\to 2/3\), the fidelity of a classical cloning machine. It can be shown in general that for full absorption, the \(N\to M\) cloning fidelity is equal to \((N+1)/(N+2)\) independently of \(\omega/T_{\text{BH}}\), which is the result we obtained earlier when sending \(N\) antiparticles in mode \(b\) directly behind the horizon. This is again not surprising, as the absorption of \(c\)-modes stimulates the emission of \(b\) anti-modes behind the horizon, which in turn give rise to anticlones of the antiparticles, that is, clones. But they must be "classical" clones, so the fidelity is that of state estimation by classical means.
While in the limit \(\Gamma_{0}\to 1\) the best we can do to reconstruct the quantum state is to make classical measurements that allow us to optimally estimate the quantum state, note that in the limit \(N\to\infty\) the probability to do this correctly tends to one, implying that the quantum state information can be reconstructed with arbitrary accuracy. In a sense, this result mirrors a result from calculating the classical capacity of the black hole channel, where one can show (Adami and Ver Steeg, 2014) that in the limit \(N\rightarrow\infty\) the capacity of the quantum black hole channel to transmit classical information becomes equal to the noiseless channel capacity, even for full absorption.
Figure 4.4: Cloning fidelity \(F_{1\to M}\) of the quantum black hole as a function of the number of copies \(M\), for different values of the quantum absorption probability \(\Gamma_{0}\) and a fixed \(z=e^{-\omega/T_{\text{BH}}}=0.01\). Adapted from (Adami and Ver Steeg, 2015).
In the previous discussion we had kept the ratio between the mode energy \(\omega\) and the Hawking temperature of the black hole constant (by keeping \(z=e^{-\omega/T_{\rm BH}}\) constant). In the limit of large black holes (where the Hawking temperature approaches zero), the limit \(\omega/T_{\rm BH}\rightarrow\infty\) implies \(\xi\rightarrow\infty\) (as long as \(\Gamma_{0}<1\)), and we again recover the optimal universal quantum cloning fidelity (IV.21), as seen in Fig. 4.5. Given that even modest-sized black holes have \(\omega/T_{\rm BH}\geq 10\), most black holes are therefore nearly-optimal universal quantum cloners, unless the absorption probability is exactly equal to 1. These results also hold for \(N\to M\) cloning machines. Just as in the case \(N=1\), the black hole cloner approaches the optimal cloner in the limit \(T_{\rm BH}\to 0\) or \(\Gamma_{0}\to 0\), and turns into a classical cloning machine in the limit \(\Gamma_{0}\to 1\) and for \(M\rightarrow\infty\).
Now that we have seen in which way the black hole is a nearly optimal quantum cloning machine, we can turn our attention to the _quantum_ channel aspects of the black hole. In the previous section we focused our attention on how classical information fares when sent into the black hole horizon. But in our discussion of quantum cloning, we clearly had the opportunity to study how entangled quantum states are affected by black holes. For example, in order to make sure that we measure exactly \(M\) copies of the initial quantum state at future infinity, it was necessary to entangle a late-time particle with another whose state we would measure. It turns out that the _quantum_ channel defined by the mapping (IV.14) is an example of so-called "cloning channels" (Bradler, 2011). More precisely, the black hole acts as a weighted ensemble of cloning machines. In the next section, we will study the fate of _quantum information_ (more precisely, quantum entanglement) that interacts with the black hole horizon. We should be mindful that there is no law of physics that prescribes that quantum entanglement must be preserved when interacting with a black hole. We will find that for the most part it is, but if the black hole is perfectly absorbing, it is not.
## V Quantum Information Capacity
Quantum information is a relatively new concept within the canon of theoretical physics. While classical information (defined in 1948) was a concept that was available to workers in the field of quantum gravity who worried about information conservation, quantum information theory rose to prominence in the mid 1990s spurred on by Peter Shor's discovery (Shor, 1994) that a quantum algorithm can factor numbers faster than any known classical algorithm. One of the central results of classical information theory is the calculation of the capacity of various channels to transmit information, and the results in the previous section borrow heavily from that theory. Understanding the transmission of _quantum_ information through a quantum channel requires an entirely different formalism, however, mainly because quantum information is something altogether different from classical information. As a consequence, we will see that for most channels it is not even possible to write down a closed expression for the capacity.
Classical information characterizes the relative state of two systems, specifically where one system can make predictions about the state of another. Quantum information is _entanglement_, that is, it is given by a quantum state relative to another. However, knowing this relative state does not make it possible for one system to make predictions about the other because entangled states are not separable: when two states are entangled they become one.
Figure 4.5: Cloning fidelity \(F_{1\to M}\) as a function of the number of copies \(M\), for different \(z=e^{-\omega/T_{\rm BH}}\) and a fixed quantum absorption probability \(\Gamma_{0}=0.95\). Adapted from (Adami and Ver Steeg, 2015).
For example, recall Eq. (III.9): the quantum state at future infinity (for the simplified situation without a beam splitter that would give rise to gray-body factors) with no initial particles present at past infinity,
\[|0\rangle_{\rm out}=e^{-iH}|0,0\rangle_{a}|0,0\rangle_{b}=\prod_{k=-\infty}^{ \infty}\frac{1}{\cosh^{2}g_{k}}\sum_{n_{k},n^{\prime}_{k}}e^{-(n_{k}+n^{\prime }_{-k})\omega_{k}/2T_{\rm BH}}|n_{k},n^{\prime}_{-k}\rangle_{a}|n^{\prime}_{k},n_{-k}\rangle_{b}\;.\] (V.1)
This is a highly entangled state with both particles and anti-particles present behind and in front of the horizon, because the Bogoliubov transformation is, at heart, an entangling operation. Now let us study what happens to entangled states interacting with black holes.
When we say that we want to "send quantum entanglement through a quantum channel", what we mean is that the entanglement that one party has with a quantum system is _transferred_ to another party. Say the two parties are called "Alice" and "Bob", and Alice is entangled with a reference system called "R". To simplify things further, suppose Alice's state is a qubit. In that case we can without loss of generality write the entangled state between Alice's qubit and R as
\[|\psi\rangle_{\rm in}=\sigma|0\rangle_{A}|0\rangle_{R}+\tau|1\rangle_{A}|1 \rangle_{R}\;,\] (V.2)
with complex coefficients \(\sigma\) and \(\tau\) that satisfy the condition \(|\sigma|^{2}+|\tau|^{2}=1\). Now imagine that Bob is an inertial observer at future infinity, and Alice (for the present discussion) delivers her qubit into the forming black hole at past infinity. Is it possible for Bob to be entangled with R in the same way that Alice was? We will see in a moment that for the situation described here (namely a channel described by the unitary \(U=e^{-iH}\) with the Hamiltonian given by Eq. (III.5)), the answer is definitely "No": the quantum capacity of this channel vanishes. This, however, does not signal a breakdown of any law of physics: there are plenty of quantum channels with vanishing capacity. But understanding how black holes affect quantum entanglement is still an interesting question, which we now delve into.
Before we define and then calculate the quantum capacity of this channel, we need to discuss quantum channels more generally. It turns out that the mathematics of quantum channels is far more complex than that of classical channels, with many cases where it is not just impossible to calculate the capacity, we cannot even write down an expression for it! The reason for this is that the capacity of channels is defined asymptotically. Classically, the capacity is the maximum rate at which information can be sent through the channel with vanishing error rate, in the limit where the number \(n\) of uses of the channel goes to infinity. Because classically each use of the channel is independent of the previous use, such a limit is easily taken. In quantum physics, however, one message sent earlier can be entangled with a message sent later. Because these uses of the channel are now not independent anymore, the limit \(n\rightarrow\infty\) becomes highly nontrivial. Let us first define the channel.
We begin with the entangled state between Alice and R written as in (V.2). This state becomes the input to the unitary dynamics of the Bogoliubov transformation \(U=e^{-iH}\) with a suitable \(H\) (see Fig. 5.1).
After the action of \(U\), we ask whether the output of the channel, namely Bob's quantum state at future infinity, is now entangled with R in the same manner as Alice's qubit was. Note that the channel has two outputs: the recipient B, as well as a secondary
Figure 5.1: Quantum channel with entangled input state \(|\psi\rangle_{RA}\) (the entanglement between R and A is indicated by the dashed vertical line), and outputs B and E. Generally, the "environment" E is an unobserved variable that provides the noise in the channel. For the black hole quantum channel, A is at past infinity (or when discussing late-time signaling, A sends in her quantum state at future infinity) while B is an inertial observer at future infinity. The complementary channel output E is the inside of the black hole, region II in Fig. 3.1. The output of the joint channel is the pure state \(|\Psi\rangle_{RBE}\). The density matrices \(\rho_{RB}\) and \(\rho_{BE}\) are obtained from \(|\Psi\rangle_{RBE}\) by tracing over the unobserved output.
output denoted by E. This secondary output is called the "complementary" channel: it is where everything that does not make its way to B must go. In classical information theory, what does not make its way to the receiver is lost to the environment, and in principle someone who has access to it could reconstitute the information from it, that is, such an agent could "eavesdrop" on the channel (hence sometimes this observer is called "Eve"). One of the most important differences between classical and quantum channels is that it is strictly impossible to both have quantum information perfectly reconstructed by B and by E: it is forbidden by the no-cloning theorem.
Let us now return to the asymptotic property of channels. This is the second important distinction between classical and quantum channels. As mentioned earlier, the classical capacity is an asymptotic quantity: it is the rate at which information can be sent through the channel with arbitrary accuracy in the limit of \(n\to\infty\) uses of the channel. The single-use _quantum capacity_ for Alice to send a single qubit (her share of the entangled state \(|\psi\rangle_{RA}\)) to Bob is given by the maximal "coherent information", defined as (Barnum _et al._, 1998)
\[C_{1}(|\psi\rangle_{RA})=\max_{|\psi\rangle_{RA}}\bigl{[}S(B)-S(E)\bigr{]}\;,\] (V.3)
where \(S(B)\) is the von Neumann entropy of Bob's density matrix \(\rho_{B}=\mathrm{Tr}_{RE}(|\Psi\rangle_{RBE}\langle\Psi|)\) given by (von Neumann, 1927)
\[S(B)=-\mathrm{Tr}_{B}\,\rho_{B}\log\rho_{B}\;,\] (V.4)
and \(S(E)\) is the entropy of the complementary channel, defined in a similar manner. Note that Shannon entropy [for example, Eq. (II.3)] is simply a von Neumann entropy evaluated in the basis in which the density matrix is diagonal, defined by von Neumann 21 years before Shannon introduced his "uncertainty function"8.
Footnote 8: It is interesting in this context to note that it was von Neumann who suggested to Shannon to call his measure _entropy_ because (as recounted in (Tribus and McIrvine, 1971)) "your uncertainty function has been used in statistical mechanics under that name".
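To make the definition (V.3)-(V.4) concrete, the following short Python sketch computes \(S(B)-S(E)\) for a single use of a channel. The channel used here is a generic single-qubit amplitude-damping channel with damping \(p\), chosen purely as a stand-in for illustration; it is an assumption of this sketch and is not the black hole channel.

```python
import numpy as np

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

p = 0.2                                          # assumed damping probability
K = [np.array([[1, 0], [0, np.sqrt(1 - p)]]),    # Kraus operators of the stand-in channel
     np.array([[0, np.sqrt(p)], [0, 0]])]

# Stinespring isometry V: |a>_A -> sum_k (K_k|a>)_B |k>_E, as a (dim_B*dim_E, dim_A) matrix
V = np.zeros((4, 2))
for k in range(2):
    for b in range(2):
        for a in range(2):
            V[2 * b + k, a] = K[k][b, a]

# Alice's qubit maximally entangled with the reference R, i.e. (V.2) with sigma = tau
psi_RA = np.array([1, 0, 0, 1]) / np.sqrt(2)
Psi_RBE = (np.kron(np.eye(2), V) @ psi_RA).reshape(2, 2, 2)   # indices (r, b, e)

rho_B = np.einsum('rbe,rce->bc', Psi_RBE, Psi_RBE.conj())
rho_E = np.einsum('rbe,rbf->ef', Psi_RBE, Psi_RBE.conj())
print(entropy(rho_B) - entropy(rho_E))           # positive for small damping p
```

For small damping the difference \(S(B)-S(E)\) is positive; it changes sign once the environment receives more of the state than the output does.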
That quantum channels are not "asymptotically" additive becomes quite obvious when you can show that two channels that each have zero \(C_{1}\) could in fact transmit quantum information when used in parallel (Smith and Yard, 2008), something that is completely impossible for classical channels. The true quantum capacity of a channel is
\[C_{Q}=\lim_{n\to\infty}\frac{1}{n}C_{1}(|\psi\rangle_{RA}^{\otimes n})\;,\] (V.5)
where \(|\psi\rangle_{RA}^{\otimes n}\) represents \(n\) copies of the state \(|\psi\rangle_{RA}\). However, the calculation of this limit \(n\to\infty\) is in most cases intractable (see, for example (Wilde, 2013), which should be consulted for a more comprehensive introduction to quantum channels). There are some exceptions: channels where the "regularization" [the limit \(n\to\infty\) in (V.5)] is unnecessary. One such example is the "symmetric" quantum channel: a channel where the outputs B and E in the channel are interchangeable. This is a very peculiar channel, because if B and E are interchangeable, then we can say that both B and E receive the same amount of quantum information. But this is impossible according to the no-cloning theorem, and as a consequence the quantum channel capacity of a symmetric channel vanishes. This situation is, perhaps, analogous to the classical binary channel where a bit is equally likely to be flipped or not. That channel also has zero capacity.
It turns out that the channel defined by the mapping (V.1) is a symmetric channel, as can be seen by the symmetry between the \(a\) modes in front of the horizon and the \(b\) modes behind it. As a consequence we know this capacity, and it is zero. This, in hindsight, is not surprising. The input state to this channel is zero-dimensional: the vacuum. We would need at least an entangled qubit as input to have a nonzero capacity.
We will now study a channel with input. We use a dual-rail encoding of the logical bit like before, but rather than using particles and anti-particles, we instead encode the qubit in two particle modes that are available to Alice, for example the basis states \(|10\rangle_{a}\) and \(|01\rangle_{a}\). This way, we can simplify the calculation by ignoring the anti-particle component in (III.5). Doing this will also allow us to drop the subscript \(k\), which we used mostly to indicate particle/anti-particle status. For simplicity, we will first show the calculation without the beam-splitter term [that is, Hamiltonian (III.5)], then later perform the full calculation with Hamiltonian (III.22).
Using \(U=e^{-iH}\) with Hamiltonian (III.5) on the input state \((\sigma|01\rangle_{a}+\tau|10\rangle_{a})|0\rangle_{b}\), it is not difficult to see that the channel we have constructed is in fact the same as the quantum cloner discussed in (IV.14). We did not calculate the quantum capacity of the quantum cloning channel, so we will do this now. First, consider the \(N\to M\) single use of the channel, which we term \(\mathcal{N}_{N\to M}\). The \(U\) in Fig. 5.1 is then the quantum cloning machine QCM in Fig. 4.2. Using the single-shot formula for the quantum capacity (V.3), Bradler et al. were able to show that (Bradler _et al._, 2010)
\[C_{1}(\mathcal{N}_{N\to M})=\log_{2}(M+1)-\log_{2}(M-N+1)=\log_{2}\left(\frac{ M+1}{M-N+1}\right)\;.\] (V.6)
While the calculation in (Bradler _et al._, 2010) was not performed in the context of black holes, once we realize that the quantum black hole channel is simply a cloning transformation, their results carry through. Of course, taking into account the redshift will modify these results, but using late-time particles as the signal (as we will do later) should recover this expression as late-time particles do not suffer a redshift.
Now, the full quantum channel is not an \(N\to M\) cloning machine, as the number of output particles is not fixed. Instead, the general channel is a superposition of cloning channels (Bradler, 2011). Let us focus on the case with a single input qubit. The channel \(\mathcal{N}\) is then the superposition
\[\mathcal{N}=\sum_{M=1}^{\infty}p_{M}\mathcal{N}_{1\to M}\;,\] (V.7)
where (Bradler and Adami, 2014; Bradler _et al._, 2010; Bradler, 2011)
\[p_{M}=\frac{1}{2}(1-z)^{3}M(M+1)z^{M-1}\;.\] (V.8)
Here (as before), \(z=e^{-\omega/T_{\rm BH}}\), and \(\sum_{M=1}^{\infty}p_{M}=1\). Note that the complex coefficients \(\sigma\) and \(\tau\) do not appear anywhere, which is due to the earlier observation that cloning channels are invariant under unitary rotations of the input state, so we can simply set \(\sigma=1\), for example.
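As a quick numerical sanity check of (V.8), the short sketch below verifies the normalization and shows how the typical number of clones \(M\) grows as \(z\to 1\); the summation cutoff is a numerical choice of the sketch, not part of the formula.

```python
import numpy as np

def p_M(M, z):
    # weights of the cloning-channel superposition, Eq. (V.8)
    return 0.5 * (1.0 - z) ** 3 * M * (M + 1) * z ** (M - 1)

for z in (0.1, 0.5, 0.9):
    M = np.arange(1, 5000)                 # summation cutoff: a numerical choice
    w = p_M(M, z)
    print(z, w.sum(), (M * w).sum())       # total weight ~ 1; mean number of clones grows as z -> 1
```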
We would like to calculate the one-shot capacity of the channel \(\mathcal{N}=\sum_{M=1}^{\infty}p_{M}\mathcal{N}_{1\to M}\), but up to this point we only know the capacity of a cloning channel with fixed \(N\) and \(M\), Eq. (V.6). Fortunately, it is possible to show that the capacity of a convex mixture of channels is equal to the mixture of capacities (see Appendix A in (Bradler and Adami, 2014)):
\[C_{1}(\mathcal{N})=\sum_{M=1}^{\infty}p_{M}C_{1}(\mathcal{N}_{1\to M})\;.\] (V.9)
However, as I remarked before, in general the one-shot capacity is not equal to the quantum capacity of the channel because in quantum physics calculating a capacity requires the regularization (V.5), which is intractable in the general case. On the other hand, we also saw that there are exceptions where regularization is not required. We will now see that the cloning channels are another such exception, but to understand this exception we have to first become familiar with another property of channels, termed _degradability_(Devetak and Shor, 2005).
Roughly speaking, a channel is said to be degradable if the output of the channel can be "degraded" in such a manner that it looks like noise. Recall that in our construction (Fig. 5.1) the channel \(\mathcal{N}\) maps the input state \(\rho_{A}={\rm Tr}_{R}|\psi\rangle_{RA}\langle\psi|\) to the output, that is \(\rho_{B}=\mathcal{N}(\rho_{A})\). The noise of the channel is represented by the channel from Alice to the environment, which is called the "complementary channel" \(\mathcal{N}_{c}\), so that \(\rho_{E}=\mathcal{N}_{c}(\rho_{A})\).
A degradable channel is one where there exists a mapping \(\mathcal{D}\) so that
\[\mathcal{D}(\rho_{B})=\rho_{E}\;,\] (V.10)
that is, where the map \(\mathcal{N}\) followed by the map \(\mathcal{D}\) equals the map \(\mathcal{N}_{c}\): \(\mathcal{N}_{c}=\mathcal{D}\circ\mathcal{N}\). It is not difficult to prove (see, for example, Appendix A in (Cubitt _et al._, 2008)) that all degradable channels have additive capacities, meaning that the regularization (V.5) is unnecessary and that therefore \(C_{Q}=C_{1}\).
Because a degradation always adds noise, a channel that is not degradable is one where Bob's output is noisier than Eve's. There are also channels where the environment "simulates" the channel itself (as opposed to the converse where the output simulates the noise). Such channels are called "anti-degradable", and naturally they require the existence of an anti-degrading map \(\mathcal{D}_{c}\) so that \(\mathcal{D}_{c}(\rho_{E})=\rho_{B}\). The relationship between these maps is sketched in Fig. 5.2. Let us check if we can find a degrading map for the cloning channel. To keep things simple, let us focus on the simplest \(1\to 2\) cloning machine. One way to describe this channel is via its action on an arbitrary qubit, written in the Bloch-state representation as
\[\rho_{A}=\frac{1}{2}(\mathbb{1}+\hat{n}\cdot\vec{\sigma})\;,\] (V.11)
where \(\hat{n}\in\mathds{R}^{3}\) is the unit vector in the Bloch sphere, and \(\vec{\sigma}\) are the Pauli matrices. The vector \(\hat{n}\) is determined by the complex coefficients \(\sigma\) and \(\tau\) of an arbitrary qubit, the exact form of which is not important for this argument. The optimal cloner returns (Bradler, 2011)
\[\rho_{B_{i}} = \frac{1}{2}(\mathbb{1}+\frac{2}{3}\hat{n}\cdot\vec{\sigma})\;,\] (V.12) \[\rho_{E} = \frac{1}{2}\left(\mathbb{1}+\frac{1}{3}(n_{x}\sigma_{x}-n_{y}\sigma_{y}+n_{z}\sigma_{z})\right)\;,\] (V.13)
for \(i=1,2\). One thing we can do to turn \(\rho_{B_{i}}\) into \(\rho_{E}\) is to apply a _depolarizing_ map that shrinks the Bloch vector by a factor of two. But this is not enough, as there is a minus sign multiplying \(\sigma_{y}\) in (V.13). However, applying a complex conjugation after the depolarization will indeed turn \(\rho_{B}\) into \(\rho_{E}\). Such combined maps are called "conjugate degrading" maps, and the cloning channel is therefore _conjugate degradable_. It is possible to prove that not only is the optimal \(1\to 2\) cloner conjugate degradable, but so are all \(N\to M\) cloners, and by extension the general cloning channel (V.7) (Bradler, 2011). Fortunately, conjugate degradable channels are _also_ additive, so that the capacity of the quantum cloning channel, and therefore the quantum capacity of the black hole channel (as they are one and the same thing), is given by (V.9). Evaluating (V.9) using (V.6) (with \(N=1\)) and (V.8), we arrive at the expression (Bradler and Adami, 2014)
\[C_{Q}=\frac{1}{2}(1-z)^{3}\sum_{M=1}^{\infty}M(M+1)z^{M-1}\log \left(\frac{M+1}{M}\right)\;,\] (V.14)
which is shown in Fig. 5.3 as a function of \(z=e^{-\omega/T_{\rm BH}}\). Note that unlike the classical capacity of the black hole, which was still positive even as the temperature of the black hole diverges (\(M_{\rm BH}\to 0\)), the quantum capacity vanishes in this limit.
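The series (V.14) is easy to evaluate numerically by truncating it at a large cutoff; the sketch below does this. The cutoff, and the use of a base-2 logarithm to express the result in qubits as in (V.6), are choices of the sketch.

```python
import numpy as np

def C_Q(z, M_max=20000):
    # truncated evaluation of the series (V.14); log base 2 gives the capacity in qubits
    M = np.arange(1, M_max, dtype=float)
    p = 0.5 * (1.0 - z) ** 3 * M * (M + 1) * z ** (M - 1)
    return float(np.sum(p * np.log2((M + 1) / M)))

for z in (0.01, 0.2, 0.5, 0.8, 0.99):
    print(z, round(C_Q(z), 4))   # ~1 qubit per use as z -> 0, tends to zero as z -> 1
```

The output reproduces the limiting behavior just described: one qubit per channel use as \(z\to 0\), and a vanishing capacity as \(z\to 1\).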
The previous discussion of the capacity of black holes to transmit quantum information was intentionally naive (we sent in information early during the formation of the black hole but neglected the red shift, and we did not treat scattering off the black hole horizon). However, we will see that we can reuse some of that calculation in the full case that we discuss now.
To correctly treat scattering off of the black hole horizon with particles sent towards the horizon long after the formation of the black hole, we define (as before, in Eq. (III.21)) the outgoing annihilation operator
\[A_{k} = e^{-iH}a_{k}e^{iH}=\alpha_{k}a_{k}-\beta_{k}b_{k}^{\dagger}+ \gamma_{k}c_{k}\;,\] (V.15)
except for simplicity we will use a scalar as opposed to a complex field since we will not need to describe anti-particles. In the following, I will only treat the case \(\alpha_{k}=0\) (a perfectly reflecting black hole, which we can call a "white hole"),
Figure 5.3: Quantum capacity of the channel (V.7) with a single incoming qubit (\(N=1\)) as a function of \(z=e^{-\omega/T_{\rm BH}}\).
and a perfectly absorbing black hole (\(\alpha_{k}=1\)). The reason I only treat these extreme cases is that, in this formalism, they are the only ones that are tractable. I will discuss the "gray hole channel" using the formalism of Gaussian states later.
### Quantum capacity of perfectly white holes
If we set \(\alpha_{k}=0\), the Bogoliubov transformation (V.15) becomes
\[A_{k}=\gamma_{k}c_{k}-\beta_{k}b_{k}^{\dagger}\;.\] (V.16)
This transformation is formally identical to the transformation (III.1), which was the naive description of black holes without scattering. Using \(\gamma_{k}^{2}=1+\beta_{k}^{2}\), the outgoing density matrix given \(m\) incident late-time particles is precisely (III.19). I previously pointed out (right after that equation) that it was not consistent to claim that the case treated there should be seen as a perfectly absorbing black hole (\(\Gamma=1\)), as detailed balance would be violated. In fact, we now see that the case described there (and therefore the case studied by Hawking in his initial publication (Hawking, 1975)) only consistently describes a perfectly _reflecting_ black hole: a white hole. This makes sense in hindsight: if you take a look at the Penrose diagram Fig. 3.1 that describes the time evolution of the operators \(a_{k}\) and \(b_{k}\), it is clear that the particles that travel towards future infinity just outside of the horizon (the modes \(a_{k}\)) never enter the horizon9. Thus, if viewed as the signaling particles, they are never absorbed and for them the black hole is a perfect mirror.
Footnote 9: This is also consistent with how Hawking introduced his gray body factor, by following particles from region II _backwards in time_ into region I. If particles can be transmitted with \(\Gamma=1\) from the inside to the outside (a black hole from the inside), then its time reversal must be a white hole with \(\Gamma=0\) from the outside.
We can now be confident that the capacity shown in Fig. 5.3, when treating late-time particles incident on the horizon, is in fact the quantum capacity of a white hole. It is finite for all but vanishing-mass black holes, which implies that (given suitable quantum error correction methods) Alice's quantum state can be perfectly reconstructed by Bob. Now let us take a closer look at the complementary channel (the one to Eve, that is, beyond the horizon). The quantum no-cloning theorem tells us that this capacity had better vanish, as otherwise quantum information could be perfectly reconstructed by both Eve and Bob.
It turns out that the complementary channel \({\cal N}_{c}\) gives rise to a quantum state straddling the horizon that is _separable_, meaning that the regions inside and outside of the horizon are not entangled (Bradler, 2011): the horizon has "broken" the potential entanglement between Eve and Alice. Channels that do this are known as "entanglement-breaking channels" (Bradler _et al._, 2010), and have zero capacity. In particular, it can be shown that all anti-degradable channels (see Fig. 5.2) are entanglement-breaking (Bradler _et al._, 2010). Since the cloning channel is degradable (along with being conjugate degradable), the no-cloning theorem ensures that the complementary channel is anti-degradable and therefore entanglement-breaking.
### Quantum capacity of perfectly black holes
We now study the case \(\alpha_{k}=1\), that is, black holes that do not reflect any of the incoming radiation. In this case, the Bogoliubov transformation reads
\[A_{k}=a_{k}-\beta_{k}b_{k}^{\dagger}+\gamma_{k}c_{k}\;,\] (V.17)
but because \(\alpha_{k}^{2}-\beta_{k}^{2}+\gamma_{k}^{2}=1\), setting \(\alpha_{k}=1\) implies \(\beta_{k}=\gamma_{k}\) [we previously identified this case with \(g_{k}=g_{k}^{\prime}\) in (III.5)]. Acting with \(U=e^{-iH}\) on an initial state without any incoming particles \(|000\rangle_{abc}\) using Hamiltonian (III.5) but with \(\alpha_{k}=1\) and \(\beta_{k}^{2}=\gamma_{k}^{2}\equiv g_{k}^{2}\) results in
\[U|000\rangle_{abc}=\frac{1}{1+\frac{1}{2}\beta_{k}^{2}}\sum_{n=0}^{\infty}\sum_{m=0}^{n}\left(\frac{2\beta_{k}}{2+\beta_{k}}\right)^{n}(-\beta_{k})^{m}\sqrt{\binom{n}{m}}|n-m\rangle_{a}|m\rangle_{b}|m\rangle_{c}\;.\] (V.18)
In order to encode a qubit in a dual-rail fashion, we also need to study how a single-late time particle (in mode \(c_{k}\)) fares under the transformation. We find (Bradler and Adami, 2014)
\[U|001\rangle_{abc} = \left(\frac{1}{1+\frac{1}{2}\beta_{k}^{2}}\right)^{2}\sum_{n=0}^{\infty}\sum_{m=0}^{n+1}\left(\frac{2\beta_{k}}{2+\beta_{k}}\right)^{n}(-\beta_{k})^{m}\sqrt{\binom{n}{m}}\times\] (V.19) \[\left(\sqrt{m+1}|n-m\rangle_{a}|n\rangle_{b}|m+1\rangle_{c}+g_{k}\sqrt{n-m+1}|n-m+1\rangle_{a}|n\rangle_{b}|m\rangle_{c}\right)\;.\]
If Alice's qubit is described by the density matrix \(\rho_{\rm in}\), we can show that the output matrix is a superposition of channels \(\mathcal{D}_{M}\)10 (where \(M\) is again the number of clones) (Bradler and Adami, 2014)
Footnote 10: We should not confuse the depolarizing map \(\mathcal{D}_{M}\) with the degrading map \(\mathcal{D}\) introduced in Eq. (V.10), as the depolarizing map is neither degrading nor anti-degrading.
\[\rho=\sum_{M=1}^{\infty}p_{M}\mathcal{D}_{M}(\rho_{\rm in}).\] (V.20)
Let us look at the first term, \(M=1\). The output under this channel can be calculated to be
\[\mathcal{D}_{1}(\rho_{\rm in})=\frac{1}{3}\rho_{\rm in}+\frac{1}{3}\mathbb{1}\;,\] (V.21)
where \(\mathbb{1}\) is the unit matrix. It turns out that this is the output of a _quantum depolarizing channel_ (Adami and Cerf, 1997; Calderbank and Shor, 1996; King, 2003), whose action on an input \(\rho_{\rm in}\) is given by \(\rho_{\rm depol}=(1-p)\rho_{\rm in}+\frac{p}{2}\mathbb{1}\) (\(p\) is the depolarizing parameter of the channel). Thus, the quantum channel for full absorption (\(\alpha_{k}=1\)) is a quantum depolarizing channel where the depolarizing parameter is given by the "classical" cloning fidelity \(F=2/3\), which we recall from section IV.2 is the worst cloning fidelity we can achieve. More importantly, it was shown in (Bradler, 2011) that a depolarizing channel with \(F=2/3\) is in fact entanglement-breaking, and therefore has zero capacity. Indeed, all channels \(\mathcal{D}_{M}\) in (V.20) have this property, and therefore the quantum capacity for a perfectly black hole vanishes.
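The entanglement-breaking property quoted above can be checked directly for the map (V.21): its Choi state has a positive partial transpose, and for two qubits positivity of the partial transpose is equivalent to separability. The following is a minimal numerical check of that statement (only the construction of the Choi state and its partial transpose is shown; the rest of the claim follows from the two-qubit Horodecki criterion).

```python
import numpy as np

def channel(rho):
    # linear extension of the map (V.21): rho -> (1/3) rho + (1/3) Tr(rho) * identity
    return rho / 3.0 + np.trace(rho) * np.eye(2) / 3.0

# Choi state: send one half of a maximally entangled pair through the channel
choi = np.zeros((4, 4), dtype=complex)
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2), dtype=complex)
        E[i, j] = 1.0
        choi[2 * i:2 * i + 2, 2 * j:2 * j + 2] = channel(E) / 2.0

# partial transpose on the reference qubit
pt = choi.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
print(np.linalg.eigvalsh(pt).min())   # >= 0 up to rounding: the Choi state is PPT, hence separable
```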
In hindsight, we could have guessed this result. After all, the quantum channel with \(\alpha_{k}=1\) is the _complementary_ channel for the \(\alpha_{k}=0\) channel, viewed from behind the horizon. And because that channel has positive capacity, the capacity of its complementary channel must vanish so that we cannot reconstruct quantum information in two different places.
Clearly, the channels with \(\alpha_{k}=0\) and those with \(\alpha_{k}=1\) are extreme cases. Because the reflectivity of a black hole depends primarily on the impact parameter of scattering, it is important to understand the black hole quantum channel for all values between \(\alpha_{k}=0\) and \(\alpha_{k}=1\). To study the capacity for black holes with arbitrary \(0\leq\alpha_{k}^{2}\leq 1\), we are going to have to deploy more sophisticated artillery.
### Quantum capacity of black holes with arbitrary transmissivity
Previously, I pointed out the relationship between the Bogoliubov transformation engendered by (III.5) and those that we encounter in quantum optics in order to motivate a discussion of quantum dynamics in terms of optical elements: the two-mode squeezer and the beam splitter. In this section we are going to use this analogy in order to marshal the considerable quantum optics literature of so-called _Gaussian states_, and study the quantum capacity of Gaussian channels. This will allow us to make some statements about the quantum capacity of black holes with arbitrary transmissivity, but we will also see that, because of the problem of regularization of quantum capacities, we cannot as yet answer all questions about the capacity of those channels.
Earlier, we discussed an encoding of information using states with defined particle number. For example, a logical zero would be encoded using \(m\) anti-particles, and a logical one would correspond to sending \(m\) particles instead. However, creating quantum states with defined particle number is exceedingly difficult. In standard quantum optics applications, it is more convenient to construct states with a defined _mean_ number of particles \(\langle m\rangle\) instead. A typical Gaussian quantum state with fixed mean number of particles is a _thermal state_, defined by the density matrix
\[\rho_{\rm therm}=\frac{1}{\langle m\rangle+1}\sum_{m=0}^{\infty}\left(\frac{ \langle m\rangle}{\langle m\rangle+1}\right)^{m}|m\rangle\langle m|\;.\] (V.22)
This is a thermal state because the mean number of particles \(\langle m\rangle=\text{Tr}(\rho_{\text{therm}}a^{\dagger}a)\) is
\[\langle m\rangle=\frac{e^{-\omega/T}}{1-e^{-\omega/T}}\;,\] (V.23)
and we immediately recognize that the output of a black hole channel without any incident particles (III.11) is, in fact, a thermal state with mean particle number \(\langle m\rangle=\beta_{k}^{2}\) in each mode \(k\).
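A two-line numerical check (with an arbitrary assumed value of \(\langle m\rangle\) and a finite Fock cutoff) confirms that the state (V.22) is normalized and has the advertised mean occupation (V.23):

```python
import numpy as np

m_mean, cutoff = 2.5, 400                     # assumed example value and Fock-space cutoff
m = np.arange(cutoff)
probs = (m_mean / (m_mean + 1.0)) ** m / (m_mean + 1.0)
print(probs.sum(), (m * probs).sum())         # ~ 1 and ~ m_mean, respectively
```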
Thermal states are a particular example of the more general Gaussian states, which are defined in terms of correlation matrices acting on _quadratures_, rather than the creation and annihilation operators that we have used throughout. The relation between quadrature operators \(q,p\) and creation/annihilation operators \(a^{\dagger},a\) for a single mode \(k\) is simply (I have reinstated \(\hbar\) here)
\[q_{k}=\sqrt{\frac{\hbar}{2}}(a_{k}+a_{k}^{\dagger})\;,\;p_{k}=i\sqrt{\frac{\hbar}{2}}(a_{k}^{\dagger}-a_{k})\] (V.24)
so that \(q_{k}\) and \(p_{k}\) obey the standard commutation relation
\[[q_{k},p_{k^{\prime}}]=i\hbar\delta_{kk^{\prime}}\;.\] (V.25)
To express an arbitrary density matrix \(\rho\) in the \(n\)-mode quadrature basis (Weedbrook _et al._, 2012), first we define the column vector of \(2n\) operators
\[\mathbf{x}=[q_{1},p_{1},\cdots,q_{n},p_{n}]^{\top}\] (V.26)
so that \(x_{1}=q_{1}\), \(x_{2}=p_{1}\), \(x_{3}=q_{2}\) and so forth until \(x_{2n}=p_{n}\). Then we can write the commutation relation for all \(2n\) operators as
\[[x_{i},x_{j}]=i\hbar\Omega_{ij}\;,\] (V.27)
where the matrix \(\mathbf{\Omega}\) is the direct sum of matrices \(\mathbf{\omega}\) for each mode:
\[\mathbf{\Omega}=\bigoplus_{k=1}^{n}\mathbf{\omega}=\begin{pmatrix}\mathbf{\omega}&&\\ &\ddots&\\ &&\mathbf{\omega}\end{pmatrix}\;,\;\mathbf{\omega}=\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\;.\] (V.28)
The first moment of a density matrix \(\rho\) can then be expressed in this basis as
\[\mathbf{\bar{x}}=\text{Tr}(\rho\mathbf{x}).\] (V.29)
The all-important _second_ moment of \(\rho\) is given by the matrix \(\mathbf{V}\) with elements
\[V_{ij}=\frac{1}{2}\text{Tr}(\rho\{\Delta x_{i},\Delta x_{j}\})\] (V.30)
where \(\Delta x_{i}=x_{i}-\bar{x}_{i}\) and \(\{\;,\;\}\) is the anti-commutator. Gaussian quantum states are then defined as those states for which higher-order moments beyond the second moment vanish, and the covariance matrix \(\mathbf{V}\) is a real symmetric matrix that satisfies the uncertainty principle (Weedbrook _et al._, 2012)
\[\mathbf{V}+i\frac{\hbar}{2}\mathbf{\Omega}\geq 0\;.\] (V.31)
The "positivity" requirement for a matrix such as that written in (V.31) stipulates that all the eigenvalues of the matrix need to be positive, which puts constraints on the real-valued elements.
With these preliminaries out of the way, we can study how Gaussian states behave under the transformations of the black hole channel. As before, we need to look at how the Bogoliubov transformation affects the two vacuum modes \(a_{k}\) and \(b_{k}\), as well as the signal mode \(c_{k}\). We can write this transformation in matrix form [see Eqs. (III.23-III.25) for the definitions of \(\alpha_{k}\), \(\beta_{k}\), and \(\gamma_{k}\)]
\[\begin{pmatrix}A_{k}\\ B_{k}^{\dagger}\\ C_{k}\end{pmatrix}=\begin{pmatrix}\alpha_{k}&-\beta_{k}&\gamma_{k}\\ -\beta_{k}&1+\frac{\beta_{k}^{2}}{1+\alpha_{k}}&-\frac{\beta_{k}\gamma_{k}}{1+\alpha_{k}}\\ -\gamma_{k}&\frac{\beta_{k}\gamma_{k}}{1+\alpha_{k}}&1-\frac{\gamma_{k}^{2}}{1+\alpha_{k}}\end{pmatrix}\begin{pmatrix}a_{k}\\ b_{k}^{\dagger}\\ c_{k}\end{pmatrix}\;.\] (V.32)
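Since (V.32) is a Bogoliubov transformation, it must preserve the commutation relations of the modes \((a_{k},b_{k}^{\dagger},c_{k})\); in matrix form this is the condition \(M\eta M^{\top}=\eta\) with \(\eta={\rm diag}(1,-1,1)\). The sketch below checks this for arbitrary test values of \(\alpha_{k}\) and \(\gamma_{k}\) (the numbers themselves carry no physical significance):

```python
import numpy as np

alpha, gamma = 0.6, 0.9                        # arbitrary test values
beta = np.sqrt(alpha**2 + gamma**2 - 1.0)      # from alpha^2 - beta^2 + gamma^2 = 1
M = np.array([
    [alpha, -beta,                          gamma],
    [-beta,  1 + beta**2 / (1 + alpha),    -beta * gamma / (1 + alpha)],
    [-gamma, beta * gamma / (1 + alpha),    1 - gamma**2 / (1 + alpha)],
])
eta = np.diag([1.0, -1.0, 1.0])
print(np.allclose(M @ eta @ M.T, eta))         # True: commutators of (A, B^dag, C) are preserved
```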
We now have to write this transformation in terms of operators acting on quadratures instead. When we do this (see (Bradler and Adami, 2015) for the details), we can write the action of the channel in terms of its effect on the covariance matrix of an input Gaussian state (the first moments of the Gaussian state can always be set to zero). In particular, an input Gaussian one-mode state (sent in at late time) \(\mathbf{V}_{\rm in}\) transforms with a transmission matrix \(\mathbf{T}\) and a noise matrix \(\mathbf{N}\) as
\[\mathbf{V}_{\rm out}=\mathbf{T}\mathbf{V}_{\rm in}\mathbf{T}^{\top}+\mathbf{N}\;.\] (V.33)
Here, \(\mathbf{T}\) and \(\mathbf{N}\) are \(2\times 2\) matrices (they act on the single late-time mode) that take the very simple (and diagonal) form
\[\mathbf{T}=\begin{pmatrix}\sqrt{\gamma_{k}^{2}}&0\\ 0&\sqrt{\gamma_{k}^{2}}\end{pmatrix}\quad\mathbf{N}=\begin{pmatrix}\alpha_{k}^{2} +\beta_{k}^{2}&0\\ 0&\alpha_{k}^{2}+\beta_{k}^{2}\end{pmatrix}\;.\] (V.34)
This form is particularly pleasing because it allows us to compare this channel directly to a complete characterization of all possible one-mode Gaussian (OMG) channels that previously appeared in the literature (Schafer _et al._, 2013; Weedbrook _et al._, 2012). Of the eight channels listed there, three make an appearance in the black hole Gaussian channel. First, let us examine the parameter space of this channel, which is characterized by its transmission potential (parameterized by \(\gamma_{k}^{2}\)), and the noise level (described by \(\alpha_{k}^{2}+\beta_{k}^{2}\)). This much is not surprising: \(\gamma_{k}\), after all, is the amplitude of the Bogoliubov transformation affecting our signal state, while the black hole's modes \(a_{k}\) and \(b_{k}^{\dagger}\) are in a vacuum state and provide the noise to the channel.
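To see the channel (V.33)-(V.34) in action, the sketch below applies it to a vacuum input and verifies that the output covariance matrix is physical. It assumes the quadrature normalization in which the vacuum covariance matrix is the identity, which is the convention consistent with the form of the noise matrix written in (V.34); the parameter values are again arbitrary test values.

```python
import numpy as np

alpha, gamma = 0.6, 0.9                        # arbitrary test values for the channel parameters
beta = np.sqrt(alpha**2 + gamma**2 - 1.0)
T = np.sqrt(gamma**2) * np.eye(2)              # transmission matrix, Eq. (V.34)
N = (alpha**2 + beta**2) * np.eye(2)           # noise matrix, Eq. (V.34)
V_in = np.eye(2)                               # vacuum covariance in this convention
V_out = T @ V_in @ T.T + N                     # Eq. (V.33)
omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.linalg.eigvalsh(V_out + 1j * omega).min() >= 0)   # physical output covariance: True
```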
Fig. 5.4 depicts the OMG channel parameterized by the noise level and the transmissivity of the channel.
While the regions in gray are not physical channels, we would like to calculate the capacity of the channel to transmit quantum information for the allowed region. Different conditions lead to the disallowed regions: two of them stem from the fact that the black hole's diffraction parameter \(\alpha_{k}\) is bounded from below and above by zero and 1, respectively. In fact, these bounds also correspond to the positivity condition for the output Gaussian state, which is equivalent to the condition \(g_{k}^{\prime}\geq g_{k}\). The second condition that rules out the lower left triangle in Fig. 5.4 is the condition that the mean particle number \(\langle n_{k}\rangle\geq 0\), as \(\alpha_{k}^{2}+\beta_{k}^{2}=(2\langle n_{k}\rangle+1)(1-\gamma_{k}^{2})\geq 1-\gamma_{k}^{2}\), using \(\langle n_{k}\rangle=\frac{\beta_{k}^{2}}{\alpha_{k}^{2}-\beta_{k}^{2}}\), consistent with (V.23).
Within the family of OMG channels, the channel with \(\gamma_{k}^{2}<1\) is known as the "lossy channel" \(\mathcal{C}_{\rm loss}\) (area to the left of the vertical dashed line in Fig. 5.4). For black holes, this corresponds to the cases where the effective transmissivity \(\Gamma=1-\gamma_{k}^{2}\) lies between zero and one (recall that the effective transmissivity is strictly smaller than the black hole "bare" transmissivity \(\alpha_{k}^{2}\), the parameter that characterizes the beam-splitter, since \(\Gamma=\alpha_{k}^{2}(1-e^{-\omega_{k}/T_{\rm BH}})\)). OMG channels exist for which \(\gamma_{k}^{2}>1\): these are the so-called "amplifying channels", which, as the name implies, _amplify_ the incoming signal. While it is unclear whether black holes exist with such a property, we'll discuss these hypothetical channels here for completeness.
The allowable region of the channel is bounded by the two lines \(\alpha_{k}^{2}+\beta_{k}^{2}=1+\gamma_{k}^{2}\) and \(\alpha_{k}^{2}+\beta_{k}^{2}=-1+\gamma_{k}^{2}\), which correspond to the two cases for which we have been able to calculate the quantum capacity of the black hole channel in section V.1 (\(\alpha_{k}^{2}=0\)) and section V.2 (\(\alpha_{k}^{2}=1\)). For the latter case we determined that the quantum capacity vanishes, and indeed the analysis of (Caruso _et al._, 2006) revealed that the entire region between \(\alpha_{k}^{2}=0.5\) (the balanced beam splitter) and \(\alpha_{k}^{2}=1\) corresponds to a channel that can be written as the composition of an arbitrary channel and an anti-degradable channel, and therefore must have zero capacity11. This area is shaded in yellow in Fig. 5.5.
Footnote 11: We noted earlier that anti-degradable channels are additive and must have zero capacity as they are entanglement-breaking.
We can now attempt to calculate the quantum capacity for the lossy channel. To do this, we must calculate the coherent information of the channel, and optimize it. As we discussed previously, this can only be achieved if the quantum capacity is _additive_, but for the channels with \(\alpha_{k}^{2}<1/2\) we do not know this. However, if we can derive a _positive lower bound_ using the additive capacity (V.14), then we can be assured that the capacity is positive. If the lower bound is zero, then we can only say that it is _possible_ that the capacity is non-zero, but we simply do not know.
Let us then find out under what circumstances the limit \(n\to\infty\) of the "\(n\)-shot" coherent information \(\lim_{n\to\infty}{\cal I}_{\rm coh}(n)\) is positive. The coherent information \({\cal I}_{\rm coh}(n)\) has previously been calculated for both the lossy and the amplifying channel. In particular, it is possible to show that (Bradler and Adami, 2015)
\[{\cal I}_{\rm coh}(n)=g(n)-2g(\xi)\;,\] (V.35)
where \(\xi=\sqrt{1+4n\alpha^{2}}\) and
\[g(x)=(1+x)\log(1+x)-x\log x\;.\] (V.36)
It turns out that in the limit \(n\to\infty\), \(g(\xi)\) vanishes (Bradler, 2015), while
\[{\cal I}_{\rm coh}=\lim_{n\to\infty}g(n)=\frac{\beta_{k}^{2}}{\alpha_{k}^{2}- \beta_{k}^{2}}\log\frac{\beta_{k}^{2}}{\alpha_{k}^{2}}+\log\frac{\gamma_{k}^{ 2}}{\alpha_{k}^{2}}\;.\] (V.37)
The dashed curve in Fig. 5.5 corresponds to the boundary where \({\cal I}_{\rm coh}=0\). We thus see that for parameters where \({\cal I}_{\rm coh}>0\) (purple region in Fig. 5.5) the quantum capacity must be positive (as it is lower-bounded by (V.37)), while the quantum capacity in the white region still cannot be determined.
For \(\gamma_{k}^{2}>1\), the expression (V.37) also turns out to be a lower bound (Bradler and Adami, 2015). The case \(\gamma_{k}=1\), however, has to be treated separately. In this limit \({\cal I}_{\rm coh}\) diverges as \(n\to\infty\); however, this channel is trivial: it represents the so-called "zero-added classical noise" channel (Holevo, 2007), termed \({\cal B}_{2}\) in (Schafer _et al._, 2013). For
Figure 5.5: Quantum capacity for the black hole OMG channel. The region in yellow has vanishing capacity, while for the purple region we have a non-zero lower bound for the capacity. For the region in white (bounded from below by \({\cal I}_{\rm coh}=0\)), the capacity is currently not known because it could be super-additive.
this channel, the transmission matrix \(\mathbf{T}\) in (V.34) is the identity, and the noise vanishes. Such channels have infinite capacity also in classical physics (Cover and Thomas, 1991).
To summarize this section, we have seen that it is possible to make statements about the quantum capacity of black holes for values of the beam-splitter variable \(\alpha_{k}^{2}\) other than the extreme cases \(\alpha_{k}=0\) (for which we saw that the capacity is positive), and \(\alpha_{k}=1\), where the capacity vanishes. We saw that when the beam-splitter absorbs more than it reflects (\(\alpha_{k}^{2}\geq 1/2\)) then the quantum capacity must vanish (so as to conform to the no-cloning theorem). When, in turn, the beam-splitter reflects more than it transmits (\(0\leq\alpha_{k}^{2}<1/2\)) the capacity is positive for some parameters (those for which \(\mathcal{I}_{\rm coh}>0\)), but since for the remaining parameter region \(\mathcal{I}_{\rm coh}\leq 0\), we cannot establish whether the quantum capacity is positive since \(\mathcal{I}_{\rm coh}\) is only a lower bound to the capacity. Needless to say at this point: a vanishing quantum capacity does not point to a flaw in the laws of physics. Rather, when it vanishes it does so because we must _conform_ to the laws of physics, which stipulate that quanta cannot be cloned.
## VI Unitary evaporation of black holes
In everything we have been discussing up to this point, the black hole was treated as a static quantity: it had already formed, and its mass was fixed at \(M_{\rm BH}\). This approximation, which essentially treats the gravitational force as a _background_ field, was necessitated by taking the static-path approximation to the time-dependent operator (III.26). In this approximation, the back-reaction of the radiation on the metric field is neglected: that is the essence of the semi-classical approach.
However, this assumption also precludes us from studying the evaporation of the black hole microscopically. Hawking noticed early on that the energy of the outgoing Hawking radiation must be provided by the black hole, and that therefore the black hole must ultimately disappear. But this seemed to open up another fundamental problem: if we were to assume that a black hole can form from a quantum mechanical pure state (a state \(\rho\) with vanishing von Neumann entropy) that in the future produces Hawking radiation with entropy \(S_{H}\), then (since Hawking radiation is thermal) the final state after black hole evaporation would be a mixed state with positive entropy (Hawking, 1976). However, in a closed system such a transition from a pure state to a mixed state is forbidden: it is tantamount to the non-conservation of probability, a state of affairs I have previously referred to as an abomination.
We contemplated this abomination when it appeared that classical information was lost inside of the black hole, but were able to recognize in the previous sections that all these problems arise simply from ignoring the stimulated emission process. However, there is no "incoming" signaling particle when discussing black hole evaporation, so stimulated emission will not help us understand how this process unfolds. To understand how black hole evaporation returns space-time to the pure state it started out as, we need a description of the interaction of black holes with radiation that goes beyond the semi-classical approach. The dynamics that such a treatment should reveal is that of the celebrated "Page curves". Page first discussed how the quantum entropy of one system might depend on the "size" of a subsystem that it is entangled with, while both together are in a pure state. Specifically, Page asks us to imagine a pure initial state formed using \(n\) particles, \(|\psi\rangle_{\rm in}=|n\rangle\). After this state is entangled with another system, the density matrix of the outgoing system becomes \(\rho_{\rm out}=\sum_{i=0}^{n}p_{i}|i\rangle_{\rm out}\langle i|\), with entanglement entropy
\[S_{e}(\rho_{\rm out})=-\sum_{i=0}^{n}p_{i}\log p_{i}\;.\] (VI.1)
The maximal entropy is reached when all the states \(|i\rangle\langle i|\) are equiprobable, so that \(S_{\rm max}=\log(n+1)\). Page imagined that as the black hole pure state decoheres, the entanglement entropy of the outgoing radiation must also increase until it reaches its maximal value (Page, 1993). As the black hole continues to evaporate, Page argued (using a toy quantum mechanical model) that the entanglement entropy must start to _decrease_ (after a time now dubbed the "Page time"), as in Fig. 6.1.
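Page's toy argument can be illustrated with a few lines of numerics: for Haar-random pure states of \(m\) qubits, the average entanglement entropy of a subsystem of \(k\) qubits traces out the characteristic rise-and-fall shape of Fig. 6.1 (here plotted against subsystem size rather than time). The values of \(m\) and the number of samples below are arbitrary choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
m, samples = 10, 20                            # total number of qubits and Haar samples (arbitrary)
for k in range(m + 1):
    S = 0.0
    for _ in range(samples):
        psi = rng.normal(size=2**m) + 1j * rng.normal(size=2**m)
        psi /= np.linalg.norm(psi)             # Haar-random pure state of m qubits
        s = np.linalg.svd(psi.reshape(2**k, 2**(m - k)), compute_uv=False)
        p = s**2
        p = p[p > 1e-15]
        S += float(-np.sum(p * np.log2(p))) / samples
    print(k, round(S, 3))                      # rises toward ~ m/2 bits near k = m/2, then falls
```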
If a black hole evaporated via a unitary process (that is, if black hole evaporation can be described by an \(S\)-matrix), Page argued that ultimately the entanglement entropy of the black hole must disappear, leaving only a vacuum with zero entropy, in contradiction to Hawking's assessment that black holes must turn pure states into mixed states. Unfortunately, the semi-classical treatment of black holes prevents us from testing this prediction directly: we do not know what the interaction Hamiltonian \(H_{\rm int}\) in (III.3) is (whose matrix elements would form the black hole \(S\)-matrix). Indeed, to proceed we had used the _free-field_ Hamiltonian consisting of the two-mode squeezer and the beam-splitter analogues, which, along with using the single-time-slice approximation of the path integral, gave rise to a consistent picture of black hole dynamics when interacting with classical or quantum information.
It is important to realize at this juncture that the two-mode squeezing (or "optical parametric amplifier", OPA) Hamiltonian (III.14) is itself an approximation that assumes that the number of pump quanta is so large that the
down conversion process does not change the "store" of pump quanta. In other words, in this approximation it is assumed that the down-conversion process does not "react back" on the pump, which is therefore "undepletable", much like the black hole mass is held constant in the semi-classical approximation. But unlike in quantum gravity where we do not know how to move beyond this approximation, in quantum optics it is possible to write down the interaction between the pump modes and the signal and idler modes that represents the canonical _extension_ of the OPA to depletable pumps: it is a _tri-linear_ Hamiltonian
\[H_{\text{tri}}=ir(d_{p}a_{s}^{\dagger}b_{i}^{\dagger}-d_{p}^{\dagger}a_{s}b_{ i})\;.\] (VI.2)
Here, the annihilation operators \(a_{s}\) and \(b_{i}\) refer to the signal and idler modes as before, but \(d_{p}\) and \(d_{p}^{\dagger}\) create and annihilate _pump_ modes instead. The coupling constant \(r\) is related to the gain \(\eta\) of the OPA in Eq. (III.14) and the expected number of pump modes, and is in principle time-dependent.
Given that the quantum optics analogy has been so successful when transferred to black hole dynamics, what if we used the interaction Hamiltonian (VI.2) to calculate the black hole \(S\)-matrix, where the pump modes play the role of black hole modes, and the signal and idler modes are identified with the Hawking and partner modes (just as before)? This was in fact attempted by Nation and Blencowe (Nation and Blencowe, 2010), and later by Alsing (Alsing, 2015). Both found that the entanglement entropy of the Hawking modes decreases after reaching a maximum, but they could not reproduce Page curves because, using effectively a one-time-slice or "static path" approximation (SPA) of the path integral as in (III.3), the calculation quickly became unreliable as the time step \(\Delta t\) is taken to be large. A good introduction to the quantum optics/black hole physics analogy using trilinear Hamiltonians can be found in (Florez Gutierrez, 2022).
We will now see what happens if we use the tri-linear Hamiltonian (VI.2) to calculate the \(S\)-matrix of black hole evaporation, by going beyond the SPA and approximating the black hole \(S\) matrix using enough time slices that \(\Delta t\) can be kept small. In this way, we can follow the evaporation of the black hole (or, in the words of quantum optics, the depletion of the pump) accurately as long as the number of initial quanta \(n\) is not too high. While for black holes the number \(n\) surely must be astronomical, we will have to keep this number comparatively small since the evaluation of the path integral can only be done numerically.
We begin by writing the initial state at \(t=0\) as
\[|\Psi(0)\rangle=|n\rangle_{d}|0\rangle_{a}|0\rangle_{b}\equiv|n\rangle_{d}|0 \rangle_{ab}\;.\] (VI.3)
Here, the Hawking modes (annihilated by operators \(a_{k}\) in region I, as in Fig. 3.1) and the partner modes (annihilated by \(b_{k}\) in region II) are interacting with black hole modes created and annihilated by \(d_{k}^{\dagger}\) and \(d_{k}\), see Fig. 6.2.
We write the time evolution of the joint state \(|\Psi(t)\rangle\) in terms of the black hole \(S\)-matrix acting on \(|\Psi(0)\rangle\) as
\[|\Psi(t)\rangle=S(t,0)\,|\Psi(0)\rangle=\mathsf{T}e^{-i\int_{0}^{t}H_{\text{tri}}(t^{\prime})dt^{\prime}}\,|n\rangle_{d}\,|0\rangle_{ab}\;,\] (VI.4)
using the trilinear Hamiltonian
\[H_{\text{tri}}=\sum_{k=-\infty}^{\infty}ir_{\omega_{k}}(t)\big{(}d_{k}a_{k}^{ \dagger}b_{k}^{\dagger}-d_{k}^{\dagger}a_{k}b_{k}\big{)}\;.\] (VI.5)
Figure 6.1: A typical Page curve showing the entanglement entropy of Hawking radiation as a function of the "size" of the subsystem (determined by the number of particles in the subsystem). If we assume that the subsystem size increases as the black hole evaporates, we can take the subsystem size as a measure of elapsed time since formation of the black hole from a pure state.
Here, \(r_{\omega_{k}}(t)\) is the time-dependent coupling strength that sets the Hawking temperature \(T_{\rm BH}(t)\) and the black hole mass \(M_{\rm BH}(t)=\frac{1}{8\pi T_{\rm BH}(t)}\), via the standard relation (Nation and Blencowe, 2010)
\[T_{\rm BH}(t)=\frac{\omega_{k}}{2\ln\coth r_{\omega_{k}}(t)}\;.\] (VI.6)
In the following, I will again focus on a single mode \(k\) with energy \(\omega_{k}\), and omit the index \(k\) for convenience.
In order to evaluate (VI.4), we need to introduce small time slices \(\Delta t\) that allow us to discretize the path integral so that with \(t=N\Delta t\)
\[U(t)=\mathsf{T}e^{-i\int_{0}^{t}H_{\rm tri}(t^{\prime})dt^{\prime}}\approx\prod_{i=1}^{N}e^{-i\Delta tH_{i}}\;.\] (VI.7)
In (VI.7), the \(i\)-th time-slice Hamiltonian \(H_{i}=ir_{0}\big{(}da_{i}^{\dagger}b_{i}^{\dagger}-d^{\dagger}a_{i}b_{i}\big{)}\) acts on the black hole state and the \(i\)-th slice of the \(ab\) Hilbert space \(\ket{0}_{a_{i}b_{i}}\). The initial value of the coupling strength \(r_{0}\) simply sets the energy scale, and we can set \(r_{0}=1\) in the following without loss of generality.
Let us now apply the discretized (VI.7) to the initial state, so that (Hillery and Zubairy, 1982)
\[\ket{\Psi(t)}_{dab}=W\ket{n}_{d}\ket{0}_{ab}=\prod_{i=1}^{N}e^{-i\Delta tH_{i}}\ket{n}_{d}\ket{0}_{ab}\;,\] (VI.8)
where I defined the time-sliced basis
\[\ket{0}_{ab}\stackrel{{\rm df}}{{=}}\ket{0}_{a_{N}b_{N}}\otimes \ldots\otimes\ket{0}_{a_{1}b_{1}}\] (VI.9)
as well as the unitary operator acting on time slice \(i\)
\[W^{(i)}=e^{-i\Delta tH_{i}}\] (VI.10)
so that
\[W=W^{(N)}\otimes\ldots\otimes W^{(1)}\.\] (VI.11)
The assumption that the basis states for each time slice appear as product states in Eq. (VI.8) implies that after a black hole mode has been converted to Hawking and partner modes, those modes never interact with the black hole again, as depicted in Fig. 6.3(a).
While this is certainly reasonable for the outgoing Hawking modes, this is questionable for the partner modes \(b\) behind the black hole horizon. In fact, this is an approximation that is also often made in quantum optics, where the non-linear crystal is assumed to be so thin that the two modes (the signal and idler modes) do not interact with the crystal degrees of freedom after they have been produced. We will test this assumption later by allowing the \(b\) modes to interact with the black hole again, as depicted in Fig. 6.3(b).
Let's evaluate the first time slice:
\[\ket{\Psi(1)}=W^{(1)}\ket{n}_{d}\ket{0}_{a_{1}b_{1}}=\sum_{j=0}^{n}U^{(1)}_{nj }\ket{j}_{d}\ket{n-j}_{a_{1}b_{1}}\,\] (VI.12)
Figure 6.2: Schematics of a black hole (approximate Schwarzschild radius in bold) where black hole modes \(d_{k}\) are transformed into Hawking modes \(a_{k}\) and partner modes \(b_{k}\) at the horizon.
with amplitudes \(U^{(1)}_{nj}\) determined below. The probability \(p(j|n)=|U^{(1)}_{nj}|^{2}\) is the probability that the \(n\) black hole quanta are reduced to \(j\) in one interaction, the difference \(n-j\) having been converted into Hawking and partner modes, and I will outline its calculation (and others like it) below. The full quantum state after time \(t\) in this approximation becomes
\[|\Psi(t)\rangle_{dab}=\sum_{j_{1}\dots j_{N}}U^{(1)}_{nj_{1}}\dots U^{(N)}_{j_{N-1}j_{N}}|j_{N}\rangle_{d}|j_{N-1}-j_{N}\rangle_{a_{N}b_{N}}\dots|n-j_{1}\rangle_{a_{1}b_{1}}\;.\] (VI.13)
As I pointed out earlier, it is possible to approximate the path integral using a single time-slice in the static path approximation (SPA), see e.g. (Arve _et al._, 1988; Lang _et al._, 1993). Such an approximation can yield good results at very low temperatures, when self-consistent temporal fluctuations can safely be ignored. However, SPA calculations of the black hole entropy using the trilinear Hamiltonian lead to an oscillating behavior of the black hole entropy (Alsing, 2015; Nation and Blencowe, 2010), suggesting that self-consistency of fluctuations is an important element of Page curves.
Using the time-dependent out-density matrix
\[\rho_{\rm out}(t)=|\Psi(t)\rangle_{dab}\langle\Psi(t)|\] (VI.14)
we can define the black hole density matrix \(\rho_{\rm BH}\) by tracing over the Hawking and partner modes:
\[\rho_{\rm BH}(t)={\rm Tr}_{ab}\,\rho_{\rm out}(t)\;,\] (VI.15)
so that the black hole entropy is
\[S_{\rm BH}(t)=-{\rm Tr}_{d}\,\rho_{\rm BH}(t)\log\rho_{\rm BH}(t)\;.\] (VI.16)
The density matrix \(\rho_{\rm BH}\) can be written entirely in terms of the probabilities \(p(j|i)=|U_{ij}|^{2}\) introduced earlier, which stand for the probability that \(i\) black hole quanta are reduced to \(j\), the difference having been emitted as Hawking/partner modes (there are always as many partner modes as there are Hawking modes since they are always created in pairs). We find
\[\rho_{\rm BH}(t)=\sum_{j_{N}=0}^{n}p_{j_{N}}|j_{N}\rangle\langle j_{N}|\] (VI.17)
where
\[p_{j_{N}}=\sum_{j_{1}>j_{2}>\dots>j_{N-1}}^{n}|U^{(1)}_{nj_{1}}|^{2}\dots|U^{( N-1)}_{j_{N-2}j_{N-1}}|^{2}|U^{(N)}_{j_{N-1}j_{N}}|^{2}.\] (VI.18)
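The iteration behind (VI.15)-(VI.18) can be mimicked numerically for very small black holes without the Dyck-path machinery discussed below, simply by exponentiating the slice Hamiltonian on a truncated Fock space and tracing out each freshly created pair of Hawking and partner modes. The sketch below does this; the values of \(n\), \(r_{0}\), \(\Delta t\) and the number of slices are arbitrary assumptions chosen only so that the run finishes quickly, and the truncation is of course far too small to represent a realistic black hole.

```python
import numpy as np
from scipy.linalg import expm

n = 5                                # initial black hole quanta (kept tiny; an assumption)
dim = n + 1                          # Fock-space truncation for each of the d, a, b modes
r0, dt, n_slices = 1.0, 0.1, 1000    # coupling, slice width, number of slices (assumed values)

ann = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # truncated annihilation operator
I = np.eye(dim)
d_op = np.kron(np.kron(ann, I), I)
a_op = np.kron(np.kron(I, ann), I)
b_op = np.kron(np.kron(I, I), ann)

# slice Hamiltonian H_i = i r0 (d a^dag b^dag - d^dag a b) and its unitary
H = 1j * r0 * (d_op @ a_op.conj().T @ b_op.conj().T - d_op.conj().T @ a_op @ b_op)
U = expm(-1j * dt * H)

def entropy(rho):
    w = np.linalg.eigvalsh((rho + rho.conj().T) / 2)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

vac_ab = np.zeros(dim * dim)
vac_ab[0] = 1.0                                      # fresh |00>_ab for every slice
rho_bh = np.zeros((dim, dim))
rho_bh[n, n] = 1.0                                   # initial pure state |n><n|_d

entropies = []
for _ in range(n_slices):
    rho = np.kron(rho_bh, np.outer(vac_ab, vac_ab))  # attach a fresh pair of modes
    rho = U @ rho @ U.conj().T                       # one time slice, Eq. (VI.10)
    rho = rho.reshape(dim, dim**2, dim, dim**2)
    rho_bh = np.einsum('ikjk->ij', rho)              # trace out the emitted pair, Eq. (VI.15)
    entropies.append(entropy(rho_bh))

print(max(entropies), entropies[-1])   # the entropy rises from zero, peaks, and decays back toward zero
```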
Figure 6.3: Action of the discretized \(S\)-matrix \(W^{(i)}\) on the discretized basis states. (a) Assuming that once a black hole mode is converted, the Hawking and partner modes never interact with the black hole again allows us to write the sliced basis as a product (VI.8). (b) Relaxing this assumption for the \(b\) modes creates a far more complex set of interactions.
The probabilities \(p(j|i)=|U_{ij}|^{2}\) are difficult to evaluate. Unlike in the case when we were dealing with the "free-field" Hamiltonian (III.5) that allowed the associated \(U=e^{-iH}\) to be factorized using the SU(2) and SU(1,1) disentangling theorems, the unitary operator \(W^{(i)}\) does not appear to be factorizable in a simple way. The usual formal factorization formulas (Magnus, 1954; Suzuki, 1976; Trotter, 1959) are not suitable for practical calculations.
In the absence of a disentanglement decomposition of \(W^{(i)}\), we might entertain the idea of simply performing a Taylor expansion of the exponential in (VI.10) in terms of \(r_{0}t\). However, even for moderate \(r_{0}t\), the Taylor expansion is prohibitively inefficient, requiring on the order of \(2^{500}\) terms for \(n=50\) and \(\Delta t=1/15\). Fortunately, a method developed by Bradler (Bradler, 2015) makes it possible to evaluate matrix elements of \(W^{(i)}\) in terms of an integer lattice known as a generalized Dyck path (Stanley, 1999) as long as \(W^{(i)}\) acts on any state generated by the repeated action of \(da^{\dagger}b^{\dagger}\) on a ground state \(|0\rangle\), defined by \(d^{\dagger}ab|0\rangle=0\). It so happens that the basis elements \(\{|j\rangle_{d}|00\rangle_{ab}\}_{j=0}^{n}\) spanning the input Hilbert space of \(W^{(i)}\) are all ground states of \(H_{i}\).
Using Bradler's nearly miraculous Dyck-path representation of \(W^{(i)}\) (which generates a polynomial rather than exponential number of terms) we can evaluate \(\rho_{\rm BH}\) for black holes with initial quanta up to \(n=50\), using the discretized path integral (VI.13). Fig. 6.4 shows the black hole entropy \(S_{\rm BH}\) as a function of the number of time slices used, for a small \(\Delta t=1/15\), for black holes with \(n=5\), \(n=20\), and \(n=50\). As the maximal entropy of a black hole with \(n\) initial modes is \(\log(n+1)\) (counting the \(n\) states plus the vacuum state), we show in Fig. 6.4 the _normalized_ entropy in order to be able to compare the shape of the curves as the size of the black hole is changed. The resulting entropy curve turns out to be strikingly similar to the one predicted by Page (Page, 1993) as long as we observe evaporation for long enough (several thousand time slices). Most importantly, the entropy, which starts at zero because the initial state is pure, reaches a maximum (at about the time when half the black hole quanta have been converted, see (Florez Gutierrez, 2022)) and appears to vanish as \(t\to\infty\), where the final black hole density matrix \(\rho_{\rm BH}\) approaches a pure vacuum state \(|0\rangle_{d}\) in the limit \(N\to\infty\), for all the input basis states \(|n\rangle\) tried. It thus appears that, from the formation to the decay of black holes, pure states are turned into pure states, and the laws of physics remain inviolate.
Note that using just \(n=50\) initial modes already gives rise to an extremely large Hilbert space. Using a Taylor expansion of \(W^{(i)}\) with \(\Delta t=1/15\) to order up to \(500\) would require \(2^{500}\) terms12, which is of course intractable. Bradler's Dyck path representation (Bradler, 2015) renders the calculation tractable, but it does require High-Performance Compute Clusters. Using a smaller \(\Delta t=1/25\) with commensurately fewer time slices (to keep overall compute time constant) does not change the curves visibly, which suggests that \(\Delta t=1/15\) is sufficiently small to allow for equilibration.
Footnote 12: A Taylor expansion of \(W^{(i)}\) entails the expansion of operators of the type \(e^{A+A^{\dagger}}=\sum_{i=0}^{\infty}\frac{1}{i!}(A+A^{\dagger})^{i}\), where \(A=d^{\dagger}ab\). Because \(A\) and \(A^{\dagger}\) in general do not commute, the number of summands in \((A+A^{\dagger})^{i}\) is not \(i+1\) but instead \(2^{i}\).
While the curves shown in Fig. 6.4 used the approximation that black hole modes converted to Hawking and partner modes only interact once (by virtue of the initial state (VI.9)), it is possible to relax this assumption and allow the partner modes behind the horizon to interact with black hole modes again, as in Fig. 6.3(b). Doing so
complicates the calculation enormously so that only small systems can be evaluated, for fewer time slices. The overall shape of the entropy curves does not change appreciably when allowing partner modes to interact with the black hole repeatedly. The curves become somewhat more symmetric due to a slower rise of the entropy, and in so doing become more similar to the symmetric curves that Page had imagined. Further, while the calculation assumed that the black hole was initially in a pure state \(|n\rangle_{d}\), the results are unchanged if the initial state is instead in a rotated basis \(|\psi\rangle_{d}=\sum_{j=1}^{n}\epsilon_{j}|j\rangle_{d}\).
Technically speaking, the mapping from the black hole initial state to the final state is an example of an _erasure_ map
\[|n\rangle_{d}|0\rangle_{ab}\stackrel{{ W}}{{\longrightarrow}}| \Psi(t)\rangle_{ab}\stackrel{{ t\rightarrow\infty}}{{\approx}}|0 \rangle_{d}\otimes\rho_{ab}(|n\rangle)\] (VI.19)
that ultimately decouples the black hole from the Hawking and partner modes. It turns out that this map is an explicit realization of the fully-quantum Slepian-Wolf (FQSW) protocol (Abeyesinghe _et al._, 2009), which is the fundamental protocol in quantum information science that quantifies how well quantum entanglement can be transferred, stored, and distilled. The unitary interaction (VI.7) first creates the entanglement between the black hole (which plays the role of the reference in the FQSW protocol) and the outside and the inside of the black hole (re-enacting the parts of Alice and Bob). After the Page time, further dynamics erases the entanglement between the black hole and Hawking radiation, just as in the FQSW protocol the entanglement between Alice and the reference is erased. That the further dynamics reverses the prior entanglement is ensured by the continued unitary dynamics of the black hole's interaction with the radiation field, in the same manner as the erasure map in the FQSW protocol forces the transfer of entanglement from Alice to Bob. It is this same unitarity that enforces the existence of stimulated emission, which in turn preserves information in black hole dynamics.
It is useful to consider the dynamics of black hole formation, evaporation, up to ultimate disappearance, in terms of entropy Venn diagrams. Those diagrams summarize how classical or quantum entropies are distributed among the subsystems of a closed system. In particular, if we begin with a system in a pure state (with zero entropy) those diagrams can reveal how the purity of the system is maintained as long as the sum of all entropies remains at zero. In Box 2 I show schematically how this is indeed achieved in the scenario I have outlined here (see also (Adami and Cerf, 1999a)), and in particular suggest that the information about the black hole's formation is in fact _encrypted_ in the Hawking and partner modes (a process that is otherwise called "scrambling", see (Hayden and Preskill, 2007)). It is also clear that as a consequence, Hawking modes can be purely thermal and yet convey information about the black hole modes.
Entropy Venn diagrams are a useful tool to study how classical or quantum information is distributed among the subsystems of a (larger) composite system. Such diagrams have been used extensively to study quantum information processing and communication (Adami and Cerf, 1999b, 1997; Cerf and Adami, 1997a,b) as well as quantum experiments (Glick and Adami, 2017, 2020). In those diagrams, a circle represents a subsystem, and the intersection of this circle with another circle (subsystem) refers to the shared entropy between the two subsystems. Fig. 6.5 shows a simple bi-partite Venn diagram between systems \(X\) and \(Y\), labeling the conditional and shared entropies.
Shared pairwise entropies (such as \(H(X;Y)\) in Fig. 6.5) can never be negative (either in classical or quantum Venn diagrams), but they can exceed the entropy of \(X\) or \(Y\) in the quantum case, and conditional entropies can be negative in quantum physics (Cerf and Adami, 1997b), something that is impossible for classical (Shannon) entropies. Shared entropies between three or more systems can be negative both in classical and quantum physics.
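As a minimal numerical illustration of these statements (not taken from the text), the following sketch computes the bipartite Venn-diagram entries for a two-qubit Bell state: the conditional entropy \(H(X|Y)\) comes out negative and the shared entropy \(H(X;Y)\) exceeds \(H(X)\).

```python
import numpy as np

# Quantum Venn-diagram entries (in bits) for the Bell state (|00> + |11>)/sqrt(2).
def S(rho):
    lam = np.linalg.eigvalsh(rho)
    return float(-sum(p*np.log2(p) for p in lam if p > 1e-12))

phi = np.zeros(4); phi[0] = phi[3] = 1/np.sqrt(2)
rho_xy = np.outer(phi, phi)                                  # pure joint state
rho_x = np.einsum('ikjk->ij', rho_xy.reshape(2, 2, 2, 2))    # partial trace over Y
rho_y = np.einsum('kikj->ij', rho_xy.reshape(2, 2, 2, 2))    # partial trace over X
H_X, H_Y, H_XY = S(rho_x), S(rho_y), S(rho_xy)
print("H(X) =", H_X, " H(X|Y) =", H_XY - H_Y, " H(X;Y) =", H_X + H_Y - H_XY)
# prints 1.0, -1.0, 2.0
```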
The quantum entropy diagram for the black hole pure state (Fig. 6.6) shows the black hole entangled with a reference state \(R\) (as we did in the construction of the quantum channel, when we "purified" Alice's density matrix in Fig. 5.1). As the black hole evolves, it loses entropy due to pair formation of Hawking and partner modes at the horizon (modes annihilated by \(a_{k}\) and \(b_{k}\)). I will use the letter \(A\) to denote the Hawking modes (outside the black hole) and \(B\) to denote the partner modes (inside the black hole). They should not be confused with the "Alice" and "Bob" systems defined earlier. Assuming that the entropy of Hawking modes \(H(A)=\Delta S\) (because Hawking and partner
modes are entangled, their entropy must always be the same), the entropy Venn diagram between the black hole and radiation must be that shown in Fig. 6.7(a). Note that in this diagram the reference state R with entropy \(S\) is traced out, so that the joint state of black hole BH and the AB system must also have entropy \(S\). We can also see that the information that the Hawking and partner modes have extracted from the black hole is _encrypted_: the negativity of the shared "triplet" information \(H(BH;A;B)=-\Delta S\) is the tell-tale sign of a symmetric Vernam cipher (Shannon, 1949; Vernam, 1926): each of the three systems is the cryptographic key to unlock the information between the other two (it is easily implemented via the "controlled NOT" (CNOT) operation, for example, A=B.CNOT.BH). Incidentally, the CNOT operation is precisely the one implementing the cloning operation (IV.7). This relationship between black hole and Hawking/partner modes has previously been described as "scrambling" (Hayden and Preskill, 2007), except that in the Hayden-Preskill protocol the decoder has access to the partner modes, which is not possible here.
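A purely classical one-bit toy model (again not from the text) already displays the Vernam-cipher structure invoked here: if \(X\) and \(Y\) are independent fair bits and \(Z=X\oplus Y\), every pair of variables is uncorrelated, yet the triplet information is \(-1\) bit, so any one variable acts as the key that unlocks the correlation between the other two.

```python
import numpy as np
from itertools import product

# Classical one-time pad: X, Y independent fair bits, Z = X XOR Y (entropies in bits).
def H(probs):
    p = np.asarray([q for q in probs if q > 0])
    return float(-(p*np.log2(p)).sum())

joint = {}
for x, y in product((0, 1), repeat=2):
    joint[(x, y, x ^ y)] = 0.25

def marginal_entropy(keep):
    m = {}
    for k, p in joint.items():
        kk = tuple(k[i] for i in keep)
        m[kk] = m.get(kk, 0.0) + p
    return H(m.values())

HX, HY, HZ = (marginal_entropy((i,)) for i in range(3))
HXY, HXZ, HYZ = marginal_entropy((0, 1)), marginal_entropy((0, 2)), marginal_entropy((1, 2))
HXYZ = marginal_entropy((0, 1, 2))
print("pairwise shared entropy H(X;Y) =", HX + HY - HXY)            # 0.0
print("triplet information H(X;Y;Z) =",
      HX + HY + HZ - HXY - HXZ - HYZ + HXYZ)                        # -1.0
```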
Note further that when tracing over reference as well as partner modes, the entropy Venn diagram between Hawking radiation and the black hole indicates that they share zero information (Fig. 6.7b): one cannot be used to predict the state of the other. Yet, they are still entangled via their entanglement with the partner modes and the reference. Once \(\Delta S\) has become as large as \(S\), the entire entropy of the black hole modes has been converted, so that \(H(BH|A,B)=0\) (see Fig. 6.8). The decoupling via "entanglement erasure" has been achieved, and the remaining state is that of a fully entangled pure GHZ (Greenberger-Horne-Zeilinger) state between reference, Hawking, and partner modes.
Finally, a few words about the use of the tri-linear Hamiltonian to simulate interactions between black hole modes and radiation. Obviously, this interaction Hamiltonian does not follow from a fundamental theory of quantum gravity, but instead emerges from taking the quantum optics analogy suggested by the Bogoliubov transformation (III.1) seriously. That transformation suggested a free-field Hamiltonian (the squeezing Hamiltonian) which, in parametric down-conversion, can be extended so as to deal with a depletable pump using the term (VI.2). Alternatives to using such a simple interaction require coupling the black hole to extraneous degrees of freedom, such as for example the dilaton (as in the CGHS model (Callan _et al._, 1992)). But generally speaking (and as argued for by Strominger (Strominger, 1996)), _any_ consistent unitary theory that obeys energy conservation rules must have a tri-linear interaction term in the low-energy limit.
Figure 6.7: (a): Entropy Venn diagram of the black hole BH and the Hawking and partner modes, A and B, respectively. As the reference R is traced over, the joint entropy of BH and radiation is also \(S\). The reference state's existence is indicated by the gray area, with conditional entropy \(-S\), reminding us that the joint system is still pure. Each of the radiation modes has marginal entropy \(\Delta S\). The entropy of the black hole _given_ the radiation modes (that is, not counting their contribution to the total entropy) is \(H(BH|A,B)=S-\Delta S\). (b): Entropy Venn diagram showing that the black hole BH and the Hawking modes A are uncorrelated when tracing over reference and partner modes. This holds true also for BH and partner modes B (when tracing over Hawking modes).
Figure 6.6: Entropy Venn diagram showing the initial joint state of the black hole \(BH\) purified by the reference \(R\). The joint state has zero entropy, while the initial von Neumann entropy of the black hole is \(S\). As the black hole evolves, the entropy of the reference remains at \(S\), while the entropy of the black hole (minus the entropy of the Hawking radiation) will decrease.
## VII Discussion
In this review I have tried to marshal a set of arguments--some of which are nearly as old as Hawking's original derivation of the spontaneous emission of radiation by black holes--to support the point of view that black hole dynamics is unitary from formation to complete evaporation, and that information is not lost in, swallowed by, or destroyed in, black holes. Because of the effect of stimulated emission of radiation, which must accompany the spontaneous emission of radiation for any black body (as Einstein's original derivation showed), information is always copied at the horizon with an accuracy close to what the laws of physics allow. Stimulated emission not only ensures information preservation, it also ensures CPT invariance, as Box 1 suggests. The copying of information at the event horizon does not, however, violate the quantum no-cloning theorem, which after all is a consequence of the linearity of quantum mechanics. In fact, it is precisely the Hawking radiation that saves the no-cloning theorem, as it is Hawking radiation that prevents perfect cloning of quantum states (just as perfect cloning is impossible in quantum optics due to the "open port" in the nonlinear crystal that gives rise to vacuum fluctuations).
How is it possible that Hawking missed this effect in his initial (and also subsequent) publications on the quantum properties of black holes? It is clear that he was aware of the possibility of stimulated emission, but in (Hawking, 1975) he describes stimulated emission in terms of the phenomenon of "superradiance", which he refers to as a "classical phenomenon" (citing Refs. (Misner, 1972; Press and Teukolsky, 1972; Starobinskii, 1973)). Superradiance is a term used both in quantum optics and in astrophysics, and generally refers to the emission of radiation due to excitations that owe their energy either to inertial motion at superluminal speeds (e.g., Cerenkov radiation) or from rotational motion. In both cases, the energy powering the radiation must come from the kinetic energy of bulk matter. Soviet physicist Yakov Zeldovich had calculated that a rotating cylinder would emit radiation via stimulation (Zeldovitch, 1972), while Alexei Starobinskii argued that a rotating black hole would emit radiation due to that same effect (Starobinskii, 1973). During Hawking's well-documented visit to Moscow in 1973 he talked to Zeldovitch as well as to Starobinskii, who suggested to Hawking that stimulated (superradiant) emission must be accompanied by _spontaneous_ emission in the same modes but _without_ input from bulk kinetic energy. After the visit, Hawking engaged in the calculation we now know. His calculation, however, treated non-rotating (Schwarzschild, rather than Kerr) black holes, and for this reason it seems he discarded the possibility of stimulated emission out of hand.
In hindsight, the importance of the stimulated process should have been apparent if only Hawking had investigated the effect of particles present in the initial vacuum (at past infinity \(\mathscr{I}^{-}\)). But Hawking explicitly disregarded those. His reasons for doing so are not immediately clear. As discussed earlier, to understand the effect of gray-body factors, Hawking relied on an argument following particle trajectories backwards in time from inside the black hole to the outside. In both Refs. (Hawking, 1974) and (Hawking, 1976), Hawking claims that following particles inside the horizon at future infinity back to \(\mathscr{I}^{-}\) outside the horizon (that is, with \(v>v_{0}\) in his notation) has zero amplitude [Eq. (4.22) in (Hawking, 1976) concerns precisely such particles]. It is not clear what reasons he had for this assertion that there cannot be any particles at past infinity, but we must keep in mind that Hawking also did not seem to realize that a perfectly absorbing black hole will, when viewed from the _inside_, look like a perfectly reflecting mirror (see Box 1). For such a mirror (a white hole), those moving-backwards-in-time particles hitting the horizon would not be transmitted to past infinity, but particles produced by stimulated emission from those reflected at the white hole horizon, as discussed in section IV.2, would.
Figure 6.8: (a): Entropy Venn diagram after complete evaporation of the black hole. While the diagram shows the outline of the black hole BH, it actually ceases to be a physical entity as the horizon has disappeared. (b): The entropy Venn diagram between radiation modes and the reference after full evaporation of the black hole.
My best guess of why Hawking dismissed stimulated emission as an important element in black hole dynamics is that he thought that stimulated emission needs an energy source so as to lift the system out of its ground state. For Kerr black holes, this energy is supplied by the rotational motion. For atomic gases, inversion is created via pumping. But black holes are different: they are "forever in a pumped state": their heat capacity, after all, is negative. As a consequence, they can be stimulated to emit copies even if they lack a charge or angular momentum.
What empirical evidence do we have today that stimulated emission must play an important role in black hole dynamics? We have yet to detect Hawking radiation from black holes, and are unlikely to do so because the temperature (for average-sized black holes) is extremely low (about \(6\times 10^{-8}\frac{M_{\odot}}{M}[\text{K}]\) for a black hole of mass \(M\), where \(M_{\odot}\) is the solar mass), and the radiation is expected to be extremely faint. However, several teams have created "analogue black holes" by artificially creating the causal split between space-time regions that defines black holes, as initially suggested by Unruh (Unruh, 1981). One way to achieve this is to use gravity waves in flowing fluids. As discussed for example by Leonhardt (see in particular Fig. 8.7 in Ref. (Leonhardt, 2010)) it is possible to create horizons for gravity waves by creating a fluid flow that is faster than the maximum speed for those gravity waves. Weinfurtner et al. (Weinfurtner _et al._, 2011) tested whether a water wave directed towards a (simulated) white hole horizon stimulated pairs of waves with the correct amplitude ratio (III.7) predicted by Hawking. They found this to be the case, and argued that because stimulated emission must always be accompanied by spontaneous emission, it should be possible in principle to observe analogue Hawking radiation in this system also. This was achieved by another group, using atomic Bose-Einstein condensates as the medium (Kolobov _et al._, 2021; Munoz de Nova _et al._, 2019). In this experiment, two regions were created where the flow velocity is larger (smaller) than the speed of sound in this medium, simulating the inside (outside) of the black hole. The group observed the formation of the black hole horizon, and the subsequent emergence of Hawking and partner modes. Because the experiment produces not only a black hole horizon but also a white hole horizon (an area inside of the simulated black hole that reflects waves) the dynamics creates Cerenkov radiation that ultimately stimulates the emission of Hawking/partner pairs.
Earlier speculations (Corley and Jacobson, 1999) and experiments (Steinhauer, 2014) with Bose-Einstein condensates had suggested that perhaps the reflection of waves at the white-hole "inner" horizon could stimulate the emission of more modes at the black-hole horizon, which in turn after reflection at the white-hole horizon could lead to ever-increasing amplification of these waves: a black hole laser. In fact, this idea was introduced rather speculatively already by Press and Teukolsky (Press and Teukolsky, 1972) before the discovery of Hawking radiation. Those authors imagined a black hole encased in a spherical mirror, and radiation amplified superradiantly (in a Kerr black hole, of course) could reflect off the mirror back onto the black hole, creating an instability they termed the "black hole bomb". However, a close analysis (Wang _et al._, 2017) of Steinhauer's experiment (Steinhauer, 2014) revealed that lasing at most played a minor role in the Bose-Einstein cavity, most likely because the location of the white hole horizon with respect to the black hole horizon is constantly changing, destroying the coherence required for the lasing phenomenon.
However, these discussions open up the possibility of a black hole laser that is not due to an "inner" horizon, but rather emerges for the brief period of time when a black-hole binary is inspiraling, just before the merger event. While the period of time where the binaries are close enough so that a significant amount of stimulated radiation could hit the partner is brief (on the order of a fraction of a second for a typical binary), this is sufficient for on the order of ten reflections (in the rotating frame), leading to significant amplification13. Could such a "flash" of coherent black-hole laser light be detected at the same time that we are recording the gravitational signature of such an event? This is a difficult experimental question, since as of yet we do not have a simulation of such a phenomenon that would allow us to tune experiments to detect this signature. Such a simulation would allow us to distinguish the tell-tale coherent signature from the light emitted by accretion disks (if any). To date, no electromagnetic signature has been observed coming from regions where a binary black-hole merger was pinpointed to. This suggests that possibly those mergers do not have accretion disks, and consequently there would also be no source of radiation/matter that could initiate the black hole laser. Should we observe an electromagnetic counterpart to a binary black-hole merger in the future, it may be worthwhile to develop techniques that can test whether that light was due to the black-hole-induced stimulated emission of radiation, giving us the first direct experimental verification of (stimulated) Hawking radiation.
Footnote 13: An even more speculative notion is the idea that a wormhole that connects two black holes could create a cavity that would coherently amplify radiation trapped within it, giving rise to a _wormhole laser_.
**Funding statement** This research was unsupported.
###### Acknowledgements.
I am indebted to my collaborators in black hole information theory: Greg Ver Steeg and Kamil Bradler. I also thank numerous friends and colleagues who patiently listened to my arguments that black holes do not violate any laws: Paul Davies, Nigel Goldenfeld, Nicolas Cerf, Arend Hintze, Claus Wilke, and Richard Lenski. I dedicate this contribution to the late Jonathan P. Dowling, who led the quantum computing group at the Jet Propulsion Laboratory where the ideas presented in this article were first hatched.
## Appendix A Density matrix of outgoing radiation for arbitrary absorptivity
The density matrix of outgoing radiation in region I when \(m\) particles are incident (notation \(k|m\)) can be calculated from the outgoing density matrix \(|\psi\rangle_{\rm out}\langle\psi|\) (constructed from the out-state Eq. (III.30)) by repeated application of the disentangling theorems for SU(2) and SU(1,1), and tracing over the degrees of freedom of region II (modes \(b_{k}\) and \(c_{k}\)). If no antiparticles are accreting on the horizon, the antiparticle part of the density matrix factorizes again and we can write \(\rho_{\rm I}=\rho_{k|m}\otimes\rho_{-k|0}\) where (I omit the index \(k\) in the particle numbers and the coefficients \(\alpha\), \(\beta\) and \(\gamma\) for succinctness in the following)
\[\rho_{k|m}=\sum_{n=0}^{\infty}p(n|m)|n\rangle\langle n|\;,\] (A.1)
with
\[p(n|m) = Z_{m}^{2}\sum_{l=0}^{\infty}\frac{n!\,l!}{m!(l-m+n)!}\left[\sum_{ i=0}^{\min(n,m)}(-1)^{i}{m\choose i}{l-m+n\choose n-i}\Big{(}\frac{\gamma^{2}} {\alpha(1+\alpha)}\Big{)}^{i}\right]^{2}\] \[\times \Big{[}\frac{\beta^{2}(1+\alpha)^{2}}{(1+\alpha+\beta^{2})^{2}} \Big{]}^{n}\Big{[}\frac{\beta^{2}\gamma^{2}}{(1+\alpha+\beta^{2})^{2}}\Big{]}^ {l}\]
where
\[Z_{m}^{2}=\left(\frac{1+\alpha}{1+\alpha+\beta^{2}}\right)^{2}\left(\frac{ \alpha^{2}(1+\alpha)^{2}}{\gamma^{2}}\right)^{m}\;.\] (A.3)
It is possible to rewrite this expression14 for \(p(n|m)\), the probability to detect \(n\) outgoing particles at \(\mathscr{I}^{+}\) if \(m\) were incident on the static black hole, by using a resummation technique described by Panangaden and Wald (Panangaden and Wald, 1977) to read
Footnote 14: Converting the double sums in (A.2) into the single sum in (A.4) is highly non-trivial and took two years to discover after the identity was confirmed numerically by G. Ver Steeg.
\[p(n|m)=R_{nm}\sum_{i=0}^{\min(n,m)}(-1)^{i}{m\choose i}{m+n-i\choose n-i} \left(1-\frac{\gamma^{2}}{\alpha^{2}\beta^{2}}\right)^{i}\] (A.4)
with
\[R_{nm}=\frac{1}{1+\beta^{2}}\left(\frac{\beta^{2}}{1+\beta^{2}}\right)^{m+n} \left(\frac{\alpha^{2}}{\beta^{2}}\right)^{m}\;,\] (A.5)
Expression (A.4) agrees precisely with the conditional probability derived by Bekenstein and Meisels (Bekenstein and Meisels, 1977) using maximum entropy methods, and by Panangaden and Wald (Panangaden and Wald, 1977) in quantum field theory (but using a very different method than the one described here). While the expression given by Bekenstein and Meisels looks quite different from (A.4) as they write (with \(x=\omega/T_{\rm BH}\) as before)
\[p(n|m)=\frac{(e^{x}-1)e^{mx}\,\Gamma^{n+m}}{(e^{x}-1+\Gamma)^{n+m+1}}\sum_{i=0}^{\min(n,m)}\frac{(-1)^{i}(m+n-i)!}{i!(m-i)!(n-i)!}\left[1-2\frac{1-\Gamma}{\Gamma^{2}}(\cosh(x)-1)\right]^{i}\;,\] (A.6)
we can nevertheless see that it agrees with (A.4) (and therefore also with (A.2)) by noting, for example, that
\[2\frac{1-\Gamma}{\Gamma^{2}}(\cosh(x)-1)=\frac{\gamma^{2}}{\alpha^{2}\beta^{2}}. \tag{A.7}\]
and
\[\frac{(e^{x}-1)e^{mx}\,\Gamma^{n+m}}{(e^{x}-1+\Gamma)^{n+m+1}}=R_{nm}. \tag{A.8}\]
The expression (A.4) has the advantage that it manifestly observes the detailed balance conditions, something that is not immediately apparent from Eqs. (A.2) and (A.6).
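The agreement and the detailed balance property can also be checked numerically. The following sketch (not part of the appendix) uses illustrative values of \(x\) and \(\Gamma\), together with the parametrization \(\beta^{2}=\Gamma/(e^{x}-1)\), \(\alpha^{2}=e^{x}\beta^{2}\) and \(\gamma^{2}\) from (A.7), which is the identification forced by matching (A.5) with (A.8).

```python
import numpy as np
from math import comb

# Numerical check of Eq. (A.4) with illustrative parameters (not from the text):
# normalization sum_n p(n|m) = 1 and detailed balance p(n|m) e^{-m x} = p(m|n) e^{-n x}.
x, Gamma = 1.0, 0.7                          # x = omega/T_BH, Gamma = gray-body factor
beta2 = Gamma/np.expm1(x)                    # beta^2, from matching (A.5) with (A.8)
alpha2 = np.exp(x)*beta2                     # alpha^2/beta^2 = e^x
gamma2 = 2*alpha2*beta2*(1 - Gamma)*(np.cosh(x) - 1)/Gamma**2   # Eq. (A.7); equals 1 - Gamma

def R(n, m):
    return (1/(1 + beta2))*(beta2/(1 + beta2))**(m + n)*(alpha2/beta2)**m   # Eq. (A.5)

def p(n, m):
    w = 1 - gamma2/(alpha2*beta2)
    s = sum((-1)**i*comb(m, i)*comb(m + n - i, n - i)*w**i for i in range(min(n, m) + 1))
    return R(n, m)*s                                                        # Eq. (A.4)

for m in (0, 1, 3):
    print("m =", m, " sum_n p(n|m) =", sum(p(n, m) for n in range(400)))   # each ~ 1
n, m = 4, 2
print(p(n, m)*np.exp(-m*x), p(m, n)*np.exp(-n*x))   # equal: detailed balance
```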
|
2310.10472 | Pointwise modulus of continuity of the Lyapunov exponent and integrated
density of states for analytic multi-frequency quasiperiodic $M(2,
\mathbb{C})$ cocycles | It is known that the Lyapunov exponent for multifrequency analytic cocycles
is weak-H\"older continuous in cocycle for certain Diophantine frequencies, and
that this implies certain regularity of the integrated density of states in
energy for Jacobi operators. In this paper, we establish the pointwise modulus
of continuity in both cocycle and frequency and obtain analogous regularity of
the integrated density of states in energy, potential, and frequency. | Matthew Powell | 2023-10-16T14:52:08Z | http://arxiv.org/abs/2310.10472v1 | Pointwise modulus of continuity of the Lyapunov exponent and integrated density of states for analytic multi-frequency quasiperiodic \(M(2,\mathbb{C})\) cocycles
###### Abstract.
It is known that the Lyapunov exponent for multifrequency analytic cocycles is weak-Holder continuous in cocycle for certain Diophantine frequencies, and that this implies certain regularity of the integrated density of states in energy for Jacobi operators. In this paper, we establish the pointwise modulus of continuity in both cocycle and frequency and obtain analogous regularity of the integrated density of states in energy, potential, and frequency.
## 1. Introduction
In this paper, we are interested in the regularity of the Lyapunov exponent associated to multifrequency quasi-periodic cocycles. Let \(M(2,\mathbb{C})\) denote the set of \(2\times 2\) matrices with complex entries. Let \(\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}\) denote the \(d\)-dimensional torus and let \(S_{\omega}:\mathbb{T}^{d}\to\mathbb{T}^{d}\) denote the shift by \(\omega\in\mathbb{T}^{d}:x\mapsto x+\omega.\) A \(d\)-dimensional quasi-periodic cocycle is a pair \((A,\omega)\in C(\mathbb{T}^{d},M(2,\mathbb{C}))\times\mathbb{T}^{d}\) understood as a linear skew product \((A,\omega):\mathbb{C}^{2}\times\mathbb{T}^{d}\to\mathbb{C}^{2}\times\mathbb{T }^{d}\) with
\[(w,x)\mapsto(A(x)w,S_{\omega}x).\]
Cocycles enjoy the property that they may be iterated, in the following sense: the \(N^{th}\) iterate of \((A,\omega)\) is
\[A_{N}(x,\omega)=\prod_{j=N-1}^{0}A(S_{\omega}^{j}x).\]
We are interested in _analytic quasiperiodic cocycles_, so we assume \(A\) is an analytic \(M(2,\mathbb{C})\)-valued function on \(\mathbb{T}^{d}.\) It is known that continuity of the Lyapunov exponent fails, in general, when \(A\) is only \(C^{\infty}\) (c.f. [18] or [25]). Since we assume that \(A\) is analytic on \(\mathbb{T}^{d},\) we may extend \(A\) to some complex strip, \(|\Im(z_{j})|<\rho,j\leqslant d.\) We denote the space of such \(A\) by \(C_{\rho}(\mathbb{T}^{d},M(2,\mathbb{C})),\) and imbue this with the natural metric
\[\left\|A\right\|_{\rho}:=\sup_{|\Im(z_{j})|<\rho/2}\left\|A(z_{1},...,z_{d}) \right\|.\]
**Remark 1**.: Note that we take the supremum over \(|\Im(z_{j})|<\rho/2\) rather than over \(|\Im(z_{j})|<\rho\) to ensure that the supremum exists. We may do this because \(A\) is also an analytic cocycle over the closed set \(\left\{z\in\mathbb{C}^{d}:|\Im(z_{j})|\leqslant\rho/2\right\}.\)
From this, we can inductively define a topology on the space of _all_ analytic cocycles, but since convergence in this topology is equivalent to convergence in \((C_{\rho},\left\|\cdot\right\|_{\rho})\) for some \(\rho>0,\) we will restrict our attention to \(C_{\rho}\) for a fixed \(\rho.\)
Cocycles of this form have been used extensively to study one-dimensional discrete Jacobi operators:
\[H_{x,\omega}:\ell^{2}(\mathbb{Z})\rightarrow\ell^{2}(\mathbb{Z})\]
given by
\[(H_{x,\omega}\psi)(n)=\overline{a(S_{\omega}^{n-1}x)}\psi(n-1)+a(S_{\omega}^{n}x)\psi(n+1)+v(S_{\omega}^{n}x)\psi(n), \tag{1}\]
where \(a,v\in C(\mathbb{T}^{d},\mathbb{R}).\) Solutions to the eigenequation \(H_{x,\omega}\psi=E\psi\) may be recovered via the \(N^{th}\) transfer matrix
\[A_{N}(x,\omega,E)=\prod_{j=N-1}^{0}M_{j}(x,\omega,E),\]
where
\[M_{j}(x,\omega,E)=\begin{pmatrix}E-v(S_{\omega}^{j+1}x)&-\overline{a(S_{\omega }^{j}x)}\\ a(S_{\omega}^{j+1}x)&0\end{pmatrix}.\]
Indeed, any solution to \(H_{x,\omega}\psi=E\psi\) satisfies
\[\begin{pmatrix}\psi(N+1)\\ \psi(N)\end{pmatrix}=A_{N}(x,\omega,E)\begin{pmatrix}\psi(1)\\ \psi(0)\end{pmatrix}.\]
Moreover, the transfer matrix \(A_{N}\) is a classic example of an iterate of a quasiperiodic cocycle, obtained by taking \(A(x,\omega)=M_{1}(x,\omega,E).\)
Now observe that any \(M(2,\mathbb{C})\) cocycle \(A(x)\) for which \(\det(A(x))\) is not identically zero can be renormalized to form an \(SL(2,\mathbb{C})\) cocycle (see e.g. [17]); however, the resulting cocycle will lose pointwise boundedness if \(\det(A(x))\) has zeros. This is precisely the nature of the difficulty when extending \(SL(2,\mathbb{C})\) results to the \(M(2,\mathbb{C})\) case.
The (upper) Lyapunov exponent is defined as
\[L^{\prime}(A,\omega)=\lim_{N\rightarrow\infty}\frac{1}{N}\int_{\mathbb{T}^{d}}\ln\left\|A_{N}(x,\omega)\right\|dx. \tag{2}\]
Note that, while \(L^{\prime}(A,\omega)\) need not be non-negative, the related object
\[L(A,\omega)=\lim_{N\rightarrow\infty}\int_{\mathbb{T}^{d}}L_{N}(\tilde{A}, \omega,x)dx, \tag{3}\]
is, where \(L_{N}(A,\omega,x):=\frac{1}{N}\ln\left\|A_{N}(x,\omega)\right\|\) and \(\tilde{A}\in SL(2,\mathbb{C})\) is a renormalization of \(A:\)
\[\tilde{A}=\frac{1}{|\det A|^{1/2}}A. \tag{4}\]
Moreover, \(L_{N}\) and \(L_{N}^{\prime}\) are related by the following relation:
\[L_{N}(A,\omega)=L_{N}^{\prime}(A,\omega)-\frac{1}{2}\int_{\mathbb{T}^{d}}\ln|\det (A(x))|dx. \tag{5}\]
It follows that, when \(\ln|\det(A(x))|\in L^{1},\) both \(L\) and \(L^{\prime}\) share the same regularity properties.
It is often easier to deal with \(L^{\prime}\) when proving general boundedness and finite-scale continuity (see Sections 2 and 3), but it is easier to deal with a non-negative quantity when proving and using our induction scheme (see Sections 4 and 5). Thus both \(L\) and \(L^{\prime}\) play a role in this paper.
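To make these definitions concrete, the following is a minimal numerical sketch of the finite-scale exponent \(L^{\prime}_{N}\) for a Jacobi cocycle of the form (1); the specific choices \(a(x)=\cos(2\pi x)\), \(v(x)=2\cos(2\pi x)\), \(E=0.5\) and \(\omega=\sqrt{2}\bmod 1\) are hypothetical and only serve as an example of a singular but not identically singular cocycle. The running renormalization tracks \(\ln\|A_{N}(x,\omega)\|\) exactly while avoiding overflow, and \(L_{N}\) is then recovered from \(L^{\prime}_{N}\) via (5).

```python
import numpy as np

# Illustrative (hypothetical) data: a(x) vanishes somewhere, so the cocycle is
# singular but not identically singular; E and omega are arbitrary choices.
a = lambda x: np.cos(2*np.pi*x)
v = lambda x: 2.0*np.cos(2*np.pi*x)
E, omega = 0.5, np.sqrt(2.0) % 1.0

def M(x):
    # one-step transfer matrix M_1(x, omega, E), d = 1
    return np.array([[E - v(x + omega), -np.conj(a(x))],
                     [a(x + omega),      0.0]])

def LN_prime(N, num_x=500, seed=0):
    # L'_N(A, omega): average over x of (1/N) * log ||A_N(x, omega)||
    rng = np.random.default_rng(seed)
    vals = []
    for x in rng.random(num_x):
        A, lognorm = np.eye(2), 0.0
        for j in range(N):
            A = M(x + j*omega) @ A
            s = np.linalg.norm(A, 2)
            A, lognorm = A/s, lognorm + np.log(s)   # exact bookkeeping of log||A_N||
        vals.append(lognorm/N)
    return float(np.mean(vals))

# L_N follows from L'_N via (5): subtract (1/2) * int log|det A(x)| dx,
# estimated here by a simple Riemann sum (det M(x) = conj(a(x)) * a(x + omega)).
xs = (np.arange(4000) + 0.5)/4000
log_det = np.mean(np.log(np.abs(np.conj(a(xs))*a(xs + omega))))
for N in (20, 40, 80):
    LNp = LN_prime(N)
    print(N, LNp, LNp - 0.5*log_det)
```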
Returning to (1), an object related to the Lyapunov exponent for such operators is the integrated density of states (IDS), which maybe generally defined as in [7]. Let \(E_{1}<E_{2}\) and define
\[k(x,\omega,E_{1},E_{2})=\limsup_{N\to\infty}\frac{1}{2N+1}\#\left\{\text{eigenvalues of }R_{[-N,N]}H_{x,\omega}R_{[-N,N]}\text{ in }[E_{1},E_{2}]\right\},\]
where \(R_{[-N,N]}\) denotes the projection onto the interval \([-N,N].\) In our setting, this \(\limsup\) is constant for Lebesgue a.e. \(x.\) Clearly one may ask about the regularity of this object in the various parameters, \(V,E_{1},E_{2},\omega.\) We obtain such a statement as a consequence of our main theorem (see Corollary 1.1).
The continuity of the Lyapunov exponent (in both cocycle and frequency) in this setting has been studied extensively in [20], where the author adapted an argument of Bourgain originally used to study \(SL(2,\mathbb{C})\) cocycles to study singular cocycles.
The (pointwise) modulus of continuity of the Lyapunov exponent for one-frequency quasi-periodic cocycles has been studied by many authors. While the Lyapunov exponent is known to be continuous in very general settings, the (pointwise) modulus of continuity is a more delicate matter. This question was first studied in [14] for one-frequency (the underlying torus is \(\mathbb{T}^{1}\)) Schrodinger operators with fixed strongly diophantine frequency \(\omega.\) There, the authors also obtained an analogous modulus of continuity for the IDS in the same setting. A key component of this proof was an a priori positivity assumption: \(L(E)>0.\) Shortly afterwards, the Lyapunov exponent for Schrodinger cocycles was shown to be continuous in both energy (for all frequencies) and frequency (at irrational frequencies) without positivity or Diophantine assumptions [6]. This work was unable to obtain a modulus of continuity, however. It turns out that any modulus of continuity better than log-Holder requires both positivity and some arithmetic (perhaps weak) assumption [4].
Many authors have worked to extend various results from [14] with some success. The modulus of continuity of the Lyapunov exponent in the one-frequency setting has been extended to singular cocycles and fixed weaker diophantine frequencies [22]; the multifrequency case has proved more delicate, and the known results still require some restrictive condition on the
frequency [11]. The modulus for both the Lyapunov exponent and the IDS has also been obtained and/or sharpened in a variety of settings closely related to the Schrodinger case [1, 2, 8, 9, 10, 12, 13, 15, 16, 23, 24, 26].
We prove the following.
**Theorem 1.1**.: _Let \(A(x)\in C_{\rho}\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with a plurisubharmonic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that, for some \(\tau>0\) and \(\sigma\geq 1,\)\(\|k\cdot\omega\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|,\) and suppose that \(L(A,\omega)>0.\) Then there are \(\delta=\delta(A)\) and \(\gamma=\gamma(d,\sigma)\leq 1\) such that, for every \(A^{\prime}\in C_{\rho}\) and \(\omega^{\prime}\in\mathbb{T}^{d}\) with \(\left\|A-A^{\prime}\right\|_{\rho}+\left\|\omega-\omega^{\prime}\right\|<\delta,\)_
\[\left|L(A,\omega)-L(A^{\prime},\omega^{\prime})\right|<C(A)\exp\left\{-c(A) \left(-\ln\left(\left\|A-A^{\prime}\right\|_{\rho}+\left\|\omega-\omega^{ \prime}\right\|\right)\right)^{\gamma}\right\}. \tag{6}\]
We would like to make a few remarks at this point.
**Remark 2**.: Generally, a pointwise modulus of continuity of the form (6) is called pointwise \(\gamma\)-weak-Holder continuity.
**Remark 3**.: The exponent \(\gamma\) above may be computed explicitly in terms of the Diophantine parameter \(\sigma\) and an exponent from a large deviation estimate, which depends only on the dimension, \(d,\) of the torus \(\mathbb{T}^{d}.\) In particular, improvement of the large deviation estimate would lead to improvement of the exponent \(\gamma,\) which has been remarked in both [14] and [11].
**Remark 4**.: Notice that, while \(\omega\) must satisfy \(\omega\in\mathbb{T}^{d}\) is such that \(\left\|k\cdot\omega\right\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|,\) the frequency \(\omega^{\prime}\) need not. This is a key improvement on [24], where \(\omega^{\prime}\) had to also satisfy the same condition, and is a consequence of our argument from [20].
In the remainder of this paper, when there can be no ambiguity, we will write \(\left\|A\right\|\) in place of \(\left\|A(x)\right\|=\left\|A(x)\right\|_{\rho}.\)
As a corollary, we obtain analogous continuity for the IDS.
**Corollary 1.1**.: _Let \(H_{x,\alpha}\) be as in (1), with \(V\in C^{\omega}(\mathbb{T}^{d},\mathbb{R})\) a not identically singular analytic function on \(\mathbb{T}^{d}.\) Suppose that \(\alpha\in\mathbb{T}^{d}\) is such that \(\left\|k\cdot\alpha\right\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|,\) and suppose \(L(E)>0.\) Let \(k(x,\alpha,E_{1},E_{2})\) be defined as above. Then for a.e. \(x,\)\(k(\alpha,E_{1},E_{2})\) obeys_
\[\left|k(\alpha,E_{1},E_{2})\right|<\exp\left\{-\left(-\ln\left(\left|E_{1}-E_ {2}\right|\right)\right)^{\gamma}\right\}.\]
This may be seen either as a consequence of Theorem 2.1 via an argument of Bourgain [3] (and extended first by Schlag [21] and later by Liu [19] for any Schrodinger operator with a large deviation estimate), or as a consequence of Theorem 1.1 by an argument of Goldstein-Schlag using the Thouless formula, which generalizes to multifrequency Jacobi operators. As these arguments are standard and not new, we refer readers to those works for details.
The rest of this paper is organized as follows. In Section 2, we recall a few well-known results for subharmonic functions and the finite-scale Lyapunov
exponents, which we use throughout. Then we prove a modulus of continuity for the finite-scale Lyapunov exponent in Section 3. In Section 4, we provide an inductive procedure to obtain a locally uniform rate of convergence for the Lyapunov exponent when the frequency is fixed. In Section 5, we combine the uniform rate of convergence with modulus of continuity for the finite scale Lyapunov exponents to obtain corresponding modulus of continuity for the Lyapunov exponent in both frequency and cocycle.
## 2. Preliminaries: uniform bounds on Lyapunov exponents, large deviations, and Avalanche Principle
Throughout this paper, we use \(C\) and \(C(\cdot)\) to denote large constants which depend on uniform measurements of the cocycle and dimension, \(c\) and \(c(\cdot)\) to denote small constants which depend on uniform measurements of the cocycle and dimension, and \(C_{\rho}\) to denote constants which depend on the parameter \(\rho.\) Unless otherwise stated, these constants may change by multiplicative constants throughout their appearance in a proof, but will remain finite, non-zero, and uniform in their respective parameters.
This section is devoted to a few essential preliminary results which will be used in later sections. They may be found in a variety of other papers, and are provided here without proof except where the proof introduces ideas useful in the study of singular cocycles.
We begin with a uniform version of the Lojasiewicz inequality, which is used throughout and may be of independent interest to readers.
**Lemma 2.1** ([11] Lemma 6.1).: _Let \(f(x)\in C_{\rho}(\mathbb{T}^{d},\mathbb{C})\) be such that \(f(x)\) is not identically zero. Then there are constants \(\delta=\delta(f)>0,S=S(f)<\infty,\) and \(b=b(f)>0\) such that if \(g(x)\in C_{\rho}(\mathbb{T}^{d},\mathbb{C})\) with \(\left\|g-f\right\|_{\rho}<\delta,\) then_
\[\left|\left\{x\in\mathbb{T}^{d}:\left|g(x)\right|<t\right\}\right|<St^{b} \tag{7}\]
_for all \(t>0.\)_
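As a quick illustration of (7) (not from the paper), one can estimate the measure of the sublevel sets numerically for a concrete analytic function; for \(f(x)=\cos(2\pi x)\) on \(\mathbb{T}^{1}\) the measure behaves like \((2/\pi)t\), so (7) holds with \(b=1\) in this example.

```python
import numpy as np

# Numerical illustration (hypothetical example) of the Lojasiewicz-type bound (7)
# for f(x) = cos(2*pi*x) on the one-dimensional torus:
# |{x : |f(x)| < t}| ~ (2/pi) * t, i.e. S * t^b with b = 1.
xs = (np.arange(2_000_000) + 0.5) / 2_000_000
f = np.cos(2*np.pi*xs)
for t in (1e-1, 1e-2, 1e-3):
    measure = np.mean(np.abs(f) < t)
    print(f"t={t:.0e}  measure={measure:.3e}  measure/t={measure/t:.4f}")   # ratio ~ 2/pi
```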
This has, as a consequence, uniform \(L^{2}\)-boundedness of \(L_{N}(A,\omega)\) in \(N,\omega,\) and locally in \(A.\) This result may also be found in [11], but our proof differs from the one therein.
**Lemma 2.2**.: _Let \((A,\omega)\) be an analytic \(M(2,\mathbb{C})\) cocycle for which \(\det A(x)\) is not identically zero. There is \(\delta=\delta(A)>0\) and \(C=C(A)\) such that, for every cocycle \((B,\omega^{\prime})\) such that \(\left\|A-B\right\|_{\rho}+\left\|\omega-\omega^{\prime}\right\|<\delta,\)_
\[\left\|L_{N}(B,\omega^{\prime})\right\|_{L^{2}}<C(A).\]
Proof.: Note that it suffices to prove that \(L_{N}^{\prime}(B,\omega^{\prime},x)\) and \(\ln|\det B(x)|\) obey this \(L^{2}\)-estimate. Clearly we have
\[\int_{\mathbb{T}^{d}}\ln|\det B(x)|=\int_{|\det B(x)|\geq 1}+\sum_{j=0}^{ \infty}\int_{2^{-j}>|\det B(x)|\geq 2^{-j-1}}.\]
Moreover, since \(A(x)\) is analytic, there is some \(C(A)<\infty\) such that \(\left\|A(x)\right\|_{\rho}<C.\) Thus, for any \(B\) such that \(\left\|A-B\right\|_{\rho}<C/2,\) we have
\[\left\|B(x)\right\|_{\rho}<2C.\]
It follows that
\[\left|\det B(x)\right|<4C^{2}.\]
On \(\left\{x:\epsilon\leq\left|\det B(x)\right|<1\right\},\)
\[\left|\ln\left|\det B(x)\right|\right|<-\ln\epsilon.\]
Hence, on \(\left\{x:2^{-j-1}\leq\left|\det B(x)\right|<2^{-j}\right\},\)
\[\left|\ln\left|\det B(x)\right|\right|<(j+1)\ln 2.\]
By Lemma 2.1, we have, for \(\left\|A-B\right\|_{\rho}<\delta(A)\)
\[\left|\left\{x:2^{-j-1}\leq\left|\det B(x)\right|<2^{-j}\right\}\right|<S(A) \left(2^{-j}\right)^{b}.\]
Altogether, this yields
\[\int_{\mathbb{T}^{d}}\left|\ln\left|\det B(x)\right|\right|^{2}dx\leq 4(\ln 4C(A))^{2}+S(A)\sum_{j=0}^{\infty}2^{-bj}\left((j+1)\ln 2\right)^{2}<C(A).\]
Now consider \(L^{\prime}_{N}(B,\omega^{\prime},x).\) We claim that, for some \(C=C(A)\) and \(\delta>0,\) and every \(B\) such that \(\left\|A-B\right\|_{\rho}<\delta,\) we have
\[\left|L^{\prime}_{N}(B,\omega^{\prime},x)\right|\leq C+\frac{1}{N}\sum_{j=0}^{ N-1}\left|\ln\left|\det(B(x+j\omega^{\prime}))\right|\right|.\]
The desired conclusion quickly follows from this inequality and the above estimate for \(\ln\left|\det B(x)\right|\), so we turn our attention to a proof.
Let \(\delta>0\) be such that Lemma 2.1 holds for the analytic functions \(\det B(x)\) when \(\left\|B-A\right\|_{\rho}<\delta.\) Observe that there is \(C=C(A)>0\) such that \(\left\|A\right\|_{\rho}<e^{C},\) so \(\frac{1}{N}\ln\left\|A_{N}\right\|_{\rho}\leq C.\) By our assumptions on \(\left\|B-A\right\|_{\rho},\) it follows that
\[\left\|B\right\|_{\rho}<e^{2C},\]
and thus
\[\frac{1}{N}\ln\left\|B_{N}(x)\right\|<2C.\]
Now we consider two sets:
\[F_{+}:=\left\{x:\ln\left\|B_{N}(x)\right\|\geq 0\right\}\]
and
\[F_{-}:=\left\{x:\ln\left\|B_{N}(x)\right\|<0\right\}.\]
Clearly, for \(x\in F_{+},\) the desired inequality holds. Consider \(x\in F_{-}.\) Recall that, for any \(2\times 2\) matrix \(A,\) we have
\[\left|\det(A)\right|\leq\left\|A\right\|^{2},\]
so it follows that
\[\frac{1}{N}\ln\|B_{N}(x)\|\geqslant\frac{1}{2N}\ln|\det(B_{N}(x))|=\frac{1}{2N} \sum_{j=0}^{N-1}\ln|\det(B(x+j\omega))|.\]
For \(x\in F_{-}\), this implies
\[|L_{N}^{\prime}(B,\omega,x)|\leqslant\left|\frac{1}{2N}\sum_{j=0}^{N-1}\ln| \det(B(x+j\omega))|\right|,\]
which, after applying triangle inequality, is our desired bound.
We conclude this section with two essential results which will be used in the sequel. The first is a so-called _large deviation estimate_ and the second is a consequence of the Avalanche Principle which, in this form, was originally due to Bourgain for \(SL(2,\mathbb{C})\) cocycles, but extended to non-identically singular \(M(2,\mathbb{C})\) cocycles by us in [20]. We refer readers to either [5] (for the \(SL(2,\mathbb{C})\) case) or [20] (for the \(M(2,\mathbb{C})\) case) for proofs.
**Theorem 2.1**.: _[_[_20_]_ _Theorem 2.2]_ _Let \((A,\omega)\) be an analytic \(M(2,\mathbb{C})\) cocycle, with an analytic extension to \(|\Im(z_{j})|<\rho,\) for which \(\det A(x)\) is not identically zero. Suppose \(\omega\in\mathbb{T}^{d}\) is such that_
\[\|k\cdot\omega\|>\delta_{0}\]
_for all \(0<|k|<K_{0}.\) Moreover, suppose_
\[N>K_{0}\delta_{0}^{-1}.\]
_Then there are \(\delta=\delta(A)>0,\)\(c=c(d)<1,\) and \(C_{\rho}<\infty\) such that for any \(B\) with an analytic extension to \(|\Im(z_{j})|<\rho\) satisfying \(\left\|B-A\right\|_{\rho}<\delta,\)_
\[\left|\left\{x\in\mathbb{T}^{d}:|L_{N}(B,x)-L_{N}(B)|>\rho^{-1}K_{0}^{-c} \right\}\right|<e^{-C_{\rho}K_{0}^{c}}. \tag{8}\]
**Theorem 2.2**.: _[_20_]_ _Theorem 4.3]_ _Let \((A,\omega)\) be an analytic \(M(2,\mathbb{C})\) cocycle with an analytic extension to \(|\Im(z_{j})|<\rho.\) Fix \(x\in\mathbb{T}^{d}\) and \(\delta>0.\) Let \(N>N_{0}(\delta)\) be sufficiently large and \(N|N_{1}\) with \(N\leqslant N_{1}.\) Suppose that_
\[L_{N}(A,x)>\delta \tag{9}\] \[|L_{N}(A,x)-L_{2N}(A,x)|<\frac{1}{100}L_{N}(A,x) \tag{10}\] \[\max_{n=N,2N}|L_{n}(A,x)-L_{n}(A,x+jN\omega)|<\frac{\delta}{100}, \tag{11}\]
_for all \(j\leqslant N_{1}/N.\) Then_
\[\begin{split}\left|L_{N_{1}}(A,x)+\frac{1}{n}\sum_{j=0}^{n-1}L_{N}(A,x+jN\omega)-\frac{2}{n}\sum_{j=0}^{n-1}L_{2N}(A,x+jN\omega)\right|\\ &<\exp\left(-\frac{N}{4}L_{N}(x)\right)+CL_{N}(A,x)\frac{N}{N_{1}}.\end{split} \tag{12}\]
_Here \(C\) is an absolute constant and \(n=N_{1}/N\)._
**Remark 5**.: In the above, \(N_{0}\) is such that \(N_{0}\delta>2.\)
## 3. Finite-scale weak-Holder continuity
In the Schrodinger cocycle case (and the \(SL(2,\mathbb{C})\) case more generally), one of the key observations is that, for fixed \(N,\)\(L_{N}(A,\omega)\) is jointly continuous in \(A\) and \(\omega\) for any \(\omega.\) This is a simple consequence of the everywhere invertibility of the cocycle, \(A.\) In this section, we prove an analogous result for non-identically-singular cocycles. Indeed, our strategy hinges on this fact, as we observe that, given a sequence of continuous functions (in this case, \(L_{N}(A,\omega)),\) the continuity of the limiting object (in this case, \(L(A,\omega)\)) may be obtained by quantitatively estimating the rate of convergence (\(L_{N}\to L\)) uniformly in \(A\) and \(\omega.\) This argument does not work if \(L_{N}(A,\omega)\) is not continuous!
We do not claim that this result is novel; it is included for completeness, and because the proof introduces some ideas which are useful for studying singular cocycles.
**Theorem 3.1**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with an analytic extension to the strip \(|\Im(z_{j})|<\rho.\) There are \(\delta_{1}(A)>0\) and \(b(A)>0\) such that for any \(\alpha>1,\) whenever \(\|A-B\|<\delta_{1},\) we have_
\[\begin{split}|L_{N}^{\prime}(A,\omega)-L_{N}^{\prime}(B,\omega^{ \prime})|&<C(A)e^{-2N^{\alpha-1}b(A)}\\ &\quad+e^{2N^{\alpha}}\left(\|A-B\|+N\left\|\omega-\omega^{\prime }\right\|\right).\end{split} \tag{13}\]
_Moreover,_
\[\left|\int_{\mathbb{T}^{d}}\ln|\det A(x)|-\ln|\det B(x)|dx\right|<C(A)\left\| A-B\right\|^{b/(1+b)}. \tag{14}\]
**Remark 6**.: Note that this implies, by the definition of \(L_{N}\) and \(L_{N}^{\prime},\) that
\[\begin{split}|L_{N}(A,\omega)-L_{N}(B,\omega^{\prime})|& <C(A)e^{-2N^{\alpha-1}b(A)}\\ &\quad+e^{N^{\alpha}}\left(\left\|A-B\right\|+N\left\|\omega- \omega^{\prime}\right\|\right).\end{split} \tag{15}\]
Proof.: We first prove the result for \(L_{N}^{\prime}(A).\) The result for the determinant follows from an analogous argument, and we will discuss how to adjust the proof afterwards.
Let \(C_{0}(A)\) be such that \(\|A(x)\|\leq e^{C_{0}}.\) For \(\left\|A-B\right\|\) sufficiently small, depending only on \(C_{0},\)\(\|B(x)\|\leq e^{2C_{0}}.\) Fix \(\delta_{0}>0\) and set
\[F_{A}(\delta_{0}) =\left\{x\in\mathbb{T}^{d}:\|A_{N}\|<\delta_{0}\right\} \tag{17}\] \[F_{B}(\delta_{0}) =\left\{x\in\mathbb{T}^{d}:\|B_{N}\|<\delta_{0}\right\}. \tag{16}\]
Clearly
\[\int_{\mathbb{T}^{d}}=\int_{F_{A}\cap F_{B}}+\int_{F_{A}^{c}\cap F_{B}}+\int_{ F_{A}\cap F_{B}^{c}}+\int_{F_{A}^{c}\cap F_{B}^{c}}.\]
Note that \(|\det(A_{N}(x))|\leqslant\|A_{N}(x)\|^{2}\) for all \(x.\) Thus
\[F_{A}(\delta_{0})\subset\left\{x:|\det A_{N}(x)|<\delta_{0}^{2}\right\}\subset\bigcup_{j=0}^{N-1}\left\{x:|\det A(x+j\omega)|<\delta_{0}^{2/N}\right\},\]
and
\[F_{B}(\delta_{0})\subset\left\{x:|\det B_{N}(x)|<\delta_{0}^{2}\right\}\subset\bigcup_{j=0}^{N-1}\left\{x:|\det B(x+j\omega^{\prime})|<\delta_{0}^{2/N}\right\}.\]
Moreover, we know \(\ln\|A_{N}(x)\|\in L^{p},1\leqslant p<\infty\) and, for \(\|A-B\|<\delta_{1}=\delta_{1}(A),\)\(\ln\|B_{N}(x)\|\in L^{p},1\leqslant p<\infty\) and \(\left\|\ln\|B(x)\|\right\|_{p}<C(p)\left\|\ln\|A(x)\|\right\|_{p}.\) These all hold uniformly in \(N.\) This implies that, for \(\|A-B\|<\delta_{1},\)
\[\int_{F_{A}\cap F_{B}}+\int_{F_{A}^{c}\cap F_{B}}+\int_{F_{A}\cap F_{B}^{c}} \leqslant 2C(A)|F_{A}(\delta_{0})|+C(A)|F_{B}(\delta_{0})|,\]
where \(|F_{B}|\) denotes the Lebesgue measure. Moreover, by the Lojasiewicz inequality,
\[|F_{A}(\delta_{0})|<C(A)N\delta_{0}^{2b(A)/N}\]
and
\[|F_{B}(\delta_{0})|<C(A)N\delta_{0}^{2b(A)/N}.\]
Finally, for \(x\in F_{A}^{c}\cap F_{B}^{c},\) we have
\[|\ln\|A_{N}(x)\|-\ln\|B_{N}(x)\|\,| \leqslant\max\left\{\|A_{N}-B_{N}\|\,\|B_{N}\|^{-1}\,,\|B_{N}-A_{N }\|\,\|A_{N}\|^{-1}\right\} \tag{19}\] \[\leqslant\|A_{N}-B_{N}\|\,\delta_{0}^{-1}. \tag{18}\]
Thus, we have
\[\int_{\mathbb{T}^{d}} =\int_{F_{A}\cap F_{B}}+\int_{F_{A}^{c}\cap F_{B}}+\int_{F_{A} \cap F_{B}^{c}}+\int_{F_{A}^{c}\cap F_{B}^{c}}\] \[\leqslant C(A)\delta_{0}^{2b(A)/N}+N^{-1}\,\|A_{N}-B_{N}\|\, \delta_{0}^{-1}. \tag{20}\]
Moreover, a standard telescoping argument implies
\[\|A_{N}-B_{N}\| \leqslant N\,\|A\|^{2N}\left(\|A-B\|+\left\|N\omega-N\omega^{ \prime}\right\|\right) \tag{22}\] \[\leqslant N\,\|A\|^{2N}\left(\|A-B\|+N\left\|\omega-\omega^{ \prime}\right\|\right). \tag{21}\]
We thus have
\[|L^{\prime}_{N}(A,\omega)-L^{\prime}_{N}(B,\omega^{\prime})| <C(A)\delta_{0}^{2b(A)/N}\] \[\quad+\left\|A\right\|^{2N}\left(\|A-B\|+N\left\|\omega-\omega^{ \prime}\right\|\right)\delta_{0}^{-1}. \tag{23}\]
Finally, set
\[\delta_{0}=e^{-N^{\alpha}},\quad\alpha>1.\]
Recall that \(\left\|A\right\|\leq e^{C_{0}},\) and consequently
\[\left|L_{N}^{\prime}(A,\omega)-L_{N}^{\prime}(B,\omega^{\prime})\right| <C(A)e^{-2N^{\alpha-1}b(A)}\] \[\quad+e^{2C_{0}N}\left(\left\|A-B\right\|+N\left\|\omega-\omega^{ \prime}\right\|\right)e^{N^{\alpha}}\] \[\leq C(A)e^{-2N^{\alpha-1}b(A)}+e^{2N^{\alpha}}\left(\left\|A-B \right\|+N\left\|\omega-\omega^{\prime}\right\|\right).\]
To obtain the continuity for \(\ln|\det(A)|,\) we use the above argument where \(F_{A}(\delta_{0})\) and \(F_{B}(\delta_{0})\) are redefined as
\[F_{A}(\delta_{0})^{\prime} :=\left\{x:\left|\det(A(x))\right|<\delta_{0}\right\} \tag{25}\] \[F_{B}(\delta_{0})^{\prime} :=\left\{x:\left|\det(B(x))\right|<\delta_{0}\right\}. \tag{24}\]
At the end, we take \(\delta_{0}=\left\|A-B\right\|^{1/(1+b)},\) where \(b=b(a)\) is as in the Lojasiewicz inequality.
## 4. Lyapunov exponents: local uniform rate of convergence with fixed Diophantine frequency
In this section, we will establish an inductive scheme to obtain a local uniform rate of convergence for the Lyapunov exponents corresponding to fixed Diophantine frequencies. We will break this up into multiple steps.
**Lemma 4.1**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with with an analytic extension to \(\left|\Im(z_{j})\right|<\rho\) such that \(\det A(x)\neq 0.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\left\|k\cdot\omega\right\|>\delta_{0}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<\left|k\right|\leq K_{0},\) where \(K_{0}\) satisfies_
\[K_{0} >(\rho\kappa)^{-C} \tag{27}\] \[N_{0} >\kappa^{-C}\delta_{0}^{-1}K_{0}. \tag{26}\]
_Moreover, suppose that_
\[L_{N_{0}}(A)>10^{3}\kappa \tag{28}\] \[|L_{N_{0}}-L_{2N_{0}}|<\frac{1}{100}L_{N_{0}}. \tag{29}\]
_Then_
\[\left|L_{N}+L_{N_{0}}-2L_{2N_{0}}\right|<C^{\prime}(A)e^{-C_{\rho}K_{0}^{c}} \tag{30}\]
_when \(N_{0}|N,N=2^{j}N_{0}\) for some \(j\geq 0,\) and_
\[N<N_{0}e^{\frac{1}{2}\rho K_{0}^{c}}. \tag{31}\]
_In the above, \(C\) is a sufficiently large absolute constant depending only on \(d\) which is defined in the proof, \(c=C^{-1},\) and \(C^{\prime}(A)\) is a constant which is uniform in \(A.\)_
Proof.: First, consider the set
\[G=\left\{x:\left|L_{N_{0}}(A,x)-L_{N_{0}}(A)\right|>\kappa\right\}.\]
Our large deviation theorem yields
\[|G|<e^{-C_{\rho}K_{0}^{c}}.\]
Thus, on \(\mathbb{T}^{d}\backslash G\),
\[L_{N_{0}}(A,x)>999\kappa,\]
and
\[\left|\int_{G}L_{N_{1}}(x)+L_{N_{0}}(x)-2L_{2N_{0}}(x)dx\right|<e^{-C_{\rho}K_{0 }^{c}}. \tag{32}\]
Let us now consider a further restriction to the set, \(F\subset\mathbb{T}^{d}\backslash G\), consisting all of \(x\) such that
\[|L_{N_{0}}(A,x)-L_{N_{0}}(A,x+jN_{0}\omega)|<2\kappa\]
for \(1\leq j\leq N_{1}/N_{0}\). Another application of our large deviation theorem implies this set satisfies
\[|\mathbb{T}^{d}\backslash F|<\frac{N_{1}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}<e^{-C_{ \rho}K_{0}^{c}}\]
for \(N_{1}<N_{0}e^{C_{\rho}K_{0}^{c}}\).
Finally, one last application of our large deviation theorem, applied to (29), implies the set
\[H=\left\{x\in\mathbb{T}^{d}\backslash G:|L_{N_{0}}(x)-L_{2N_{0}}(x)|<\frac{1}{ 100}L_{N_{0}}(x)\right\}\]
obeys the measure estimate
\[|\mathbb{T}^{d}\backslash H|<e^{-C_{\rho}K_{0}^{c}}.\]
Now we see that every \(x\in F\cap H\) satisfies the three hypotheses of Theorem 2.2, so on \(F\cap H\) we obtain
\[\begin{split}\left|L_{N_{1}}(A,x)+\frac{1}{n}\sum_{j=0}^{n-1}L_{N_{0}}(A,x+jN_{0}\omega)-\frac{2}{n}\sum_{j=0}^{n-1}L_{2N_{0}}(A,x+jN_{0}\omega)\right|\\ &<\exp\left(-\frac{N_{0}}{4}L_{N_{0}}(x)\right)+C^{\prime}L_{N_{0}}(A,x)\frac{N_{0}}{N_{1}}.\end{split} \tag{33}\]
We may now integrate the left hand side of the above inequality, over \(\mathbb{T}^{d}:\)
\[\int_{\mathbb{T}^{d}}=\int_{F\cap H}+\int_{F^{c}\cap H}+\int_{F\cap H^{c}}. \tag{34}\]
Using \(L_{N_{0}}(x)>999\kappa\) on \(F\cap H\), our restriction on \(N_{1}\), and (33) we have
\[\int_{F\cap H}<e^{-\frac{999}{4}N_{0}\kappa}+C^{\prime}(A)L_{N_{0}}(A)e^{-C_{ \rho}K_{0}^{c}}. \tag{35}\]
By the uniform \(L^{2}\)-boundedness of the integrand (i.e. \(\left\|L_{N}(A)\right\|_{2}<C(A)\)) and our measure estimates on \(|\mathbb{T}^{d}\backslash F|\) and \(|\mathbb{T}^{d}\backslash H|\), we have
\[\int_{F^{c}\cap H} \leq C^{\prime}(A)e^{-C_{\rho}K_{0}^{c}} \tag{37}\] \[\int_{F\cap H^{c}} \leq C^{\prime}(A)e^{-C_{\rho}K_{0}^{c}}. \tag{36}\]
Combining everything yields our result.
Next, we show that \(L(A)>0\) ensures that \(|L_{N_{0}}-L_{2N_{0}}|\) is always small enough.
**Lemma 4.2**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with an analytic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\left\|k\cdot\omega\right\|>\delta_{0}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|\leqslant K_{0},\) where \(K_{0}\) satisfies_
\[K_{0} >(\rho^{1+c}\kappa)^{-C} \tag{39}\] \[N_{0} >\kappa^{-C}\delta_{0}^{-1}K_{0}. \tag{38}\]
_Moreover, suppose that \(N_{0}=a2^{b},\) for some \(a\in\mathbb{N}\) and \(b>-C\ln\kappa\) (here \(C\) is the same as above) and_
\[L(A)>10^{3}\kappa. \tag{40}\]
_Then_
\[|L_{N_{0}}-L_{2N_{0}}|<\frac{1}{100}L_{N_{0}}. \tag{41}\]
_In the above, \(C\) is a sufficiently large absolute constant depending only on \(d\) which is defined in the proof and \(c=C^{-1}.\)_
**Remark 7**.: The condition \(N_{0}=a2^{b}\) is not, in fact, necessary, but allowing for more general scales requires a technical approximation argument. All we will need for applications is \(N_{0}=a2^{b},\) so that is what we will prove. We refer readers to [20] Section 5 for a proof for arbitrary \(N_{0}\) large.
Proof.: First, as before, we remark that large deviation implies
\[L_{N_{0}}(x)>L_{N_{0}}-\kappa\geqslant L-\kappa>999\kappa \tag{42}\]
away from a set, call it \(G\), of measure \(|G|<e^{-C_{\rho}K_{0}^{c}}.\) Since \(L_{N_{0}}(x)\in L^{2},\) we have
\[\left|\int_{\mathbb{T}^{d}\cap G}L_{N_{0}}(x)-L_{2N_{0}}(x)dx\right|<Ce^{-C_{ \rho}K_{0}^{c}}. \tag{43}\]
Now consider the set \(F\) consisting of all \(x\in\mathbb{T}^{d}\backslash G\) such that
\[|L_{N}(x)-L_{N}(x+j\omega)|<\kappa \tag{44}\]
for all \(N=2^{-s}N_{0},\) with \(0\leqslant s\leqslant s_{0}:=-C_{1}\ln\kappa,\) and all \(j\leqslant 2N_{0}\) such that \(2^{-s}N_{0}\,|\,j.\)
Note that \(K_{0}^{-c}<\rho^{1+c}\kappa\), by our assumption on \(K_{0}\). Thus, for all such \(N\),
\[\left\{x:\left|L_{N}(x)-L_{N}\right|>\kappa\right\}\subset\left\{x:\left|L_{N}(x )-L_{N}\right|>\rho^{-1-c}K_{0}^{-c}\right\}.\]
Moreover,
\[2^{-s}N_{0}>2^{-s}\kappa^{-C}\delta_{0}^{-1}K_{0} \tag{45}\] \[>\kappa^{C_{1}-C}\delta_{0}^{-1}K_{0}. \tag{46}\]
Taking \(C\geq C_{1}\) ensures that the right hand side is at least \(\delta_{0}^{-1}K_{0}\). All of this together implies that we may apply our large deviation estimate to control the measure of \(\mathbb{T}^{d}\backslash F\) as follows.
Large deviation, and our condition on \(K_{0}\) implies
\[\left|\mathbb{T}^{d}\backslash F\right| <\sum_{s=0}^{s0}\sum_{j}\left|\left\{\left|L_{N}(x)-L_{N}(x+j \omega)\right|>\kappa\right\}\right| \tag{48}\] \[\leq\sum_{s}\frac{2N_{0}2^{s}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}\] (49) \[\leq(2^{s_{0}}-1)\frac{2N_{0}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}\] (50) \[\leq\kappa^{-C_{1}}\frac{2N_{0}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}\] (51) \[\leq K_{0}^{C_{1}/C}\frac{2N_{0}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}\] (52) \[\leq\frac{2N_{0}}{N_{0}}e^{-C_{\rho}K_{0}^{c}}. \tag{47}\]
Thus we have
\[\left|\mathbb{T}^{d}\backslash F\right|<e^{-C_{\rho}K_{0}^{c}}<\kappa.\]
Throughout this computation, we needed to take \(K_{0}\) sufficiently large, depending only on \(\rho,C,\) and \(C_{1}\). Throughout, \(\rho\) is fixed, and we will later fix \(C\) and \(C_{1}\) in the proof independent of all other parameters, so this will not pose an obstacle.
Since \(L_{N}(A,x)\) is uniformly \(L^{2}\)-bounded in \(N\),
\[\int_{\mathbb{T}^{d}\backslash F}\left|L_{N_{0}}(x)-L_{2N_{0}}(x)\right|dx<C(A )e^{-C_{\rho}K_{0}^{c}}<C(A)\kappa.\]
Let us now define
\[J:=\left\{x\in F:L_{2^{-s_{0}}N_{0}}(x)<\kappa^{-1}\right\}.\]
By Chebyshev's inequality, \(|\mathbb{T}^{d}\backslash J|<C(A)\kappa\), and thus
\[\int_{\mathbb{T}^{d}\backslash J}\left|L_{N_{0}}(x)-L_{2N_{0}}(x)\right|dx<C(A )\kappa.\]
Now consider \(x\in J.\) By definition, \(L_{N_{0}}(x)>999\kappa.\) Define scales \(N_{0s}\) inductively by \(N_{01}=2^{-1}N_{0}\) and \(N_{0(s+1)}=2^{-1}N_{0s}.\) Since \(x\in F,\) we have
\[999\kappa N_{0}\leq N_{0}L_{N_{0}}(x) \leq\sum_{j=0}^{N_{0}/N_{0s}}N_{0s}L_{N_{0s}}(x+jN_{0s}\omega) \tag{54}\] \[\leq\sum_{j=0}^{N_{0}/N_{0s}}(N_{0s}L_{N_{0s}}(x)+N_{0s}\kappa)\] (55) \[=N_{0}L_{N_{0s}}(x)+N_{0}\kappa. \tag{53}\]
By a similar computation, we have, for \(1\leq s<s^{\prime}\leq s_{0}\)
\[L_{N_{0s}}(x)\leq L_{N_{0s^{\prime}}}(x)+\kappa. \tag{56}\]
For each such \(x,\) we define \(s(x)\) and new length scales \(N_{00}(x)=N_{0s(x)}=2^{-s(x)}N_{0}\) such that
\[N_{00}(x) \leq\kappa N_{0} \tag{58}\] \[|L_{N_{00}(x)}(x)-L_{2N_{00}(x)}| <\frac{1}{100}L_{N_{00}(x)}. \tag{57}\]
The first condition is achievable as long as \(C_{1}>1.\) The second condition may be rewritten as
\[\frac{99}{100}L_{N_{00}(x)}(x)<L_{2N_{00}(x)}<\frac{101}{100}L_{N_{00}(x)}(x).\]
The right inequality holds by \(L_{2N_{00}(x)}<L_{N_{00}(x)}+\kappa.\) The left inequality holds by taking \(C_{1}\) sufficiently large (say \(C_{1}>5\)) and using \(L_{2^{-s_{0}}N_{0}}(x)<\kappa^{-1}.\)
The hypotheses of Theorem 2.2 are now satisfied for \(N_{00}(x)\) with \(\delta=100\kappa,\) so we have
\[\begin{split}\left|L_{N}(x)&+\frac{N_{00}}{N}\sum_{j =0}^{\frac{N}{N_{00}}-1}L_{N_{00}(x)}(x+jN_{00}\omega)\right.\\ &\qquad\left.-\frac{2N_{00}}{N}\sum_{j=0}^{\frac{N}{N_{00}}-1}L_{ 2N_{00}(x)}(x+jN_{00}\omega)\right|\\ &\qquad\left.<e^{-cN_{00}L_{N_{00}}(x)}+CL_{N_{00}}(A,x)\frac{N_{ 00}(x)}{N}\right.\end{split} \tag{59}\]
for \(N=N_{0},2N_{0}.\) Since \(x\in J,\) we have \(L_{N_{00}}(x+jN_{00}\omega)=L_{N_{00}}(x)+O(\kappa)\) and \(L_{N_{00}}(x)>998\kappa.\) Moreover, \(L_{N_{00}}(A,x)<L_{2^{-s_{0}}N_{0}}(A,x).\) Consequently,
\[\left|L_{N_{0}}(x)-L_{2N_{0}}(x)\right|<10\kappa+C(A)\frac{N_{00}(x)}{N_{0}}+C (A)\frac{N_{00}(x)}{2N_{0}}.\]
Since \(N_{00}(x)<\kappa N_{0},\) we have
\[\left|L_{N_{0}}(x)-L_{2N_{0}}(x)\right|<C(A)\kappa.\]
Hence, for \(x\in F,\) we must have
\[|L_{N_{0}}(x)-L_{2N_{0}}(x)|<\frac{1}{200}L_{N_{0}}(x).\]
It now follows that
\[|L_{N_{0}}-L_{2N_{0}}|<\frac{1}{200}L_{N_{0}}+C(A)e^{-C_{\rho}K_{0}^{c}}<\frac{1 }{100}L_{N_{0}}. \tag{60}\]
The previous two lemmas immediately imply the following.
**Theorem 4.1**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with a plurisubharmonic extension to the strip \(|\Im(z_{j})|<\rho\) such that \(\det A(x)\not\equiv 0.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\|k\cdot\omega\|>\delta_{0}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|\leq K_{0},\) where \(K_{0}\) satisfies_
\[K_{0} >(\rho^{1+c}\kappa)^{-C} \tag{62}\] \[N_{0} >\kappa^{-C}\delta_{0}^{-1}K_{0}. \tag{61}\]
_Moreover, suppose that \(N_{0}=a2^{b},\) for some \(a\in\mathbb{N}\) and \(b>-C\ln\kappa\) (here \(C\) is the same as above) and_
\[L(A)>10^{3}\kappa. \tag{63}\]
_Then_
\[|L_{N}+L_{N_{0}}-2L_{2N_{0}}|<C(A)e^{-C_{\rho}K_{0}^{c}} \tag{64}\]
_when \(N_{0}|N,N=2^{j}N_{0}\) for some \(j\geq 0,\) and_
\[N<N_{0}e^{C_{\rho}K_{0}^{c}}. \tag{65}\]
Elementary iteration allows us to extend this result from \(N<N_{0}e^{C_{\rho}K_{0}^{c}}\) to \(N<\exp(\exp(C_{\rho}K_{0}^{c})).\) This formulation will be used to establish weak-Holder continuity in frequency.
**Corollary 4.1**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with a plurisubharmonic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\|k\cdot\omega\|>\delta_{0}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|\leq K_{0},\) where \(K_{0}\) satisfies_
\[K_{0} >(\rho^{1+c}\kappa)^{-C} \tag{67}\] \[N_{0} >\kappa^{-C}\delta_{0}^{-1}K_{0}. \tag{66}\]
_Moreover, suppose that \(N_{0}=a2^{b},\) for some \(a\in\mathbb{N}\) and \(b>-C\ln\kappa\) (here \(C\) is the same as above) and_
\[L(A)>10^{3}\kappa. \tag{68}\]
_Then_
\[|L_{N}+L_{N_{0}}-2L_{2N_{0}}|<C(A)e^{-\rho K_{0}^{c}} \tag{69}\]
_when \(N_{0}|N,N=2^{j}N_{0}\) for some \(j\geqslant 0,\) and_
\[N<\exp(\exp(\frac{1}{2}\rho K_{0}^{c})). \tag{70}\]
By imposing a Diophantine condition on \(\omega\), we are able to perform a more delicate iteration scheme to extend this result to the limit.
**Theorem 4.2**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with a plurisubharmonic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\|k\cdot\omega\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|.\) Then for every \(N_{0}=a2^{b}>N^{\prime},\) we have, for \(\eta=\eta(d,\sigma),\)_
\[|L+L_{N_{0}}-2L_{2N_{0}}|<C(A)\exp\left\{-C_{\rho}N_{0}^{\eta}\right\}. \tag{71}\]
_Here \(C\) is the same sufficiently large absolute constant from Lemma 4.2 and \(c=C^{-1}\) above. Here \(N^{\prime}\) is a constant defined in our proof such that \(N_{0}\) satisfies the divisibility criterion needed to appeal to Lemma 4.2._
Proof.: For simplicity, we will assume \(\rho=1,\) though the argument and result hold for general \(\rho,\) we would just have to adjust \(N_{s},K_{s},\) and \(\kappa_{s}\) accordingly. We will begin by applying Theorem 4.1. We will define the appropriate parameters first. Fix \(c,C,C_{1}\) as in Theorem 4.1, and let \(K=K(c,C,C_{1},\sigma)\) be large enough such that
\[K^{C_{1}/C}e^{-K^{c}}<e^{-C_{\rho}K^{c}} \tag{72}\]
and
\[e^{C_{\rho}K^{c/2}}>2K^{1+\sigma/2}. \tag{73}\]
The first condition ensures that every \(K_{0}>K\) satisfies the largeness condition we imposed during the proof of Lemma 4.2, while the second condition will ensure that our base step can be iterated (see below for details).
Take any \(N_{0}=2^{b}>\tau^{-1}K^{\sigma+2}=N^{\prime}.\) We now define the rest of our base parameters. Define \(K_{0},\kappa_{0},\) and \(\delta_{0}\) via
\[N_{0} =:\tau^{-1}K_{0}^{\sigma+2}, \tag{74}\] \[K\leqslant K_{0} =:\kappa_{0}^{-C},\] (75) \[\delta_{0} :=\tau K_{0}^{-\sigma}. \tag{76}\]
Lemma 4.2 is applicable with these parameters, and we have
\[|L_{N_{1}}+L_{N_{0}}-2L_{2N_{0}}|<C(A)e^{-C_{\rho}K_{0}^{c}}\]
for \(N_{0}|N_{1},N_{1}=2^{j}N_{0}<N_{0}e^{C_{\rho}K_{0}^{c}}.\)
Now define
\[\kappa_{1} =\kappa_{0}^{2}, \tag{77}\] \[K_{1}=\kappa_{1}^{-C} =\kappa_{0}^{-2C}=K_{0}^{2}, \tag{78}\]
and
\[\delta_{1}=\tau K_{1}^{-\sigma}. \tag{79}\]
Let \(N_{1}=N_{0}e^{\frac{1}{2}K_{0}^{c}}\), where this is meant as: \(N_{1}\) is the largest integer multiple of \(N_{0}\) no greater than \(N_{0}e^{C_{\rho}K_{0}^{c}}\). We clearly have \(N_{1}\geq\frac{1}{2}N_{0}e^{C_{\rho}K_{0}^{c}}\). Moreover, by our choice of \(\kappa_{1},K_{1},\) and \(\delta_{1}\), we have
\[N_{1} \geq\frac{1}{2}\kappa_{0}^{-C(\sigma+2)}e^{C_{\rho}K_{1}^{c/2}} \tag{80}\] \[=\frac{1}{2}\kappa_{1}^{-C}\kappa_{1}^{-C\sigma/2}e^{C_{\rho}K_{1}^{c/2}}\] (81) \[=\frac{1}{2}\tau^{-1/2}\kappa_{1}^{-C}\delta_{1}^{-1/2}e^{C_{\rho}K_{1}^{c/2}}\] (82) \[\geq\kappa_{1}^{-C}\delta_{1}^{-1}K_{1}, \tag{83}\]
for \(K_{1}\) sufficiently large. In fact, we just need \(K_{0}\) sufficiently large so that \(e^{C_{\rho}K_{0}^{c/2}}>2K_{0}^{1+\sigma/2}\), which is certainly satisfied by our initial choice of \(K\).
Consequently, Theorem 4.1 is applicable with \(\kappa_{1},K_{1},\delta_{1}\), and \(N_{1}\) as above:
\[|L_{N_{2}}+L_{N_{1}}-2L_{2N_{1}}| <C(A)e^{-C_{\rho}K_{1}^{c}} \tag{84}\] \[=C(A)e^{-C_{\rho}K_{0}^{2c}}, \tag{85}\]
for \(N_{1}|N_{2},N_{2}=2^{j}N_{1}<N_{1}e^{C_{\rho}K_{1}^{c}}\).
Now we define
\[\kappa_{2} :=\kappa_{1}^{2}=\kappa_{0}^{4}, \tag{87}\] \[K_{2} :=\kappa_{2}^{-C}=\kappa_{0}^{-4C},\] (88) \[\delta_{2} :=\tau K_{2}^{-\sigma}, \tag{89}\]
and
\[N_{2}:=N_{1}e^{\frac{1}{2}K_{1}^{c}}\geq\frac{1}{2}N_{1}e^{C_{\rho}K_{1}^{c}}. \tag{90}\]
Clearly, by our choice of \(\kappa_{2},K_{2},\delta_{2}\), and what we know of \(N_{1}\), we have
\[N_{2} \geq\frac{1}{2}\kappa_{1}^{-C}\delta_{1}^{-1}K_{1}e^{C_{\rho}K_{1}^{c}} \tag{91}\] \[=\frac{1}{2}\tau^{-1}\kappa_{2}^{-C/2}K_{2}^{\sigma/2+1/2}e^{C_{\rho}K_{2}^{c/2}}\] (92) \[=\frac{1}{2}\tau^{-1/2}\kappa_{2}^{-C/2}\delta_{2}^{-1/2}K_{2}^{1/2}e^{C_{\rho}K_{2}^{c/2}}\] (93) \[\geq\kappa_{2}^{-C}\delta_{2}^{-1}K_{2} \tag{94}\]
whenever \(K_{2}\) satisfies \(2K_{2}^{1+\sigma/2}\leq e^{C_{\rho}K_{2}^{c/2}}\). It is easy to see that our original condition on \(K\) ensures that \(K_{2}\) is large enough.
We may now apply Lemma 4.2 with \(\kappa_{2},K_{2},\delta_{2}\), and \(N_{2}\) as above to obtain
\[|L_{N_{3}}+L_{N_{2}}-2L_{2N_{2}}|<C(A)e^{-C_{\rho}K_{2}^{c}}<C(A)e^{-C_{\rho}K_ {0}^{4c}}. \tag{95}\]
Continuing in this way, we inductively define
\[\kappa_{s} :=\kappa_{s-1}^{2}, \tag{96}\] \[K_{s} :=\kappa_{s}^{-C},\] (97) \[\delta_{s} :=\tau K_{s}^{-\sigma}, \tag{98}\]
and
\[N_{s}:=N_{s-1}e^{\frac{1}{2}K_{s-1}^{c}}\geq\frac{1}{2}N_{s-1}e^{C_{\rho}K_{s- 1}^{c}}. \tag{99}\]
Moreover, \(N_{s-1}\geq\kappa_{s-1}^{-C}\delta_{s-1}^{-1}K_{s-1}\), and \(e^{C_{\rho}K_{s-1}^{c/2}}>2\tau^{1/2}K_{s-1}^{1+\sigma/2}\). Following the same argument as before, we thus have
\[N_{s} \geq\frac{1}{2}\kappa_{s-1}^{-C}\delta_{s-1}^{-1}K_{s-1}e^{C_{\rho}K_{s-1}^{c}} \tag{100}\] \[=\frac{1}{2}\tau^{-1}\kappa_{s}^{-C/2}\delta_{s-1}^{-1}K_{s-1}e^{C_{\rho}K_{s-1}^{c}}\] (101) \[=\frac{1}{2}\tau^{-1/2}\kappa_{s}^{-C/2}\delta_{s}^{-1/2}K_{s}^{1/2}e^{C_{\rho}K_{s}^{c/2}}\] (102) \[\geq\kappa_{s}^{-C}\delta_{s}^{-1}K_{s}. \tag{103}\]
It follows that
\[|L_{N_{s}}+L_{N_{s-1}}-2L_{2N_{s-1}}|<C(A)e^{-C_{\rho}K_{0}^{2^{s-1}c}}. \tag{104}\]
At this point, we note that, at each step, we could have used \(2N_{s}\), rather than \(N_{s}\), and obtained identical inequalities. Combining this fact with the established inequalities, we obtain
\[|L_{N_{s}}+L_{N_{0}}-2L_{2N_{0}}|<C(A)\sum_{j=0}^{s-1}e^{-C_{\rho}K_{0}^{2^{j} c}}. \tag{105}\]
Altogether, this yields
\[|L_{N_{s}}+L_{N_{0}}-2L_{2N_{0}}| <\sum_{j=0}^{\infty}C(A)e^{-C_{\rho}K_{0}^{2^{j}c}} \tag{106}\] \[\leq 2C(A)e^{-C_{\rho}K_{0}^{c}}. \tag{107}\]
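For completeness, the last inequality can be seen as follows, using only that \(K_{0}\) is large (say \(K_{0}^{c}\geq 2\) and \(C_{\rho}K_{0}^{c}\geq\ln 2\)): since \((K_{0}^{c})^{2^{j}}\geq(j+1)K_{0}^{c}\) for every \(j\geq 0,\)
\[\sum_{j=0}^{\infty}e^{-C_{\rho}K_{0}^{2^{j}c}}\leq\sum_{j=0}^{\infty}\left(e^{-C_{\rho}K_{0}^{c}}\right)^{j+1}=\frac{e^{-C_{\rho}K_{0}^{c}}}{1-e^{-C_{\rho}K_{0}^{c}}}\leq 2e^{-C_{\rho}K_{0}^{c}}.\]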
Taking \(s\to\infty\), we obtain
\[|L+L_{N_{0}}-2L_{2N_{0}}|\leq 2C(A)e^{-C_{\rho}K_{0}^{c}}. \tag{108}\]
Now, using our initial choice of \(K_{0}\), we finally obtain
\[|L+L_{N_{0}}-2L_{2N_{0}}|\leq C(A)e^{-C_{\rho}N_{0}^{c/(\sigma+2)}} \tag{109}\]
as desired, with \(\eta=c(d)/(\sigma+2)\).
## 5. Weak-HΓΆlder continuity of the Lyapunov exponent
We begin this section by combining Theorem 4.2 and Theorem 3.1 to obtain weak-HΓΆlder continuity in \(A.\) Then we will show that weak-HΓΆlder continuity in \(\omega\) follows from Theorem 4.2 and the continuity of the Lyapunov exponent in \(\omega\).
**Theorem 5.1**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with an analytic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\|k\cdot\omega\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|.\) Then there is \(\delta=\delta(A)\) and \(\gamma=\gamma(d,\sigma)\leqslant 1,\) such that, for every \(\|A-B\|<\delta,\)_
\[|L(A,\omega)-L(B,\omega)|<C(A)\exp\left(-c(A)\left(-\ln\|A-B\|\right)^{\gamma }\right). \tag{110}\]
Proof.: Since this result involves a fixed frequency, we will omit it in our notation. Consider \(\delta(A)>0\) such that Theorem 3.1 applies for \(\|A-B\|<\delta.\) Then we have, for every \(N=a2^{b}>N^{\prime},\) where \(N^{\prime}\) is the same absolute constant from Theorem 4.2, and every \(\alpha>1,\)
\[|L(A)-L(B)| \leqslant|L(A)+L_{N}(A)-2L_{2N}(A)|\] \[\quad+|L_{N}(B)-L_{N}(A)|+2|L_{2N}(B)-L_{2N}(A)|\] \[\quad+|L(B)+L_{N}(B)-2L_{2N}(B)|\] \[\leqslant C(A)e^{-C_{\rho}N^{\eta}}\] \[\quad+C(A)e^{-2N^{\alpha-1}b(A)}+e^{N^{\alpha}}\left\|A-B\right\|\] \[\quad+C(B)e^{-2N^{\alpha-1}b(A)}+e^{N^{\alpha}}\left\|A-B\right\|\] \[\quad+C(B)e^{-C_{\rho}N^{\eta}}.\]
Moreover, the constants \(C(A),C(B)\) in the above depend on uniform measurements of \(A\) and \(B,\) in particular, the \(L^{2}\)-norms and the constants from the Εojasiewicz inequality. Since \(A\) and \(B\) are close in norm, these constants are close, and we can, in fact, control them all by \(2C(A).\)
At this point, we set \(N=\left(-\ln\|A-B\|\right)^{\beta},\) for some \(\beta\) to be defined later. This yields
\[|L(A)-L(B)| \leqslant C(A)(e^{-C_{\rho}(-\ln\|A-B\|)^{\beta\eta}}+e^{-2(-\ln\|A-B\|)^{ \beta(\alpha-1)}b(A)})\] \[\quad+e^{(-\ln\|A-B\|)^{\beta\alpha}}\left\|A-B\right\|.\]
Now, if \(\beta<1/\alpha,\) then the \(\|A-B\|\) term above is bounded by \(\|A-B\|^{1/2},\) and is thus irrelevant, so we will take \(\beta<1/\alpha.\) We have
\[|L(A)-L(B)| \leqslant C(A)e^{-C_{\rho}(-\ln\|A-B\|)^{\beta\eta}}\] \[\quad+C(A)e^{-2(-\ln\|A-B\|)^{\beta(\alpha-1)}b(A)}.\]
Finally, setting \(\alpha=\eta+1,\) we obtain our desired result with \(\gamma=\beta\eta.\)
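Spelled out, with \(\alpha=\eta+1\) and any fixed \(0<\beta<1/\alpha,\) both surviving terms share the exponent \((-\ln\|A-B\|)^{\beta\eta},\) so that, with \(c(A)=\min\{C_{\rho},2b(A)\},\)
\[|L(A)-L(B)|\leqslant C(A)\exp\left(-c(A)\left(-\ln\|A-B\|\right)^{\beta\eta}\right),\qquad\gamma=\beta\eta<\frac{\eta}{\eta+1}\leqslant 1.\]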
Weak-HΓΆlder continuity in the frequency, at Diophantine frequencies, requires an additional estimate from our proof that the Lyapunov exponent is
continuous at the frequencies with rationally independent components. We recall that below without proof.
**Lemma 5.1** ([20] Lemma 7.1).: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with an analytic extension to the strip \(|\Im(z_{j})|<\rho.\) Assume \(\omega\in\mathbb{T}^{d}\) such that_
\[\left\|k\cdot\omega\right\|>\delta\quad\text{for }k\in\mathbb{Z}^{d_{2}},0<|k| \leqslant K.\]
_Moreover, suppose \(\rho>K^{-c}.\) Then_
\[|L_{N}-L|<K^{-c}\]
_for all \(N>K^{2}/\delta.\) Here \(c=c(d)<1\) is the same constant from before._
Now we may obtain weak-HΓΆlder continuity in \(\omega.\)
**Theorem 5.2**.: _Let \(A(x)\) be an analytic quasi-periodic \(M(2,\mathbb{C})\)-cocycle on \(\mathbb{T}^{d}\) with a plurisubharmonic extension to the strip \(|\Im(z_{j})|<\rho.\) Suppose that \(\omega\in\mathbb{T}^{d}\) is such that \(\left\|k\cdot\omega\right\|>\tau|k|^{-\sigma}>0\) for all \(k\in\mathbb{Z}^{d}\) with \(0<|k|.\) Then there is \(\delta>0\) and \(\gamma=\gamma(d,\sigma)\leqslant 1,\) such that, for every \(\left\|\omega-\omega^{\prime}\right\|<\delta,\)_
\[|L(A,\omega)-L(A,\omega^{\prime})|<C(A)\exp\left\{-c(A)\left(-\ln\left\|\omega -\omega^{\prime}\right\|\right)^{\gamma}\right\}. \tag{111}\]
**Remark 8**.: We note that \(\delta\) arises in the form \(\left\|\omega-\omega^{\prime}\right\|<e^{-cN_{0}^{\beta}}\sim\delta,\) and thus \(\delta\) depends only on \(N_{0},\) which needs to be sufficiently large to apply Lemma 4.2.
Proof.: Since this result involves a fixed cocycle, we will omit it in our notation. We have, for every \(N>N_{0},\) where \(N_{0}\) is an absolute constant, and every \(\alpha>1,\)
\[|L(\omega)-L(\omega^{\prime})| \leqslant|L(\omega^{\prime})+L_{N}(\omega^{\prime})-2L_{2N}(\omega^{\prime})|\] \[\quad+|L_{N}(\omega^{\prime})-L_{N}(\omega)|+2|L_{2N}(\omega^{\prime})-L_{2N}(\omega)|\] \[\quad+|L(\omega)+L_{N}(\omega)-2L_{2N}(\omega)|.\]
Now, since \(\omega\) satisfies the Diophantine condition \(\left\|k\cdot\omega\right\|>\tau|k|^{-\sigma},\) we may appeal to Theorem 4.2 to control the last term. Moreover, the middle two terms may be controlled using Theorem 3.1, and we obtain
\[|L(\omega)-L(\omega^{\prime})| \leqslant|L(\omega^{\prime})+L_{N}(\omega^{\prime})-2L_{2N}( \omega^{\prime})|\] \[\quad+C(A)e^{-2N^{\alpha-1}b(A)}+e^{N^{\alpha}}N\left\|\omega- \omega^{\prime}\right\|\] \[\quad+C(A)e^{-2N^{\alpha-1}b(A)}+e^{N^{\alpha}}N\left\|\omega- \omega^{\prime}\right\|\] \[\quad+C(A)e^{-C_{\rho}N^{\eta}}.\]
Since \(\omega^{\prime}\) need not be Diophantine, we are unable to appeal to Theorem 4.2 directly; however, we may use Corollary 4.1 to obtain
\[|L_{N^{\prime}}(\omega^{\prime})+L_{N}(\omega^{\prime})-2L_{2N}(\omega^{ \prime})|<e^{-c^{\prime}N^{\eta}}\]
for all \(N^{\prime}\leqslant\exp(\exp(C_{\rho}N^{\eta})).\)
Now, if we assume \(\left\|\omega-\omega^{\prime}\right\|<e^{-N^{1/\beta}}\), then we have
\[\left\|k\cdot\omega^{\prime}\right\|>|k|^{-\sigma}\]
for all \(0<|k|\leqslant e^{N^{1/\beta}/(1+\sigma)}\), and thus, for \(N^{\prime}=\exp(\exp(C_{\rho}N^{\eta}))\), we may appeal to Lemma 5.1 to obtain
\[|L_{N^{\prime}}(\omega^{\prime})-L(\omega^{\prime})|<e^{-cN^{1/\beta}}.\]
It follows that
\[|L(\omega^{\prime})+L_{N}(\omega^{\prime})-2L_{2N}(\omega^{\prime})|<e^{-cN^{ 1/\beta}}+e^{-c^{\prime}N^{\eta}}.\]
Now we note that the condition on \(\left\|\omega-\omega^{\prime}\right\|\) is equivalent to
\[N<(-\ln\left\|\omega-\omega^{\prime}\right\|)^{\beta},\]
which is analogous to setting \(N=(-\ln\left\|\omega-\omega^{\prime}\right\|)^{\beta}\) for some \(\beta>0\). Taking everything together, we are in precisely the setting from the proof of Theorem 5.1, and we may proceed as we did there to conclude.
Theorem 1.1 now follows by combining Theorems 5.1 and 5.2.
## Acknowledgement
We would like to thank W. Liu for useful comments on an earlier version of this work and S. Jitomirskaya for fruitful discussions. This research was partially supported by NSF DMS-2052572, DMS-2052899, DMS-2155211, and Simons 681675.
|
2303.08305 | Acoustically driven magnon-phonon coupling in a layered antiferromagnet | Harnessing the causal relationships between mechanical and magnetic
properties of van der Waals materials presents a wealth of untapped opportunity
for scientific and technological advancement, from precision sensing to novel
memories. This can, however, only be exploited if the means exist to
efficiently interface with the magnetoelastic interaction. Here, we demonstrate
acoustically-driven spin-wave resonance in a crystalline antiferromagnet,
chromium trichloride, via surface acoustic wave irradiation. The resulting
magnon-phonon coupling is found to depend strongly on sample temperature and
external magnetic field orientation, and displays a high sensitivity to
extremely weak magnetic anisotropy fields in the few mT range. Our work
demonstrates a natural pairing between power-efficient strain-wave technology
and the excellent mechanical properties of van der Waals materials,
representing a foothold towards widespread future adoption of dynamic
magneto-acoustics. | Thomas P. Lyons, Jorge Puebla, Kei Yamamoto, Russell S. Deacon, Yunyoung Hwang, Koji Ishibashi, Sadamichi Maekawa, Yoshichika Otani | 2023-03-15T01:33:12Z | http://arxiv.org/abs/2303.08305v1 | # Acoustically driven magnon-phonon coupling in a layered antiferromagnet
###### Abstract
Harnessing the causal relationships between mechanical and magnetic properties of van der Waals materials presents a wealth of untapped opportunity for scientific and technological advancement, from precision sensing to novel memories. This can, however, only be exploited if the means exist to efficiently interface with the magnetoelastic interaction. Here, we demonstrate acoustically-driven spin-wave resonance in a crystalline antiferromagnet, chromium trichloride, via surface acoustic wave irradiation. The resulting magnon-phonon coupling is found to depend strongly on sample temperature and external magnetic field orientation, and displays a high sensitivity to extremely weak magnetic anisotropy fields in the few mT range. Our work demonstrates a natural pairing between power-efficient strain-wave technology and the excellent mechanical properties of van der Waals materials, representing a foothold towards widespread future adoption of dynamic magneto-acoustics.
From uncertain beginnings, the technological advantages of antiferromagnets over ferromagnets are now well known, including fast operation, immunity against device crosstalk and stray fields, and amenability to low-power control via spin currents or proximitized materials [1; 2]. However, these very same advantageous properties can be a double-edged sword, being partly responsible for a general lack of understanding of antiferromagnets as compared to ferromagnets. The high spin-wave frequencies can be prohibitive for probes based on microwave electronics, while the insensitivity to measurement techniques such as SQUID magnetometry or the magneto-optical Kerr effect limits the effectiveness of these popular conventional magnetic probes. A less well-known probe, which has proven itself useful in the study of ferromagnets, relies not on optical or direct magnetic sensing but instead employs the magnetoelastic interaction between spin-waves and acoustic waves [3; 4]. When in contact with a piezoelectric material, the magnetic film can be irradiated with surface acoustic waves (SAWs). Beyond the magnetic film, the transmitted SAWs can be measured, providing information on the magnet's response to external stimuli [3; 5]. Aside from the energy-efficient generation, inherently low attenuation, suitability for miniaturization and long-distance propagation of SAWs [3; 6; 7], a particular advantage of this technique is that it does not discriminate between ferromagnetic and antiferromagnetic order, and indeed may even be stronger for the latter [8].
SAW technology is relatively mature, having found multiple applications in the microelectronics industry, yet continues to play a key role at the forefront of fundamental research, with recent notable advances including SAW-driven transport of single electrons in gallium arsenide [9], semiconductor interlayer excitons in van der Waals heterobilayers [10], and manipulation of the charge density wave in layered superconductors [11], amongst other advances [7]. Utilizing SAWs as a probe of ferromagnetism has proven highly effective, for instance in understanding the fundamentals of magnetoelasticity and magnetostriction, or more recently in revealing the various mechanisms of SAW nonreciprocity [3; 12; 13; 14; 15; 16; 17; 12; 13; 17]. Such works have laid the foundations for the active field of SAW-spintronics, in which dynamically applied strain can modulate magnetic properties [6; 18]. This technique is mature for ferromagnets, and has recently been proven effective for multiferroics [19] and synthetic antiferromagnets [14; 20], but a demonstration of SAW-driven magnon-phonon coupling in a crystalline antiferromagnet remains elusive.
Here, we utilize SAWs to drive spin-wave resonance in a layered crystalline antiferromagnet, chromium trichloride (CrCl\({}_{3}\)), a material characterized by layers of alternating magnetization weakly bound by van der Waals attraction [21; 22]. The antiferromagnetic order occurs only between adjacent monolayers rather than within them, giving rise to relatively weak interlayer exchange and associated lower frequency range of spin excitations in CrCl\({}_{3}\) as compared to conventional antiferromagnets [21; 23]. The combination of easy flake transfer onto arbitrary substrates, with sub-10 GHz spin excitations, is advantageous for integration of CrCl\({}_{3}\) into SAW devices, where antiferromagnetic magnetoelasticity can be probed directly. After first demonstrating acoustic antiferromagnetic resonance, we proceed to study the influence of temperature and angle of applied external magnetic field on the magnon-phonon coupling. The sets of experimental data are analyzed by extending the established
theoretical model for SAW-spin wave coupling in ferromagnetic films [4; 5]. Combined with a mean-field calculation of the temperature dependence, our model reproduces the observed features well, confirming the amenability of SAWs as a powerful probe to elucidate the dynamics of van der Waals magnets, especially given their excellent plasticity [24]. Considering also that acoustic magnetic resonance generates spin currents, which have been shown to travel over long distances in antiferromagnets, our results offer an alternative route towards novel spintronic devices with layered crystals [25; 26; 27; 28].
Two devices are studied in this work. They each consist of lithium niobate (LiNbO\({}_{3}\)) substrates with aluminium interdigital transducers (IDTs) either side of a CrCl\({}_{3}\) flake (Fig. 1a). Each IDT, 1 or 2, can generate SAWs at 1.1 GHz and wavelength 3.2 \(\upmu\)m, which subsequently propagate along the surface of the LiNbO\({}_{3}\), interact with the CrCl\({}_{3}\) flake, and then reach the other IDT where they are detected. By measuring SAW transmission in this way, any absorption of acoustic energy by the antiferromagnet can be detected (see methods). Sample 1 is quasi-bulk, at \(\sim 4\)\(\upmu\)m thick, while Sample 2 is much thinner at \(\sim 120\) nm (see Supplementary Information (SI)).
Below the Neel temperature of \(\sim 14\) K, layered CrCl\({}_{3}\) is composed of stacked ferromagnetic layers ordered antiferromagnetically [21; 22]. Alternate layers belong to one of two spin sublattices oriented collinearly in the layer plane, owing to easy plane anisotropy of strength \(\sim 250\) mT (Fig. 1b) [21]. Two magnon modes arise from in-phase or out-of-phase precession of the two sublattice macrospins, described as acoustic and optical modes, respectively [23]. In our experiments we apply an external magnetic field perpendicular to the crystal c-axis, inducing the two spin sublattice magnetizations to cant towards the applied field direction (Fig. 1b). Such noncollinear canting modifies their precession frequency, thereby bringing the magnon modes into resonance with the acoustic wave.
We first apply an external magnetic field at an angle \(\phi=45^{\circ}\) to the SAW propagation direction in Sample 1, and measure the amplitude of the SAW transmission.
Figure 1: **Magnon-phonon coupling in layered CrCl\({}_{3}\).** (a) Schematic of the devices used in this work. See text for description. (b) CrCl\({}_{3}\) consists of stacked ferromagnetic layers of alternating in-plane magnetization, represented by two spin sublattices (green and blue arrows). In the absence of an external magnetic field, the sublattice magnetizations point away from each other, while an applied field causes them to cant. In-phase and out-of-phase precession of the sublattice magnetizations are associated with acoustic and optical magnon modes, respectively. (c) SAW transmission signal through CrCl\({}_{3}\) in Sample 1 as a function of applied magnetic field strength at an angle \(\phi=45^{\circ}\), at various sample temperatures. (d) Extracted resonance field strengths for the acoustic and optical magnon modes at various Sample 1 temperatures. Overlaid curves are calculated from the model described in the text. (e) Calculated frequency dependence of the acoustic and optical magnon modes as a function of applied magnetic field, at \(T=4,6\) and \(8\) K.
The result is shown in Fig. 1c, where clear transmission dips can be seen arising from absorption of SAWs by magnons. At \(T=6\) K, absorption is observed at approximately 30 and 150 mT, attributed to the acoustic and optical modes, respectively. Examples of other external field orientations can be seen in the SI. Upon heating the sample, the optical mode absorption shifts to lower resonance field strengths while the acoustic mode stays largely insensitive to temperature (Fig. 1d). At \(T=13\) K, the two modes are no longer resolved, and at \(T=14\) K, close to the Neel temperature [21], they have disappeared.
The observed temperature dependence of the resonance field can be modelled by combining a simple mean-field theory with the known formulae for spin wave resonance in easy-plane antiferromagnets [23]
\[H_{\rm res}=\begin{cases}\sqrt{2H_{E}/(2H_{E}+M_{s})}\omega/\gamma&\text{acoustic }\\ \sqrt{4H_{E}^{2}-2H_{E}\omega^{2}/(M_{s}\gamma^{2})}&\text{optical}\end{cases} \tag{1}\]
Here \(H_{E}\) is the interlayer exchange field, \(M_{s}\) is the saturation magnetization, \(\omega\) is the SAW frequency, and \(\gamma/2\pi=28\) GHz/T is the gyromagnetic ratio. We solve the molecular field equation self-consistently in the macrospin limit \(S\rightarrow\infty\) to obtain \(M_{s}(T)\). This approximation also implies \(H_{E}(T)\propto M_{s}(T)\), which predicts that the optical mode resonance field tends towards zero as the Neel temperature is approached while the acoustic mode remains unchanged. The calculated temperature dependence is plotted in Fig. 1d and agrees well with the experimental data. The small increase of the observed acoustic mode resonance field towards higher temperature [29] points to breakdown of the mean-field approximation near the phase transition. The same model can be used to calculate the effective magnon frequency evolution as a function of applied magnetic field strength, as shown in Fig. 1e.
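As a rough numerical illustration of Eq. (1), the following minimal sketch evaluates the two resonance fields at the 1.1 GHz SAW frequency; all fields are expressed as \(\mu_{0}H\) in tesla, and the values chosen for \(\mu_{0}H_{E}\) and \(\mu_{0}M_{s}\) are illustrative placeholders rather than parameters fitted in this work.

```python
import numpy as np

# Illustrative (not fitted) parameters; all fields are expressed as mu_0*H in tesla.
gamma = 2 * np.pi * 28e9      # gyromagnetic ratio in rad/s per tesla (gamma/2pi = 28 GHz/T)
omega = 2 * np.pi * 1.1e9     # SAW angular frequency in rad/s
mu0_HE = 0.08                 # interlayer exchange field (assumed value)
mu0_Ms = 0.25                 # saturation magnetization (assumed value)

def resonance_fields(mu0_HE, mu0_Ms, omega, gamma):
    """Acoustic and optical resonance fields of Eq. (1), returned in tesla."""
    acoustic = np.sqrt(2 * mu0_HE / (2 * mu0_HE + mu0_Ms)) * omega / gamma
    optical = np.sqrt(4 * mu0_HE**2 - 2 * mu0_HE * omega**2 / (mu0_Ms * gamma**2))
    return acoustic, optical

ac, op = resonance_fields(mu0_HE, mu0_Ms, omega, gamma)
print(f"acoustic: {1e3 * ac:.0f} mT, optical: {1e3 * op:.0f} mT")
# With these placeholder values the two resonances fall at a few tens of mT and
# roughly 150 mT, i.e. the same order as the absorption dips in Fig. 1c.
```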
We now consider the coupling between SAWs and the acoustic magnon mode in greater detail. Figure 2 shows absorption by the acoustic mode as a function of external magnetic field orientation in the plane of Sample 2, where the vertical axis (0\({}^{\circ}\) - 180\({}^{\circ}\) line) is the SAW propagation axis. At \(T=4.2\) K, we observe four lobes of strong absorption, seen only when the external magnetic field is applied at angles smaller than 45\({}^{\circ}\) to the SAW propagation axis. As the temperature is increased to \(T=12\) K, they migrate to new positions which are more rotationally symmetric. By \(T=14\) K, close to the Neel temperature, the absorption has disappeared, in agreement with Sample 1.
Figure 2: **Acoustic magnon mode dependence on external field angle and temperature** (a-f) Polar plots of SAW absorption by the acoustic magnon mode in Sample 2, at various sample temperatures, as a function of applied external magnetic field orientation in the sample plane. Asymmetry at lower temperatures arises due to very weak uniaxial anisotropy \(\sim 2\) mT. Upon heating, the expected symmetric response of the magnetoelastic interaction is recovered. Absorption disappears at \(T=14\) K, close to the NΓ©el temperature.
To fully understand the results in Fig. 2, we must consider the interplay between antiferromagnetic resonance and magnon-SAW coupling. Each has its own dependence on external magnetic field orientation, with the latter defining the window through which we can observe the former. First, we focus on the magnetic response of CrCl\({}_{3}\) itself. Close inspection of Fig. 2a reveals that not only the magnitude of absorption but also the resonance field depends strongly on the magnetic field angle \(\phi\) at \(T=4.2\) K, indicating the presence of magnetic uniaxial anisotropy. To reproduce this observation, we calculate the acoustic mode resonance frequency as a function of \(\phi\) for a model that includes an in-plane uniaxial anisotropy field \(\mu_{0}H_{u}\approx 2.1\) mT, oriented approximately along the line \(171^{\circ}\) - \(351^{\circ}\). Although this anisotropy is itself very weak, it induces a sizable zero-field magnon frequency gap of \(\gamma\mu_{0}\sqrt{2H_{u}(2H_{E}+M_{s}+H_{u})}\sim 1.2\) GHz, above the SAW frequency of 1.1 GHz. As can be seen at \(T=4\) K in Fig. 3a, for \(30^{\circ}\lesssim\phi\lesssim 130^{\circ}\) and \(210^{\circ}\lesssim\phi\lesssim 310^{\circ}\), the frequency monotonically increases as \(H\) increases so that the acoustic magnon never becomes resonant with the SAWs. Only in the remaining angular ranges are acoustic spin-wave resonances observable, which correspond to the lobes in Fig. 2a.
According to the well-known formula \(H_{u}(T)\propto M_{s}(T)^{2}\)[30], the uniaxial anisotropy tends to zero as \(T\) increases towards the Neel point. We find it reduces to \(\approx 0.6\) mT at \(T=12\) K, lowering the zero-field magnon frequency below the SAW frequency, and thereby allowing acoustic magnon resonance at 1.1 GHz for all angles at around \(25-30\) mT (Fig. 3a). While uniaxial anisotropy of \(\sim 1\) mT has been observed before in CrCl\({}_{3}\)[31], the origin remains ambiguous. Here, we tentatively ascribe it to negative thermal expansion in CrCl\({}_{3}\), in which the \(a\)-axis lattice constant gradually increases upon cooling the crystal below \(T=50\) K, owing to magnon-induced expansion of the lattice [32; 33]. Our results hint at the applicability of SAWs to further investigate this poorly understood effect, or even to exploit it for highly sensitive static strain or force sensing applications.
To complete the picture, we now consider the magnon-SAW coupling dependence on external field orientation, which has proven key to accessing various parameters in ferromagnetic materials [5]. Given that, unlike ferromagnets, the antiferromagnetic sublattice magnetizations do not simply align with the external field, we model the magnetoelastic coupling in CrCl\({}_{3}\) by a free energy density \(F_{\rm me}=b\epsilon_{ab}(n_{a}^{A}n_{b}^{A}+n_{a}^{B}n_{b}^{B})+2c\epsilon_{ab}n_{a}^{A}n_{b}^{B}\). Here \(\epsilon_{ab}\) is the strain tensor, \(n_{a}^{A}\), \(n_{a}^{B}\) are components of the normalized sublattice magnetization vectors, and Einstein's summation convention is assumed. \(b\) is an intrasublattice magnetoelastic coefficient, a direct generalization of the ferromagnetic magnetoelasticity. \(c\) is an intersublattice coefficient, unique to antiferromagnets, which has been studied in the literature [34]. Let \(\phi_{A},\phi_{B}\) be the angles between the SAW propagation direction and the respective sublattice magnetizations. The corresponding magnon-SAW couplings \(g_{A},g_{B}\) exhibit the following angle dependence (see SI):
\[g_{A}\propto b\sin\phi_{A}\cos\phi_{A}+c\sin\phi_{A}\cos\phi_{B}, \tag{2}\] \[g_{B}\propto b\sin\phi_{B}\cos\phi_{B}+c\sin\phi_{B}\cos\phi_{A}. \tag{3}\]
The acoustic and optical modes see \(g_{A}\pm g_{B}\) respectively, reflecting the phase relations between the two sublattices. For acoustic mode resonance, \(H\) is small so that \(\phi_{B}\approx\phi_{A}+\pi\approx\phi\pm\pi/2\), yielding \(g_{A}+g_{B}\propto\sin 2\phi\). This acoustic magnon-SAW coupling filters the nominally observable resonance frequencies shown in Fig. 3a to give the cumulative responses shown in Fig. 3b, c, in which vanishing absorption can be seen at \(\phi=0^{\circ},90^{\circ},180^{\circ},270^{\circ}\). The agreement with Fig. 2a, e is satisfactory.
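The quoted low-field limit follows directly from Eqs. (2) and (3) and can be checked numerically; the short sketch below (purely illustrative, with arbitrary coefficients and normalization) sets \(\phi_{A}=\phi+\pi/2\) and \(\phi_{B}=\phi_{A}+\pi\) and confirms the \(\sin 2\phi\) dependence of the acoustic coupling.

```python
import numpy as np

def g(phi_self, phi_other, b, c):
    """Angle dependence of the magnon-SAW coupling, Eqs. (2)-(3), up to a prefactor."""
    return b * np.sin(phi_self) * np.cos(phi_self) + c * np.sin(phi_self) * np.cos(phi_other)

b, c = 1.0, 0.7                        # arbitrary illustrative magnetoelastic coefficients
phi = np.linspace(0, 2 * np.pi, 19)    # field angle with respect to the SAW propagation axis
phi_A = phi + np.pi / 2                # low-field, nearly antiparallel sublattices
phi_B = phi_A + np.pi

acoustic = g(phi_A, phi_B, b, c) + g(phi_B, phi_A, b, c)
# The acoustic combination reduces to -(b - c) sin(2*phi): it vanishes at 0, 90, 180 and
# 270 degrees, i.e. along and perpendicular to the SAW axis, as seen in Figs. 2 and 3.
print(np.allclose(acoustic, -(b - c) * np.sin(2 * phi), atol=1e-12))   # True
```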
Next, we consider optical magnon-phonon coupling. Figures 4a, b show the optical mode absorption in Sample 2, seen to some extent at every angle of applied field. This isotropic behaviour, in stark contrast to that displayed by the acoustic mode, arises because
Figure 3: **Theoretical model for acoustic mode** (a) Calculated acoustic mode frequency dependence on external magnetic field orientation \(\phi\). (b, c) Simulated polar plots of SAW absorption by the acoustic magnon as a function of external magnetic field orientation, using parameters for Sample 2. The striking difference in response is largely attributed to a change in anisotropy of only \(\sim 1\) mT.
the two canted spin sublattices adopt an almost parallel configuration at the relatively high field strength needed to reach resonance, i.e. \(\phi_{A}\approx\phi+\delta,\phi_{B}\approx\phi-\delta,|\delta|\ll\pi\). Equations (2) and (3) therefore yield \(g_{A}-g_{B}\propto(b\cos 2\phi+c)\sin 2\delta\). We note that the intra-sublattice coupling \(b\) alone gives a vanishing absorption at \(\phi=45^{\circ}\), inconsistent with both Sample 1 (Fig. 1c) and Sample 2 (Fig. 4a, b). Hence we take \(b=0,c\sim 10^{6}\) J/m\({}^{3}\) with the aforementioned temperature dependent \(H_{E},M_{s},H_{u}\) to generate Figs. 4c, d, which show the simulated optical mode absorption at \(T=4\) K and 13 K, respectively. The agreement with experiment is satisfactory at \(T=4.2\) K, and reasonable at \(T=13\) K, given the simplifications to the model (such as an absence of broadening/disorder) and the expected breakdown of the mean-field approximation close to the phase transition.
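The near-saturated limit used for the optical mode can be verified in the same way; the sketch below (again illustrative only) takes \(\phi_{A}=\phi+\delta\) and \(\phi_{B}=\phi-\delta\) with a small canting angle \(\delta\) and recovers \(g_{A}-g_{B}=(b\cos 2\phi+c)\sin 2\delta\), which becomes independent of \(\phi\) for the intersublattice-only choice \(b=0\).

```python
import numpy as np

def g(phi_self, phi_other, b, c):
    """Angle dependence of the magnon-SAW coupling, Eqs. (2)-(3), up to a prefactor."""
    return b * np.sin(phi_self) * np.cos(phi_self) + c * np.sin(phi_self) * np.cos(phi_other)

phi = np.linspace(0, 2 * np.pi, 181)    # field angle with respect to the SAW propagation axis
delta = 0.05                            # small canting angle near saturation (radians)
for b, c in [(1.0, 0.0), (0.0, 1.0)]:   # intra-sublattice only vs inter-sublattice only
    optical = g(phi + delta, phi - delta, b, c) - g(phi - delta, phi + delta, b, c)
    expected = (b * np.cos(2 * phi) + c) * np.sin(2 * delta)
    print(b, c, np.allclose(optical, expected))   # True in both cases
# The b-only term vanishes at 45 degrees, whereas the c-only term is independent of phi,
# consistent with the isotropic optical-mode absorption in Fig. 4.
```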
In conclusion, we demonstrate GHz-range SAW-driven magnon-phonon coupling in a crystalline antiferromagnet. This demonstration paves the way towards acoustically driven spintronic devices based on designer van der Waals heterostructures, which may combine antiferromagnetic, semiconducting, metallic and insulating layers to realise diverse outcomes in spin conversion [28, 35]. Moreover, it has been proposed that monolayer CrCl\({}_{3}\) exhibits true 2D XY-ferromagnetism, allowing study of the Berezinskii-Kosterlitz-Thouless phase transition [36], and predicted to play host to topological spin textures [37]. Creation and manipulation of such excitations by SAWs is a tantalising prospect, as has been recently achieved in conventional ferromagnetic systems [38].
## Methods
### Sample fabrication
First, IDTs (35 nm aluminium) and electrodes (5 nm titanium / 200 nm gold) are deposited onto 128\({}^{\circ}\) Y-cut LiNbO\({}_{3}\) chips. The IDT fingers are 400 nm wide with 1.2 \(\upmu\)m spacing, giving a SAW wavelength 3.2 \(\upmu\)m and frequency 1.1 GHz. The distance between IDT1 and IDT2 is approximately 600 \(\upmu\)m. Next, bulk CrCl\({}_{3}\) is exfoliated onto polydimethylsiloxane (PDMS) sheets using sticky tape (Nitto). Flakes with uniform thickness are transferred onto LiNbO\({}_{3}\) between IDT1 and IDT2 using a conventional PDMS dry stamping technique. Bulk CrCl\({}_{3}\) crystals are obtained from the commercial suppliers 2D Semiconductors (USA) and HQ Graphene (Netherlands).
### Acoustic antiferromagnetic resonance measurements
The LiNbO\({}_{3}\) chip is mounted on a radio-frequency compatible chip carrier and loaded into either a Montana closed-cycle cryostat with external electromagnet in 1 axis (Sample 1), or a helium bath cryostat with superconducting magnet coils in 2 axes (Sample 2). The former has a base temperature around 5 K and the latter around 4.2 K. Both cryostats allow variable sample temperature up to at least 30 K. Coaxial cables are used to connect the chip carrier to a vector network analyzer which is capable of measuring SAW transmission at 1.1 GHz. A time gating function is applied to the signal in order to filter out electromagnetic noise and retrieve the signals S21 and S12 at longer timescales 150 - 250 ns.
## Acknowledgements
The authors would like to thank Joseph Barker, Olena Gomonay, Hidekazu Kurebayashi, and Sean Stansill for helpful comments. TPL acknowledges support from the JSPS postdoctoral fellowships for research in Japan scheme, and KY from JST PRESTO Grant No. JPMJPR20LB, Japan and JSPS KAKENHI (No. 21K13886). JP is financially supported by Grants-in-Aid for Scientific Research (S) (No. 19H05629) and JSPS KAKENHI (20H01865), from MEXT, Japan. RSD is supported by Grants-in-Aid for Scientific Research (S) (No. 19H05610), from MEXT, Japan. Y.H.
Figure 4: **Optical magnon mode dependence on external field angle and temperature** (a,b) Experimental and (c,d) simulated polar plots of SAW absorption by the optical mode in Sample 2 as a function of external magnetic field orientation at \(T=4\) K (experimental base temperature \(T=4.2\) K) and 13 K.
is supported by the RIKEN Junior Research Associate Program. SM is financially supported by JST CREST Grant (No.JPMJCR19J4, No.JPMJCR1874 and No.JPMJCR20C1) and JSPS KAKENHI (No.17H02927 and No.20H10865) from MEXT, Japan. YO is financially supported by Grants-in-Aid for Scientific Research (S) (No. 19H05629).
## Author Contributions
T. P. L., J. P. and R. S. D. performed experiments. T. P. L., J. P. and Y. H. fabricated samples. All authors contributed to data interpretation and analysis. K. Y. and S. M. developed the theoretical model. T. P. L. and K. Y. wrote the paper. J. P. and Y. O. initiated and supervised the project.
|
2303.17225 | FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation | Recently, open-vocabulary learning has emerged to accomplish segmentation for
arbitrary categories of text-based descriptions, which popularizes the
segmentation system to more general-purpose application scenarios. However,
existing methods devote to designing specialized architectures or parameters
for specific segmentation tasks. These customized design paradigms lead to
fragmentation between various segmentation tasks, thus hindering the uniformity
of segmentation models. Hence in this paper, we propose FreeSeg, a generic
framework to accomplish Unified, Universal and Open-Vocabulary Image
Segmentation. FreeSeg optimizes an all-in-one network via one-shot training and
employs the same architecture and parameters to handle diverse segmentation
tasks seamlessly in the inference procedure. Additionally, adaptive prompt
learning facilitates the unified model to capture task-aware and
category-sensitive concepts, improving model robustness in multi-task and
varied scenarios. Extensive experimental results demonstrate that FreeSeg
establishes new state-of-the-art results in performance and generalization on
three segmentation tasks, which outperforms the best task-specific
architectures by a large margin: 5.5% mIoU on semantic segmentation, 17.6% mAP
on instance segmentation, 20.1% PQ on panoptic segmentation for the unseen
class on COCO. | Jie Qin, Jie Wu, Pengxiang Yan, Ming Li, Ren Yuxi, Xuefeng Xiao, Yitong Wang, Rui Wang, Shilei Wen, Xin Pan, Xingang Wang | 2023-03-30T08:42:49Z | http://arxiv.org/abs/2303.17225v1 | # FreeSeg: Unified, Universal and Open-Vocabulary Image Segmentation
###### Abstract
Recently, open-vocabulary learning has emerged to accomplish segmentation for arbitrary categories of text-based descriptions, which popularizes the segmentation system to more general-purpose application scenarios. However, existing methods devote to designing specialized architectures or parameters for specific segmentation tasks. These customized design paradigms lead to fragmentation between various segmentation tasks, thus hindering the uniformity of segmentation models. Hence in this paper, we propose **FreeSeg**, a generic framework to accomplish **Unified**, **Universal** and **Open-Vocabulary** Image Segmentation. FreeSeg optimizes an all-in-one network via one-shot training and employs the same architecture and parameters to handle diverse segmentation tasks seamlessly in the inference procedure. Additionally, adaptive prompt learning facilitates the unified model to capture task-aware and category-sensitive concepts, improving model robustness in multi-task and varied scenarios. Extensive experimental results demonstrate that FreeSeg establishes new state-of-the-art results in performance and generalization on three segmentation tasks, which outperforms the best task-specific architectures by a large margin: **5.5%** mIoU on semantic segmentation_, **17.6%** mAP on instance segmentation_, **20.1%** PQ on panoptic segmentation for the unseen class on COCO. Project page: [https://FreeSeg.github.io](https://FreeSeg.github.io).
## 1 Introduction
Image segmentation has been one of the most widely researched topics in computer vision, aiming to simultaneously group and categorize object pixels in the image. In the recent literature, the image segmentation community has witnessed tremendous success at cost of large-scale datasets [1, 3, 30], where objects are exhaustively annotated with pixel-level masks and category labels. However, due to the time-consuming and laborious annotations, the template categories sizes of existing segmentation tasks are still limited to an order of \(10\) or \(10^{2}\), which is in orders of magnitude much smaller than the vocabulary that humans use to describe the real world. Such learning objective binds the segmentors' scalability into a limited cognitive space, and it becomes a critical bottleneck when this system is popularized to handle richer and more generalized semantics.
As a viable path to handle categories of custom specification beyond the training dataset, open-vocabulary learning leverages large-scale visual-language pre-training models (such as CLIP [26], ALIGN [14]) to calculate matching similarity between visual concept and text corpus. Recently, a series of segmentation-based open-vocabulary studies [1, 37, 38] have emerged to design task-specific architectures and parameters for individual segmentation task. For example, ZSSeg [38] leverages the off-the-shelf pre-trained CLIP model and achieves competitive performance in open vocabulary semantic segmentation. However, current works suffer from two obvious shortcomings when popularized to general segmentation scenes: i) _task-insensitive_: they can not capture task-aware characteristics and be effectively generalized to diverse segmentation tasks; ii) _resource-unfriendly_: the model needs to be trained from scratch when switching tasks, and diverse tasks require deploying multiple customized models. Although MaskFormer [6] succeeds in accomplishing multiple segmentation tasks into one compact system, it still needs to train a customized model for each task and it is not designed for open-vocabulary tasks. These observations motivate us to raise a question: _how to design a unified open-vocabulary framework to accomplish universal segmentation tasks?_
To address the above question, As shown in Fig.1, we propose **FreeSeg**, a novel framework to accomplish **Unified**, **Universal** and **Open-Vocabulary** Image Segmentation. In FreeSeg, our goals are mainly three-fold: i) Unified: FreeSeg designs a unified (all-in-one) network that employs the same architecture and inference parameters to handle multiple segmentation tasks; ii) Universal: FreeSeg adapts to various tasks, namely semantic, instance and panoptic segmentation; iii) Open-Vocabulary: FreeSeg is capable of generalizing to arbitrary segmentation categories.
In general, FreeSeg advocates a two-stage segmentation framework, with the first stage extracting universal mask proposals and the second stage accomplishing zero-shot classification on these masks. Specifically, FreeSeg conducts a one-shot training procedure to optimize a unified segmentation model with multi-task labels, which helps to capture task-special characteristics for universal segmentation. An adaptive prompt learning scheme is introduced to encode task-aware and category-sensitive concepts into the text abstraction. It enables FreeSeg to flexibly accomplish different segmentation tasks of arbitrary categories, handling all tasks and categories in one model. To sum up, FreeSeg is a _task-flexible, category-arbitrary and performance-excellent_ framework, the main contributions of our work are listed as follows:
* To the best of our knowledge, we offer the first attempt to tackle a novel computer vision task, namely, unified open-vocabulary segmentation. A universal framework FreeSeg is proposed to employ an all-in-one model with the same architecture and inference parameters to accomplish open-vocabulary semantic, instance, and panoptic segmentation.
* Adaptive prompt learning explicitly encodes multi-granularity concepts (task, category) into compact textual abstraction and helps the unified model generalize to arbitrary text descriptions. FreeSeg further designs the semantic context interaction and test time prompt tuning mechanism to improve cross-model alignment and generalization for unseen classes.
* We evaluate FreeSeg on three image segmentation tasks (semantic, instance, and panoptic segmentation) using COCO, ADE20K and VOC 2012. As shown in Fig.1 (c), extensive experiments demonstrate that FreeSeg establishes new state-of-the-art results in terms of performance and generalization. In addition to reducing the research effort by at least three times, it outperforms the best-specialized architectures and is more feasible for multi-task deployment.
## 2 Related Work
### Open Vocabulary Segmentation
Deep learning [18, 19, 34, 35, 36, 39] and image segmentation have recently witnessed tremendous success [3, 4, 6, 24, 25, 30, 40]. Open vocabulary segmentation aims to segment target categories that are not accessible during the training procedure. The existing approaches can be divided into two groups: mapping visual features into a semantic space [1, 11, 37] and cross-modal alignment with pre-trained models [7, 17, 38]. For the mapping aspect, SPNet [37] encodes visual features to the semantic embedding space and then projects each pixel feature to predict probabilistic outcomes through a fixed semantic word encoding matrix. ZS3Net [1] generates the pixel-level features of unseen classes in the semantic embedding space and adopts the generated features to supervise a visual segmentation model. STRICT [23] introduces a self-training technique into SPNet to improve the segmentation performance of unseen classes. Cross-modal alignment employs the robust zero-shot capabilities of pre-trained cross-modal models such as CLIP [26] to conduct open vocabulary segmentation tasks. LSeg [17] learns a CNN model to compute per-pixel image features to match with the text embeddings produced by the pre-trained text model. ZegFormer [7] and ZSSeg [38] leverage the visual model to generate class-agnostic masks, and use the pre-trained text encoder to retrieve the unseen class masks. XPM [13] utilizes region-level features to match CLIP-based text embeddings to accomplish open vocabulary instance segmentation. MaskCLIP [8] attempts to establish relationships between the class-agnostic masks in the CLIP visual encoder to complete open vocabulary panoptic segmentation.
### Universal Segmentation Architecture
The goal of a universal segmentation framework is to employ the same architecture for arbitrary segmentation tasks, so current universal segmentation approaches [5, 6, 41] regularly constrain multiple tasks (_semantic_, _instance_, _panoptic_) to a unified training paradigm. MaskFormer [6] unifies the segmentation tasks into a classification problem for masks, _i.e_., outputting binary masks and the corresponding categories, which achieves state-of-the-art performance in both semantic and panoptic segmentation tasks. K-Net [41] standardizes instance segmentation into semantic segmentation via learnable kernels to accomplish the semantic, instance, and panoptic segmentation tasks simultaneously. Mask2Former [5] introduces a masked attention mechanism into MaskFormer to improve the generalization of the unified model and the performance of each task. However, these unified frameworks still require training a separate model for each task to achieve the best performance. Our proposed FreeSeg conducts one-shot training to optimize an all-in-one model that accomplishes multiple segmentation tasks.
### Prompt Learning
Prompt learning has achieved a remarkable leap in the field of NLP [12, 16, 33], and has rapidly been extended to vision and vision-language models [28, 45]. CoOp [45] learns continuous prompts from downstream data to adapt the pre-trained vision-language model. DenseCLIP [28] finetunes the pre-trained text encoder with the given prompt templates to perform text and visual feature matching for downstream dense prediction tasks such as detection and segmentation. For open vocabulary segmentation tasks [7, 38], prompt templates are generated from the given category names, and then are encoded into text embeddings for matching the unseen classes.
## 3 Methodology
### FreeSeg Framework
The proposed unified open-vocabulary segmentation aims to optimize an all-in-one model to obtain semantic, instance, and panoptic segmentation results on arbitrary categories. To address this novel task, we propose a generic framework to accomplish unified and universal open vocabulary segmentation in this paper, termed FreeSeg. FreeSeg advocates a two-stage framework, with the first stage extracting universal mask proposals and the second stage leveraging CLIP to perform zero-shot classification on the masks generated in the first stage. The whole framework of FreeSeg is illustrated in Fig. 2.
**Training.** The training data in the first stage contains images \(I\), the seen category set \(C_{seen}\), task names \(T_{train}\) and multi-task labels \(M^{gt}\). The training procedure only accesses the seen categories \(C_{seen}\) and the corresponding labels. The mask proposal extractor encodes the image into visual concepts \(F_{v}\in\mathcal{R}^{N\times D}\) and class-agnostic masks \(M\in\mathcal{R}^{N\times H\times W}\), where \(N\) and \(D\) denote the number of queries and the feature dimension. To encapsulate multiple learned tasks in a unified model, we leverage three task-specific labels, _i.e_., \(M^{gt}\in(M^{gt}_{sem},M^{gt}_{ins},M^{gt}_{pan})\), to selectively supervise the mask proposal extractor with the mask loss:
\[\mathcal{L}_{mask}=\mathcal{L}_{F}(M,M^{gt})+\mathcal{L}_{D}(M,M^{gt}), \tag{1}\]
where \(\mathcal{L}_{F}\) denotes the Focal [20] loss and \(\mathcal{L}_{D}\) is the Dice [22] loss. Simultaneously optimizing all tasks is often difficult due to gradient conflicts across tasks during training; thus, only one task label, randomly selected from \((M^{gt}_{sem},M^{gt}_{ins},M^{gt}_{pan})\), is used for supervision in each iteration.
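A minimal sketch of this per-iteration task sampling and of the mask loss in Eq. (1) is given below; it is illustrative only: the focal and dice forms shown are common choices rather than the exact released implementation, and the bipartite matching between predicted and ground-truth masks is omitted.

```python
import random
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1.0):
    probs = logits.sigmoid().flatten(1)
    targets = targets.flatten(1)
    num = 2 * (probs * targets).sum(-1) + eps
    den = probs.sum(-1) + targets.sum(-1) + eps
    return (1 - num / den).mean()

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                                  # probability of the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

def mask_loss(pred_masks, gt_masks):
    # Eq. (1): L_mask = focal loss + dice loss on matched mask pairs
    return focal_loss(pred_masks, gt_masks) + dice_loss(pred_masks, gt_masks)

# one task-specific label set is randomly chosen per iteration to avoid gradient conflicts
task = random.choice(["semantic", "instance", "panoptic"])
pred_masks = torch.randn(8, 640, 640)                     # toy example: 8 matched mask logits
gt_masks = (torch.rand(8, 640, 640) > 0.5).float()
loss = mask_loss(pred_masks, gt_masks)
```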
To help FreeSeg handle task and category characteristics, we design a novel adaptive prompt learning scheme that explicitly embeds task and category concepts into joint text embeddings \(F_{t}\in\mathcal{R}^{C\times D}\) via a pre-trained CLIP-based text encoder, where \(C\) denotes the number of categories. Cross-modal classification supervision is set up to enable FreeSeg to classify generated masks according to arbitrary text. Specifically, the visual concepts \(F_{v}\) are leveraged to compute the similarity matching map with the text embeddings \(F_{t}\). The cosine similarity score \(\mathcal{S}\in\mathcal{R}^{N\times C}\) between pairs of \(F^{i}_{v}\) and \(F^{j}_{t}\) is computed as:
\[S(i,j)=cos(F^{i}_{v},F^{j}_{t})=\frac{F^{i}_{v}\cdot F^{j}_{t}}{\left\|F^{i}_{v }\right\|\left\|F^{j}_{t}\right\|}, \tag{2}\]
where \(i\in[1,N]\), \(j\in[1,C]\). The obtained similarity matching map indicates the probability of the predicted category for all class-agnostic masks, which is supervised by the class labels with the cross-entropy loss \(\mathcal{L}_{cla}\). The total training loss is formulated as:
\[\mathcal{L}=\mathcal{L}_{cla}+\mathcal{L}_{mask}, \tag{3}\]
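Eqs. (2)-(3) amount to a cosine-similarity logit matrix between the \(N\) visual concepts and the \(C\) text embeddings followed by a cross-entropy loss; a minimal PyTorch sketch (a simplification that omits details such as temperature scaling and background/"no-object" handling) is:

```python
import torch
import torch.nn.functional as F

def classification_loss(F_v, F_t, labels):
    """F_v: (N, D) visual concepts, F_t: (C, D) text embeddings, labels: (N,) class ids."""
    S = F.normalize(F_v, dim=-1) @ F.normalize(F_t, dim=-1).t()   # Eq. (2): cosine similarities, (N, C)
    return F.cross_entropy(S, labels)                             # L_cla on seen-class labels

N, C, D = 100, 156, 512                  # queries, seen classes, embedding dim (toy sizes)
F_v, F_t = torch.randn(N, D), torch.randn(C, D)
labels = torch.randint(0, C, (N,))
L_cla = classification_loss(F_v, F_t, labels)
# total training loss of Eq. (3): L = L_cla + L_mask, with L_mask as sketched above
```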
**Testing.** In the testing phase, the trained mask proposal extractor generates a set of binary masks with textual guidance and leverages the pre-trained CLIP visual encoder to obtain mask-level visual concepts. FreeSeg calculates the similarity between mask representation and compact text embedding and outputs task-oriented segmentation results according to the adaptive task prompt. With the aid of adaptive prompt learning, FreeSeg can handle arbitrary tasks and categories. The test category set \(C_{test}\) consists of seen classes \(C_{seen}\) and additional unseen classes \(C_{unseen}\).
### Adaptive Prompt Learning
To encode arbitrary tasks and categories into compact textual abstraction, we propose an adaptive prompt learning module containing the adaptive task prompt \(P_{t}\) and the adaptive class prompt \(P_{c}\). A fixed prompt puts all category and task names into the same templates, which is not the optimal representation for task-category pair contexts, whereas adaptive prompt learning turns the task and category texts into a set of learnable vectors, which are concatenated as text embeddings to facilitate model training.
**Adaptive Task Prompt**. The adaptive task prompt promotes capturing task-specific characteristics, encapsulates multiple learned tasks in a unified framework, and effectively disentangles the parameter spaces to avoid training conflicts between tasks. Specifically, the adaptive task prompt \(P_{t}\) is generated according to the template {\(\circ\circ\)... \(t\)... \(\circ\) }, where \(\circ\) denotes the learnable vectors. \(t\) is the corresponding task name in a task set \(T\), which contains "_semantic segmentation._", "_instance segmentation._", or "_panoptic segmentation._". Then the task prompts are embedded by the pre-trained CLIP text encoder \(\Psi\):
\[E_{t}=\Psi(P_{t}(t)),t\in T, \tag{4}\]
where \(E_{t}\) denotes the task embeddings.
**Adaptive Class Prompt**. An adaptive class prompt is introduced to popularize FreeSeg to generalize to broader unseen categories and improve open-domain performance. Given the semantic categories \(C_{seen}\) involved in training, the class prompts \(P_{c}\) are obtained by the template {\(\circ\circ\)... \(c\)... \(\circ\) }, where \(c\) is the filled class names. The adaptive class prompt \(P_{c}\) is embedded to generate the class text embeddings \(E_{c}\):
\[E_{c}=\Psi(P_{c}(c)),c\in C_{seen}, \tag{5}\]
To model a joint task-category textual space, the class text embeddings \(E_{c}\) and the task text embeddings \(E_{t}\) are fused to get the multi-granularity embeddings \(F_{t}\):
\[F_{t}=Cat(E_{c},E_{t}), \tag{6}\]
where \(Cat\) denotes the concatenation operation. It is worth noting that the input category can be arbitrary, so \(F_{t}\) can seamlessly adapt to unseen categories for open vocabulary segmentation.
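A minimal sketch of the adaptive prompt construction is given below. It is a simplified stand-in rather than the released implementation: the learnable context is placed entirely before the name tokens, the frozen CLIP text encoder is replaced by a placeholder, and the class and task embeddings are fused by a plain feature concatenation, which is only one possible reading of Eq. (6); the context sizes 8\(\times\)512 and 16\(\times\)512 follow Sec. 4.2.

```python
import torch
import torch.nn as nn

class AdaptivePrompt(nn.Module):
    """Learnable context vectors wrapped around a task / class name embedding."""
    def __init__(self, n_ctx, dim=512):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)   # learnable prompt vectors

    def forward(self, name_tokens, text_encoder):
        # name_tokens: (L, dim) token embeddings of the task or category name
        prompt = torch.cat([self.ctx, name_tokens], dim=0)        # { o o ... name ... }
        return text_encoder(prompt)                               # (dim,) text embedding

task_prompt = AdaptivePrompt(n_ctx=8)       # 8x512 task context
class_prompt = AdaptivePrompt(n_ctx=16)     # 16x512 class context
encoder = lambda x: x.mean(dim=0)           # placeholder for the frozen CLIP text encoder

E_t = task_prompt(torch.randn(3, 512), encoder)                                       # Eq. (4)
E_c = torch.stack([class_prompt(torch.randn(2, 512), encoder) for _ in range(156)])   # Eq. (5)
F_t = torch.cat([E_c, E_t.expand(E_c.size(0), -1)], dim=-1)                           # Eq. (6), one reading
```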
### Semantic Context Interaction
The vanilla visual concepts ignore task and category information that can provide more reliable cues for comprehensive inference. To address this issue, we creatively introduce a _semantic context interaction module_ to improve the cross-modal feature matching and alignment by effectively aggregating adaptive textual embedding into visual concepts. Specifically, the semantic context interaction module employs the cross-attention module to model the correlations between text embeddings and multiple-scale visual features.
\[Attn(Q^{z},K,V)=softmax(\frac{Q^{z}K^{T}}{\sqrt{d_{k}}})V^{T}, \tag{7}\]
\[Q^{z}=\phi_{q}(F_{v}^{z}),\ \ \ K=\phi_{k}(F_{t}),\ \ \ V=\phi_{v}(F_{t}), \tag{8}\]
where \(F_{v}^{z}\) denotes \(z\)-layer visual feature from decoder in mask proposal extractor. \(Q^{z},K,V\) denote the query, key, and value embeddings generated by the projection layers \(\phi_{q},\phi_{k},\phi_{v}\). \(\sqrt{d_{k}}\) represents the scaling factor. Then the attention relationship is utilized to enhance the visual features:
\[\hat{F_{v}^{z}}=\mathcal{H}\{Attn[\phi_{q}(F_{v}^{z}),\phi_{k}(F_{t}),\phi_{v} (F_{t})]\}, \tag{9}\]
where \(\mathcal{H}\) denotes the output projection layer. The enhanced visual feature \(\hat{F_{v}^{z}}\) helps emphasize the visual features relevant to the given text classes.
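Eqs. (7)-(9) describe a standard cross-attention from visual features to text embeddings; a minimal single-head sketch (a simplified reading with arbitrary feature dimensions, not the exact implementation) is:

```python
import torch
import torch.nn as nn

class SemanticContextInteraction(nn.Module):
    """Single-head cross-attention: visual features attend to text embeddings, Eqs. (7)-(9)."""
    def __init__(self, dim=256, text_dim=512):
        super().__init__()
        self.q = nn.Linear(dim, dim)          # phi_q
        self.k = nn.Linear(text_dim, dim)     # phi_k
        self.v = nn.Linear(text_dim, dim)     # phi_v
        self.out = nn.Linear(dim, dim)        # output projection H

    def forward(self, F_v, F_t):
        # F_v: (HW, dim) visual features of one decoder layer; F_t: (C, text_dim) text embeddings
        q, k, v = self.q(F_v), self.k(F_t), self.v(F_t)
        attn = torch.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)    # Eq. (7)
        return self.out(attn @ v)                                      # Eq. (9): enhanced visual feature

layer = SemanticContextInteraction()
F_v_hat = layer(torch.randn(160 * 160, 256), torch.randn(156, 512))
```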
Figure 2: Overview of our two-stage FreeSeg framework. i) one-shot training: optimizes an all-in-one segmentation model via multi-task supervision to generate universal mask proposals; ii) Multi-task inference: leverages pre-trained CLIP to classify mask proposals according to adaptive task and class prompt.
### Test Time Prompt Tuning
To improve the cross-modal alignment of unseen categories, we leverage test-time adaptation (TTA) [15, 31, 32] to refine the adaptive class prompt during testing, which we term _Test Time Prompt Tuning_.
In the testing phase, we extract the cosine similarity scores \(S_{u}\) of the unseen classes and calculate the corresponding entropy:
\[entro=-\frac{1}{N_{u}}\sum_{i=1}^{N_{u}}s_{i}log(s_{i})\text{,} \tag{10}\]
where \(entro\) denotes the entropy value of each sample, \(N_{u}\) is the number of unseen classes and \(s_{i}\) is the score of the \(i^{th}\) class in \(S_{u}\). Then we select the high-confidence queries according to the entropy, \(S_{u}^{*}=S_{u}[entro<\tau]\), where \(\tau\) is the confidence threshold; a low entropy value indicates a high confidence level for the prediction. We calculate the entropy loss \(\mathcal{L}_{ent}\) to optimize the parameters of the adaptive class prompt:
\[\mathcal{L}_{ent}=-\frac{1}{N_{u}K}\sum_{i=1}^{N_{u}}\sum_{j=1}^{K}s_{ij}log(s _{ij})\text{,} \tag{11}\]
where \(s_{ij}\) denotes the score of the \(i\)-th unseen class for the \(j\)-th selected query, and \(K\) is the number of queries in \(S_{u}^{*}\).
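A minimal sketch of Eqs. (10)-(11) follows. The optimizer, learning rate, threshold \(\tau\) and the toy scoring function are assumptions made for illustration; in the full method the unseen-class scores would be recomputed through the CLIP text encoder after each prompt update.

```python
import torch

def test_time_prompt_tuning(S_u, class_prompt_params, tau=0.5, lr=1e-3):
    """S_u: (N, N_u) unseen-class scores (normalized over unseen classes)."""
    entro = -(S_u * S_u.clamp_min(1e-8).log()).sum(-1) / S_u.size(-1)   # Eq. (10), per query
    S_sel = S_u[entro < tau]                                            # keep high-confidence queries
    if S_sel.numel() == 0:
        return
    L_ent = -(S_sel * S_sel.clamp_min(1e-8).log()).mean()               # Eq. (11)
    opt = torch.optim.SGD([class_prompt_params], lr=lr)                 # assumed optimizer
    opt.zero_grad(); L_ent.backward(); opt.step()

class_prompt_params = torch.randn(15, 512, requires_grad=True)   # toy stand-in for the class prompt
F_v = torch.randn(100, 512)                                      # mask-level visual concepts
S_u = torch.softmax(F_v @ class_prompt_params.t(), dim=-1)       # toy unseen-class scores
test_time_prompt_tuning(S_u, class_prompt_params)
```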
## 4 Experimental Results
### Datasets and Evaluation Metrics
#### 4.1.1 Datasets
**COCO.** The COCO dataset [21] provides multi-task ground-truth labels for the same images. We collect the semantic labels of COCO Stuff [2] and the panoptic labels of COCO and merge them to obtain the unified, category-wide annotations \(M^{gt}\). We follow [37, 38] to divide all 171 categories into 156 seen and 15 unseen classes for the open vocabulary segmentation task.
**ADE20K.** ADE20K [44] contains 20,000 training images and 2,000 validation images with 150 categories. We split 15 categories into unseen classes, and the remaining 135 are treated as seen/training classes.
**PASCAL VOC2012.** We conduct experiments on PASCAL VOC2012 [9] to accomplish semantic segmentation. Following [37, 38], we divide 20 foreground classes into 15 seen classes and 5 unseen classes to evaluate the effectiveness of the open vocabulary segmentation.
#### 4.1.2 Evaluation Metrics
**Semantic segmentation.** We follow [38, 7] and adopt the mean Intersection over Union (mIoU) to evaluate open vocabulary semantic segmentation performance separately for seen and unseen classes. We also employ the harmonic mean IoU (hIoU) of the seen and unseen classes to measure comprehensive performance.
**Instance segmentation.** We report the mean Average Prediction (mAP) of seen and unseen classes for open vocabulary instance segmentation.
**Panoptic segmentation.** For open vocabulary panoptic segmentation, we follow the setting of fully supervised panoptic segmentation and use the task-aware metrics (PQ, SQ, RQ) to evaluate panoptic segmentation quality.
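Since the hIoU above is simply the harmonic mean of the seen- and unseen-class mIoU, it can be reproduced with a one-line helper; the check below uses the "Ours" COCO numbers from Table 1.

```python
def hiou(miou_seen: float, miou_unseen: float) -> float:
    """Harmonic mean IoU of seen- and unseen-class mIoU (both in %)."""
    return 2.0 * miou_seen * miou_unseen / (miou_seen + miou_unseen)

print(round(hiou(42.2, 49.1), 1))  # 45.4, matching the reported 45.3 up to rounding
```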
### Implementation Details
**COCO.** We employ Mask2Former [5] as the mask proposal extractor and ResNet101 as the backbone. VIT-B/16 is adopted as the backbone of CLIP [26]. All experiments are conducted on 8\(\times\)A100 GPUs. We take the batch size of 32 per GPU and set the input image size as 640\(\times\)640. The optimizer is AdamW with a learning rate of 0.0002 and weight decay of 0.0002. The number of training iterations is 60,000. In addition, the learnable parameter size of the task prompt is 8\(\times\)512, and the class prompt is 16\(\times\)512. We follow the comparison methods [7, 23, 38] to employ the self-training technique for training.
**ADE20K and PASCAL VOC2012.** ADE20K dataset uses
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{COCO} & \multicolumn{3}{c|}{VOC2012} & \multicolumn{3}{c}{ADE20K} \\ & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hloU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hloU** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **hloU** \\ \hline Full Sup. & 42.9 & 54.3 & 47.9 & 92.3 & 89.5 & 91.1 & 46.1 & 41.5 & 44.0 \\ \hline SPNet [37] & 34.6 & 26.9 & 30.3 & 77.8 & 25.8 & 38.8 & - & - & - \\ ZS5 [1] & 34.9 & 10.6 & 16.2 & 78.0 & 21.2 & 33.3 & - & - & - \\ CaGNet [11] & 35.6 & 13.4 & 19.5 & 78.6 & 30.3 & 43.7 & - & - & - \\ STRICT [23] & 35.3 & 30.3 & 32.6 & 82.7 & 35.6 & 73.3 & - & - & - \\ ZegFormer [7] & 36.6 & 33.2 & 34.8 & 86.4 & 63.6 & 73.3 & - & - & - \\ ZSSeg [38] & 39.6 & 43.6 & 41.5 & 79.2 & 78.1 & 79.3 & 39.1 & 20.3 & 31.6 \\ Ours & **42.2** & **49.1** & **45.3** & **91.8** & **82.6** & **86.9** & **44.2** & **28.6** & **39.8** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with state-of-the-art methods in open vocabulary semantic segmentation. mIoU\({}^{s}\) and mIoU\({}^{u}\) denote the mIoU (%) of seen classes and unseen classes. The variant "Full Sup." denotes training FreeSeg with all seen and unseen classes.
512\(\times\)512 input image size and the number of iterations is set to 20,000 on PASCAL VOC2012. The remaining training settings on these two datasets are the same as COCO.
### Comparison to State-of-the-art Methods
Open Vocabulary Semantic SegmentationWe compare FreeSeg with current state-of-the-art open vocabulary semantic segmentation methods in Tab.1, including SPNet [37], ZS5 [1], CaGNet [11], STRICT [23], ZegFormer [7], ZSSeg [38]. Tab.1 can be summarized as the following observations: i) FreeSeg achieves 49.1% and 28.6% mIoU towards unseen classes on COCO and ADE20K, which surpasses the previous best method ZSSeg by +5.5% and +8.3%, respectively. It indicates that FreeSeg can adapt to more generalized scenarios. ii) We also report the result of the fully supervised baseline, denoted as "Full Sup.", which is trained on both seen and unseen classes. Remarkably, FreeSeg is only 0.7% and 5.2% worse than the fully supervised baseline "Full Sup." in seen and unseen classes on COCO, respectively. iii) To compare with competitive methods that are only trained on VOC benchmark, we also report the result of FreeSeg in the same setting as previous work. The experimental results show that FreeSeg obtains 91.8%/82.6% mIoU on the seen and unseen classes, which outperforms ZSSeg by 12.6%/4.5%. It further proves that FreeSeg is both robust and excellent for handling multi-tasks and single task.
Open Vocabulary Instance SegmentationAs shown in Tab.2, we compare the open vocabulary instance segmentation performance on COCO and ADE20K datasets, including ZSSeg [38], PL [27], BLC [42], and ZSI [43]. Since ZSSeg did not report the result on this task, we reproduce the results by training on the instance segmentation labels with the official code. The variant "CLIP" denotes the direct matching results with the pre-trained CLIP [26] text and visual encoder. FreeSeg achieves 20.6% mAP of unseen classes on COCO, which outperforms the best-performance method ZSI by +7.0% mAP. However, the mAP of the seen classes of ZSI is higher than FreeSeg. It is because ZSI [43] uses box-level supervision, which is more favorable for instance segmentation, while FreeSeg uses more general mask supervision for various segmentation tasks. In addition to COCO, FreeSeg also achieves promising results on ADE20k. For example, FreeSeg achieves 16.3% / 15.4% mAP on seen / unseen classes, which outperforms the baseline CLIP by +10.7% and +11.9% mAP.
Open Vocabulary Panoptic SegmentationSince few works study open vocabulary panoptic segmentation, we report the results of FreeSeg and the CLIP [26] baseline in Tab. 3. We also re-implement ZSSeg [38] on panoptic segmentation labels to accomplish this task. We observe that FreeSeg achieves 29.8% PQ, 79.2% SQ, and 37.6% RQ of the unseen classes, outperforming ZSSeg by 20.1%, 7.5%, and 25.4%, respectively. The main performance improvement comes from the unseen classes, indicating that this semantic segmentation-oriented method like ZSSeg is hard to generalize to other tasks, while our FreeSeg has noticeable generalization capability. On the ADE20K dataset, FreeSeg also achieves the best results with 25.4%, 75.2%, and 30.6% of unseen classes on PQ, SQ, and RQ, respectively. These above multi-task results prove the generalization ability of FreeSeg for unified open vocabulary segmentation tasks.
segmentation tasks. As shown in Tab. 5, FreeSeg achieves 24.6% mIoU semantic segmentation, 6.5% mAP instance segmentation, 16.3% PQ, 71.8% SQ, 21.6% RQ panoptic segmentation results on ADE20K with \(G_{coco}\), which outperforms the SOTA method MaskCLIP [8] with 0.9% mIoU, 0.6% mAP, 1.2% PQ, and 2.4% RQ, respectively. FreeSeg also obtains the best performance when validating \(G_{ade}\) on COCO datasets, achieving 21.7% mIoU, 6.6% mAP, 16.5% PQ, 72.0% SQ, and 21.6% RQ for semantic, instance, and panoptic segmentation, respectively. The generalization results on VOC2012 with \(G_{coco}\) and \(G_{ade}\) also verify the transferability of FreeSeg in Tab.6
### Ablation Study
Component Analysis. We conduct ablation studies to analyze the essential components of FreeSeg on the COCO dataset in Tab.4. Note that the self-training technique is not applied in these ablations. The primary vision model achieves an inferior performance of 4.9% mIoU and 0.7% mAP on the unseen classes without any text guidance. By introducing the adaptive class prompt, the performance is improved significantly on COCO, especially for the unseen classes. Then the adaptive task prompt and the semantic context interaction module are gradually inserted into the framework, which bring performance improvements of 2.8% and 1.3% mIoU on the COCO dataset, respectively. Furthermore, the experimental results show that test time prompt tuning also improves the unseen classes' performance during inference.
We also explore the effectiveness of the proposed modules on the open vocabulary instance and panoptic segmentation and obtain a highly consistent conclusion with semantic segmentation. It demonstrates that adaptive prompt learning promotes FreeSeg to capture task-aware and category-sensitive characteristics. The semantic context interaction and test-time prompt-tuning help to improve the cross-modal alignment of visual and text features.
Multi-Task Analysis. To validate the advantages of multi-task learning in FreeSeg, we compare the results of the unified multi-task training with the single-task training for specific tasks. As shown in Tab.7, all the results in the multi-task row are obtained from one unified model, while the single-task results are from three individual models. All results are obtained without the self-training technique. Multi-task training achieves 41.9% and 43.3% mIoU for the seen and unseen classes on open vocabulary semantic segmentation, surpassing the performance of the single-task model. Open-vocabulary instance and panoptic segmentation also show results consistent with semantic segmentation, especially in the performance of unseen classes. FreeSeg improves all metrics of unseen classes on all tasks, proving that the multi-task training scheme can efficiently improve the generalization of the networks. Furthermore, the unified open vocabulary model conducts a one-shot training procedure with multi-task labels, which achieves superior performance while reducing nearly 2/3 of the training costs.
Adaptive Prompt Analysis. We compare the results of different prompt settings to verify the importance of the
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & \multicolumn{2}{c}{**mIoU**} \\ \hline \hline & _COCO \(\rightarrow\) VOC2012_ & _ADE20K \(\rightarrow\) VOC2012_ \\ \hline CLIP [26] & 71.6 & 67.1 \\ ZSSeg [38] & 82.1 & 69.2 \\ Ours & **91.9** & **80.1** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Generalization performance (in%) of the open vocabulary semantic segmentation on VOC2012 datasets.
\begin{table}
\begin{tabular}{c c|c|c|c c|c c c|c c c c} \hline \hline
**Adaptive Prompt** & **Context** & **Prompt** & **Semantic** & \multicolumn{3}{c|}{**Instance**} & \multicolumn{3}{c}{**Panoptic**} \\ \hline
**Class** & **Task** & **Interaction** & **Tuning** & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **mAP\({}^{s}\)** & **mAP\({}^{u}\)** & **PQ\({}^{s}\)** & **SQ\({}^{s}\)** & **RQ\({}^{s}\)** & **PQ\({}^{u}\)** & **SQ\({}^{u}\)** & **RQ\({}^{u}\)** \\ \hline \hline
✗ & ✗ & ✗ & ✗ & 38.4 & 4.9 & 19.2 & 0.7 & 25.9 & 70.4 & 32.0 & 0.1 & 0.2 & 0.1 \\ ✓ & ✗ & ✗ & ✗ & 39.0 & 38.5 & 20.1 & 8.8 & 26.2 & 70.7 & 32.5 & 12.0 & 62.6 & 15.3 \\ ✓ & ✓ & ✗ & ✗ & 40.8 & 41.3 & 22.2 & 11.8 & 28.5 & 74.3 & 35.8 & 15.7 & 67.4 & 19.8 \\ ✓ & ✓ & ✓ & ✗ & 42.1 & 42.6 & 23.7 & 13.9 & 30.0 & 76.5 & 37.3 & 18.1 & 70.5 & 23.2 \\ ✓ & ✓ & ✓ & ✓ & **41.9** & **43.3** & **23.9** & **14.6** & **30.4** & **76.7** & **38.1** & **19.2** & **71.4** & **24.1** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies of the proposed modules on COCO datasets.
\begin{table}
\begin{tabular}{l c|c|c c c} \hline \hline
**Method** & **mIoU** & **mAP** & **PQ** & **SQ** & **RQ** \\ \hline \hline & \multicolumn{4}{c}{_COCO \(\rightarrow\) ADE20K_} \\ \hline CLIP [26] & 13.8 & 3.9 & 8.2 & 53.1 & 10.5 \\ Lseg+ [10] & 13.0 & - & - & - & - & - \\ OpenSeg [10] & 15.3 & - & - & - & - \\ ZSSeg [38] & 16.4 & 4.0 & 9.3 & 58.0 & 12.2 \\ MaskCLIP [8] & 23.7 & 5.9 & 15.1 & 70.4 & 19.2 \\ Ours & **24.6** & **6.5** & **16.3** & **71.8** & **21.6** \\ \hline \hline & \multicolumn{4}{c}{_ADE20K \(\rightarrow\) COCO_} \\ \hline CLIP [26] & 14.7 & 2.7 & 8.1 & 66.3 & 11.0 \\ ZSSeg [38] & 17.7 & 4.3 & 11.2 & 66.5 & 14.9 \\ Ours & **21.7** & **6.6** & **16.5** & **72.0** & **21.6** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Generalization performance (in%) of the open vocabulary segmentation on cross datasets.
adaptive prompt for open vocabulary segmentation in Tab.7. The fixed template prompt uses the template sentence "A photo of {_class_}.", where {_class_} is replaced by the specific class name. The task name is filled into the template "for {_task_}." to get the fixed task prompt. Then the task prompt and the class prompt are encoded into the text features. As shown in Tab.7, the adaptive prompt brings 3.4% and 9.6% mIoU performance improvements over the fixed prompt for seen and unseen classes, respectively. Similarly, the adaptive prompt outperforms the fixed prompt by 0.9% and 2.2% mAP on instance segmentation and by 8.2% and 7.5% PQ on panoptic segmentation. It reveals that the adaptive prompt captures task-aware and category-sensitive concepts via learnable parameters.
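The difference between the two prompt settings can be sketched as follows; the exact concatenation of the fixed templates and the initialization of the learnable embeddings (sized 8\(\times\)512 and 16\(\times\)512 as in the implementation details) are assumptions for illustration.

```python
import torch

# Fixed prompts: hand-crafted strings fed to the frozen CLIP text encoder.
def fixed_prompt(class_name: str, task_name: str) -> str:
    return f"A photo of {class_name}. for {task_name}."

print(fixed_prompt("giraffe", "semantic segmentation"))

# Adaptive prompts: learnable token embeddings optimized end-to-end
# (the class prompt is further refined by test time prompt tuning).
task_prompt = torch.nn.Parameter(torch.randn(8, 512))    # adaptive task prompt
class_prompt = torch.nn.Parameter(torch.randn(16, 512))  # adaptive class prompt
```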
### Qualitative results
We visualize the qualitative results of the unified open vocabulary segmentation in Fig.3. It can be observed that CLIP fails to segment the instances of some unseen classes like "cow" and "skateboard" in the first and fourth images. However, FreeSeg accurately segments the unseen class regions such as "giraffe" or "grass" for semantic segmentation. These figures show our capability of specifying arbitrary classes in instance and panoptic segmentation. These results demonstrate that FreeSeg is capable of generalizing to arbitrary segmentation categories in universal segmentation tasks.
## 5 Conclusion
In this paper, we provide a universal framework, _i.e_., FreeSeg, to accomplish unified open-vocabulary segmentation. To the best of our knowledge, we offer the first attempt to employ a single model with the same architecture and inference parameters to accomplish open-vocabulary semantic, instance, and panoptic segmentation. Compared with single-task training, FreeSeg successfully reduces the training cost by about two-thirds and achieves better generalization performance. Only one unified model is needed in real-scene deployment, reducing the computational, memory, and bandwidth costs of the inference procedure. We believe our work can provide inspiring insights and suggest a new path forward in open-vocabulary segmentation.
\begin{table}
\begin{tabular}{c|c c|c c c|c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Semantic**} & \multicolumn{2}{c|}{**Instance**} & \multicolumn{4}{c}{**Panoptic**} \\ \cline{2-13} & **mIoU\({}^{s}\)** & **mIoU\({}^{u}\)** & **mAP\({}^{s}\)** & **mAP\({}^{u}\)** & **PQ\({}^{s}\)** & **SQ\({}^{s}\)** & **RQ\({}^{s}\)** & **PQ\({}^{u}\)** & **SQ\({}^{u}\)** & **RQ\({}^{u}\)** \\ \hline \multirow{2}{*}{**Train Paradigm**} & Single-Task & 41.3 & 42.9 & 24.1 & 12.7 & 30.1 & 75.0 & 37.6 & 17.5 & 69.7 & 21.1 \\ & Multi-Task & 41.9 & 43.3 & 23.9 & 14.6 & 30.4 & 76.7 & 38.1 & 19.2 & 71.4 & 24.1 \\ \hline \multirow{2}{*}{**Prompt**} & Fixed & 38.5 & 33.7 & 21.4 & 8.2 & 26.1 & 73.0 & 32.4 & 12.1 & 63.6 & 17.5 \\ & Adaptive & 41.9 & 43.3 & 23.9 & 14.6 & 30.4 & 76.7 & 38.1 & 19.2 & 71.4 & 24.1 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of different training paradigms and prompt solutions on COCO.
Figure 3: Qualitative results of the multi-task open vocabulary segmentation. We compare the segmentation results of the proposed FreeSeg and CLIP [26]. The class column represents the class names, where red and black words denote the unseen and seen classes, respectively. |
2301.07897 | New late-time constraints on $f(R)$ gravity | Modification of general relativity (GR) inspired by theories like $f(R)$
gravity is among the most popular ones to explain the late-time acceleration of
the Universe as an alternative to the $\Lambda$CDM model. In this work, we use
the state-of-the-art BAO+BBN data and the most recent Type Ia supernovae (SNe
Ia) sample namely PantheonPlus, including the Cepheid host distances and
covariance from SH0ES samples, to robustly constrain the $f(R)$ gravity
framework via two of the most popular $f(R)$ models in literature, namely, the
Hu-Sawicki and Starobinsky models. Additionally, we consider how the time
variation of the Newton's gravitational constant affects the supernovae
distance modulus relation. We find a minor evidence for $f(R)$ gravity under
the Hu-Sawicki dynamics from BAO+BBN and BAO+BBN+uncalibrated supernovae joint
analysis, but the inclusion of Cepheid host distances, makes the model
compatible with GR. Further, we notice tendency of this model to relax the
$H_0$ tension. In general, in all the analyses carried out in this study with
the late time probes, we find both the $f(R)$ models to be consistent with GR
at 95\% CL. | Suresh Kumar, Rafael C. Nunes, Supriya Pan, Priya Yadav | 2023-01-19T05:52:30Z | http://arxiv.org/abs/2301.07897v2 | # New late-time constraints on \(f(R)\) gravity
###### Abstract
Modification of general relativity (GR) inspired by theories like \(f(R)\) gravity is among the most popular ones to explain the late-time acceleration of the Universe as an alternative to the \(\Lambda\)CDM model. In this work, we use the state-of-the-art BAO+BBN data and the most recent Type Ia supernovae (SNe Ia) sample, namely PantheonPlus, including the Cepheid host distances and covariance from SH0ES samples, to robustly constrain the \(f(R)\) gravity framework via two of the most popular \(f(R)\) models in literature, namely, the Hu-Sawicki and Starobinsky models. Additionally, we consider how the time variation of the Newton's gravitational constant affects the supernovae distance modulus relation. We find minor evidence for \(f(R)\) gravity under the Hu-Sawicki dynamics from BAO+BBN and BAO+BBN+uncalibrated supernovae joint analysis, but the inclusion of Cepheid host distances makes the model compatible with GR. Further, we notice a tendency of this model to relax the \(H_{0}\) tension. In general, in all the analyses carried out in this study with the late time probes, we find both the \(f(R)\) models to be consistent with GR at 95% CL.
## I Introduction
Astronomical data are precious for modern cosmology. From the detection of the cosmic microwave background anisotropy to the late-time dynamics of the Universe, we have witnessed the crucial role played by astronomical data. For instance, our picture of the late-time dynamics of the Universe changed abruptly in 1998 with the observations of Type Ia supernovae (SNe Ia), which reported one of the trailblazing results in modern cosmology -- the accelerating expansion of our Universe [1; 2]. This late-time accelerating expansion demands a revision of the standard cosmology, requiring some exotic type of fluid to be invoked in the gravitational equations. This can be done effectively in two distinct ways: either one can modify the matter sector of the Universe without touching the gravitational sector described by Einstein's General Relativity (GR), which leads to various Dark Energy (DE) models [3; 4; 5; 6; 7; 8], or Einstein's GR can be modified in various ways, known as modified gravity (MG) theories [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Following both approaches, over the last several years, a host of DE and MG models have been tested with the available astronomical data (see Refs. [4; 6; 7; 10; 11; 12; 13; 14; 15; 16; 17; 18] and the references therein).
Among the existing DE and MG models, the \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) cosmological model, where \(\Lambda>0\) acts as a DE candidate in the context of GR, is an excellent cosmological model that fits to a large span of astronomical datasets. Nevertheless, \(\Lambda\)CDM cosmology faces many theoretical and observational challenges. Recent observations of \(\Lambda\)CDM-based Planck 2018 [19] and the SH0ES (Supernovae and \(H_{0}\) for the Equation of State of dark energy) collaboration [20; 21] suggest that the Hubble constant \(H_{0}\) from Planck \(\Lambda\)CDM is at more than \(5\sigma\) tension with the SH0ES measurement [20; 21]. In addition, measurements of the parameter \(S_{8}\) (\(=\sigma_{8}\sqrt{\Omega_{\rm m}/0.3}\); \(\sigma_{8}\) is amplitude of the matter power spectrum and \(\Omega_{\rm m}\) is the matter density parameter at present time) estimated by the Planck 2018 [19], weak lensing experiments [22; 23; 24; 25] and Redshift-Space Distortions measurements [26; 27; 28] are in tension at more than \(3\sigma\). These suggest that a revision of the \(\Lambda\)CDM cosmology is needed to agree with the observational evidences. As a consequence, several alternative proposals to the \(\Lambda\)CDM cosmology appeared to explain such observational discrepancies (see the recent reviews in this direction [29; 30; 31; 32]). However, despite many new and appealing cosmological models, it has been observed that simultaneous solution to both the tensions are quite difficult to obtain [33]. Thus, understanding the nature of the cosmological tensions and their solutions demands further attention through new observational probes and cosmological models.
In this article we focus on one of the viable alternatives to the \(\Lambda\)CDM cosmology -- the modified gravity theory, where in particular, we consider the most natural modification to the Einstein's GR, namely, the \(f(R)\) gravity which has been greatly investigated considering both the theoretical and observational perspectives [34; 35; 36; 37; 38; 39; 40; 41; 42], as well as to assuage the current cosmological tensions [43; 44; 45]. However, unlike in the past works, our approach in this work significantly differs in the treatment of its observational analysis that enters through the physics of SNe Ia, and such a difference is caused due to the consideration of new scalar degree(s) of freedom beyond GR which results in a time dependent Newton's gravitational constant \(G\). Such a varying \(G\) may induce a redshift (\(z\))-dependent effect on the peak luminosity of SNe Ia from
the mass of the white dwarf progenitors [46; 47; 48] and this may result in changes in the cosmological constraints of the modified gravity models. The revision in the evolution of intrinsic luminosity of SNe Ia due to variation of \(G\) has been considered to constrain several cosmological models [49; 50; 51; 52; 53].
This means that for precise understanding of the cosmology of modified gravity theories, the impact of modified gravity theories on the astrophysics of SNe Ia should be considered, and through the estimation of the cosmological parameters using the modified formalism, such impact can be decoded. Following this, the key aim of this article is to employ the above modifications to constrain the \(f(R)\) gravity models, and study the resulting implications mainly in light of the \(H_{0}\) tension. To test this hypothesis, we use for the first time the Pantheon+ sample to constrain the free parameters of the \(f(R)\) gravity models. In addition to these perspectives, we also consider for the first time in this work how the state-of-the-art assumptions on BAO+BBN joint analysis can constrain the behavior of the \(f(R)\) gravity models at late times.
The article is organized as follows. In Sec. II, we provide a brief introduction to the cosmology of \(f(R)\) gravity and introduce two well known models that we investigate in this article. In Sec. III, we describe the observational data-sets and our methodology to constrain the baseline of the proposed \(f(R)\) gravity models. In Sec. IV, we describe the observational constraints on the \(f(R)\) models and discuss our main results. Finally, we describe our conclusions and perspectives in Sec. V.
## II \(f(R)\) gravity and cosmology
The gravitational action of \(f(R)\) gravity is given by
\[\mathcal{S}=\frac{1}{16\pi G}\ \int d^{4}x\sqrt{-g}\ f(R)+\mathcal{S}_{\rm m}+ \mathcal{S}_{\rm r}\,, \tag{1}\]
where \(R\) denotes the Ricci scalar and \(G\) is the Newton's gravitational constant. Additionally, eqn. (1) includes the actions for the matter sector (\(\mathcal{S}_{\rm m}\)) and the radiation sector (\(\mathcal{S}_{\rm r}\)). We assume that there is no interaction at the non-gravitational level between matter sector and the radiation sector, that means both these sectors are independently conserved. Now varying the action (1) with respect to the metric \(g_{\mu\nu}\), we obtain the gravitational equations
\[FG_{\mu\nu}=-\frac{1}{2}g_{\mu\nu}\left(FR-f(R)\right)+\nabla_{ \mu}\nabla_{\nu}F-g_{\mu\nu}\Box F\] \[+8\pi G\,\left[T^{\rm(m)}_{\mu\nu}+T^{(r)}_{\mu\nu}\right]\,, \tag{2}\]
where \(G_{\mu\nu}=R_{\mu\nu}-\left(1/2\right)g_{\mu\nu}R\) stands for the Einstein tensor; \(\nabla_{\mu}\) is the covariant derivative, \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\); \(F=F(R)\equiv f_{,R}=df(R)/dR\) (similarly by \(f_{,RR}\) we shall mean \(d^{2}f(R)/dR^{2}\)); \(T^{\rm(m)}_{\mu\nu}\) and \(T^{\rm(r)}_{\mu\nu}\) respectively denote the energy-momentum tensor for the matter sector and the radiation sector. Note that for \(f(R)=R\) in eqn. (1), one recovers the Einstein-Hilbert action for General Relativity. Now we proceed towards the cosmological evolution in the context of \(f(R)\) gravity theory. As usual, we start with the homogeneous and isotropic background of our Universe which is well described by the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element
\[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\left[\frac{\mathrm{d}r^{2}}{1-kr^{2 }}+r^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2})\right]\,, \tag{3}\]
where \((t,r,\theta,\phi)\) are the co-moving coordinates; \(a(t)\) describes the expansion scale factor of the Universe and \(k\) corresponds to the spatial geometry of the Universe where \(k=0\), \(+1\) and \(-1\), respectively denote a spatially flat, closed and open Universe. Now for the spatially flat FLRW line element (\(k=0\)), eqn. (2) leads to
\[3FH^{2}=8\pi G\left(\rho_{\rm m}+\rho_{\rm r}\right)+\frac{1}{2 }\left(FR-f(R)\right)-3H\dot{F}\,, \tag{4}\] \[-2F\dot{H}=8\pi G\left(\rho_{\rm m}+p_{\rm m}+\rho_{\rm r}+p_{\rm r }\right)+\ddot{F}-H\dot{F}\,, \tag{5}\]
where an overhead dot denotes the derivative with respect to the cosmic time \(t\); \(H=\dot{a}(t)/a(t)\) is the Hubble parameter; \((\rho_{\rm m},\,p_{\rm m})\), \((\rho_{\rm r},p_{\rm r})\) denote the (energy density, pressure) of the matter sector and the radiation sector respectively. Note that in the spatially flat FLRW Universe, the Ricci scalar \(R\) takes the form \(R=6(2H^{2}+\dot{H})\). As the matter and radiation sectors enjoy independent conservation, therefore, their conservation equations can be expressed as
\[\dot{\rho}_{\rm m}+3H(1+w_{\rm m})\rho_{\rm m}=0, \tag{6}\] \[\dot{\rho}_{\rm r}+3H(1+w_{\rm r})\rho_{\rm r}=0, \tag{7}\]
where \(w_{\rm m}=p_{\rm m}/\rho_{\rm m}\) and \(w_{\rm r}=p_{\rm r}/\rho_{\rm r}\) are respectively the equation of state parameters of the matter sector and the radiation sector. We assume the standard cases where \(w_{\rm m}=0\) (i.e., pressure-less matter) and \(w_{\rm r}=1/3\). Therefore, from eqns. (6) and (7), one can derive that \(\rho_{\rm m}\propto a^{-3}\) and \(\rho_{\rm r}\propto a^{-4}\), respectively.
Now, for a given \(f(R)\) model, using the gravitational equations (4) and (5) together with the conservation equations for the matter and radiation sectors, in principle, one can determine the cosmological dynamics. However, an arbitrary \(f(R)\) model may suffer from a number of cosmological problems, e.g. the matter instability [54], instability at the level of perturbations [55], absence of matter dominated era [56], inability to satisfy the local gravity constraints [57] etc. Thus, in order to construct viable \(f(R)\) models, one needs to impose the following conditions [11]:
\[f_{,R}>0\ {\rm and}\ f_{,RR}>0\ {\rm for}\ R\geq R_{0}\ (>0), \tag{8}\]
where \(R_{0}\) is the present value of \(R\). The condition \(f_{,R}>0\) ensures that there are no ghosts and \(f_{,RR}>0\)
ensures the avoidance of tachyonic instability [11]. Moreover, from the observational perspectives, a viable \(f(R)\) model reproducing the matter dominated era, satisfying the local gravity constraints plus to be consistent with the equivalence principle, should behave like
\[f(R)\to R-2\Lambda,\ \text{for}\ \ R\geq R_{0}, \tag{9}\]
where \(\Lambda\) is a constant and to depict a late-time stable de Sitter solution [11], the \(f(R)\) model also needs to satisfy
\[0<\left(\frac{Rf_{,RR}}{f_{,R}}\right)_{r}<1\ \ \text{at}\ \ r=-\frac{Rf_{,R}}{f}=-2. \tag{10}\]
Combining all these conditions altogether, the viable \(f(R)\) models up to two parameters can be recast as
\[f(R)=R-2\Lambda y(R,b), \tag{11}\]
where the function \(y(R,b)\) gives an idea about the deviation of the underlying \(f(R)\) model from GR in which \(b\) is a free parameter. In the following, we consider two viable \(f(R)\) models, namely the Hu-Sawicki \(f(R)\) model [58] and the Starobinsky \(f(R)\) model [59].
1. The Hu-Sawicki \(f(R)\) model reads as [58] \[f(R)=R-\frac{c_{1}R_{\text{HS}}\left(R/R_{\text{HS}}\right)^{p}}{c_{2}\left(R/ R_{\text{HS}}\right)^{p}+1},\] (12) where \(c_{1}\), \(c_{2}\), \(R_{\text{HS}}\) and \(p\) (\(>0\)) are the free parameters of the model. One can rewrite eqn. (12) to the form of eqn. (11) where \(y(R,b)\) adopts the following expression \[y(R,b)=1-\frac{1}{1+\left(\frac{R}{\Lambda b}\right)^{p}},\] (13) in which \(\Lambda=\frac{c_{1}R_{\text{HS}}}{2c_{2}}\) and \(b=2c_{2}^{1-1/p}/c_{1}\). Notice that, for \(b\to 0\), \(y(R,b)\to 1\) and consequently, \(f(R)\to R-2\Lambda\), that means, we recover \(\Lambda\)CDM cosmology in the limit \(b\to 0\).
2. The Starobinsky \(f(R)\) model is given by [59]. \[f(R)=R-\lambda R_{\text{S}}\left[1-\left(1+\frac{R^{2}}{R_{\text{S}}^{2}} \right)^{-n}\right],\] (14) where \(\lambda\) (\(>0\)), \(R_{\text{S}}\) and \(n\) (\(>0\)) are the free parameters of this model. In a similar fashion, one can rewrite eqn. (14) to the form of eqn. (11) where \(y(R,b)\) takes the form \[y(R,b)=1-\frac{1}{\left[1+\left(\frac{R}{\Lambda\,b}\right)^{2}\right]^{n}},\] (15) in which \(\Lambda=\lambda R_{\text{S}}/2\) and \(b=2/\lambda\). One further notices that for \(b\to 0\), \(f(R)\to R-2\Lambda\), and hence, in the limit \(b\to 0\), the Starobinsky \(f(R)\) model recovers the \(\Lambda\)CDM cosmology.
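To make the two models above concrete, here is a short numerical sketch of the deviation functions \(y(R,b)\) of eqs. (13) and (15); the choices \(p=n=1\) and \(\Lambda=1\) are illustrative, and the snippet only verifies the GR limit \(b\to 0\) rather than solving the full background dynamics of Refs. [60; 61].

```python
import numpy as np

def y_hu_sawicki(R, b, Lam, p=1):
    """Deviation function of the Hu-Sawicki model, eq. (13)."""
    return 1.0 - 1.0 / (1.0 + (R / (Lam * b)) ** p)

def y_starobinsky(R, b, Lam, n=1):
    """Deviation function of the Starobinsky model, eq. (15)."""
    return 1.0 - 1.0 / (1.0 + (R / (Lam * b)) ** 2) ** n

Lam = 1.0                       # units in which Lambda = 1
R = np.logspace(0, 3, 4) * Lam  # sample curvatures with R >= R0 > 0
for b in (0.5, 0.1, 1e-3):      # y -> 1, i.e. f(R) -> R - 2*Lambda, as b -> 0
    print(b, y_hu_sawicki(R, b, Lam).round(4), y_starobinsky(R, b, Lam).round(4))
```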
Thus, one can see that the free parameter \(b\) quantifies the deviation from GR (\(b=0\)). Now, in order to understand the evolution of the Universe for the proposed \(f(R)\) models, one needs to trace the expansion rate of the Universe. We use the same methodology as in Refs. [60; 61] to derive the expansion rate of the Universe, i.e., the \(H(z)\) function, for the proposed \(f(R)\) models. Fig. 1 shows the theoretical prediction for the expansion rate of the Universe at late times for both models under consideration in this work, taking reasonable and different values of \(b\). We quantify the difference from the \(\Lambda\)CDM model by fixing \(H_{0}\) and \(\Omega_{\rm m}\) to their canonical values from CMB observations [62], i.e., \(H_{0}=67.4\) km/s/Mpc and \(\Omega_{\rm m}=0.31\). For the Hu-Sawicki model (see the left panel of Fig. 1), we note that the expansion of the Universe is very sensitive to \(b\), irrespective of the positive or negative values as clearly depicted here for \(b\in[-0.1,0.1]\). Specifically, for \(b>0\) and \(z>0.38\), the expansion rate of the Universe within the Hu-Sawicki \(f(R)\) model is greater than the \(\Lambda\)CDM model while for \(z<0.38\), we notice the inverse scenario. For \(b<0\), the dynamics is opposite to the previous case assuming \(b>0\). On the other hand, for the Starobinsky \(f(R)\) model (see the right panel of Fig. 1), we see that irrespective of the positive and negative values of \(b\) within \([-0.1,0.1]\), the behaviour in the expansion rate within this \(f(R)\) gravity model remains similar. In fact, the expansion rate of the Universe within this \(f(R)\) model is unresponsive to variations in the sign of parameter \(b\) with respect to the \(\Lambda\)CDM model. Overall, we find that for \(z>0.12\), the expansion rate of the Universe is higher than the \(\Lambda\)CDM model while for \(z<0.12\), the expansion rate of the Universe is lower than the \(\Lambda\)CDM model. In summary, one can see that the Hu-Sawicki and the Starobinsky \(f(R)\) models are quantitatively not the same at late times.
Figure 1: Left panel: Relative difference in the expansion rate of the Universe, \(\Delta H(z)=\left(H^{f(R)\,\text{Gravity}}(z)/H^{\Lambda\text{CDM}}(z)\right)-1\), for the Hu-Sawicki \(f(R)\) model considering several values of \(b\). Right panel: Same as in left panel, but for the Starobinsky \(f(R)\) model.
## III Data and methodology
In order to derive constraints on the model baseline, we use the following datasets.
* **BAO**: Baryon Acoustic Oscillation (BAO) data consist of isotropic BAO measurements of \(D_{V}(z)/r_{d}\), where \(D_{V}(z)\) and \(r_{d}\) stand for spherically averaged volume distance, and sound horizon at baryon drag respectively and anisotropic BAO measurements of \(D_{M}(z)/r_{d}\) and \(D_{H}(z)/r_{d}\) (with \(D_{M}(z)\) the comoving angular diameter distance and \(D_{H}(z)=c/H(z)\) the Hubble distance) from the final measurements of the SDSS collaboration that cover eight distinct redshift intervals, acquired and ameliorated over the past 20 years [63]. All the above mentioned BAO-only measurements are compiled in Table 3 of Ref. [63].
* **BBN**: The Big Bang Nucleosynthesis (BBN) are considered with the state-of-the-art assumptions, which consist of measurements of the primordial abundances of helium, \(Y_{P}\), from [64], and the deuterium measurement, \(y_{DP}=10^{5}n_{D}/n_{H}\), obtained in [65]. This BBN likelihood is sensitive to the physical baryon density \(\omega_{b}\equiv\Omega_{b}h^{2}\) and the effective number of neutrino species \(N_{\rm eff}\) constraints. In the present work, we fix \(N_{\rm eff}=3.046\).
* **Type Ia supernovae and Cepheid**: Type Ia supernovae (SNe Ia) have generally been an important astrophysical tools in establishing the standard cosmological model. SNe Ia distance moduli measurements constrain the uncalibrated luminosity distance \(H_{0}d_{L}(z)\), or in other words the slope of the late-time expansion rate, which as a result constrains the matter density parameter \(\Omega_{\rm m}\). For a supernova at redshift \(z\), the theoretical apparent magnitude \(m_{B}\) is given by \[m_{B}=5\log_{10}\left[\frac{d_{L}(z)}{1Mpc}\right]+25+M_{B},\] (16) where \(M_{B}\) is the absolute magnitude. The distance modulus reads as \(\mu(z)=m_{B}-M_{B}\). The calibrated SNe Ia absolute magnitude \(M_{B}\) is in general assumed to be truly a constant, i.e., the parameter \(M_{B}\) should be independent of the redshift. It has been argued that a possible variation of the absolute magnitude \(M_{B}\) and equivalently of the absolute luminosity as \(L\sim 10^{-2M_{B}/5}\), could be due to a variation in the value of Newton's gravitational constant \(G\)[47; 48]. This is due to the fact that the absolute luminosity is proportional to the Chandrasekhar mass as \(L\sim M_{\rm Chandra}\), which depends on \(G\) as \(L\sim G^{-3/2}\). Therefore, any modification of gravity will generate an effective gravitational constant in the form of \(G_{\rm eff}\) that will induce a natural correction to the distance modulus. The presence of a varying effective gravitational constant leads to rewrite eq. (16) as \[\mu_{\rm th} =m_{B}-M_{B}\] \[=5\log_{10}d_{L}(z)+25+\frac{15}{4}\log_{10}\frac{G_{\rm eff}(z)}{ G}.\] (17) Taking the quasi-static approximation and the modified Poisson equation, it is well known that in \(f(R)\) gravity context, we have [66] \[\frac{G_{\rm eff}(z)}{G}=\frac{1}{f_{R}}\left(\frac{1+4k^{2}m/a^{2}}{1+3k^{2} m/a^{2}}\right),\] (18) where \(m=f_{RR}/f_{R}\) and \(G_{\rm eff}(z)\) is the effective gravitational constant in the \(f(R)\) gravity framework. Note that eq. (17) reduces to GR when \(f(R)=R-2\Lambda\), i.e., the \(\Lambda\)CDM model. We follow Refs. [60; 67] and set \(k=0.1\) h/Mpc, which is necessary as now the Newton's gravitational constant depends on the scale \(k\) as well. We use the SNe Ia distance moduli measurements from the Pantheon+ sample [68], which consists of 1701 light curves of 1550 distinct SNe Ia ranging in the redshift interval \(z\in[0.001,2.26]\), publicly available at [https://pantheonpluses.github.io/](https://pantheonpluses.github.io/). We refer to this dataset as PantheonPlus. We also consider the SH0ES Cepheid host distance anchors, which facilitate constraints on both \(M_{B}\) and \(H_{0}\). When utilizing
SH0ES Cepheid host distances, the SNe Ia distance residuals are modified following the relationship eq.(14) of Ref. [68]. We refer to this dataset as PantheonPlus&SH0ES.
Thus, it is possible that the modification of the distance moduli induced by the \(f(R)\) gravity framework may carry useful information about the dynamics of these scenarios.
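As a minimal illustration of this correction, the sketch below evaluates eqs. (17) and (18) for a given luminosity distance and given values of \(f_{R}\) and \(m=f_{RR}/f_{R}\); obtaining those quantities along the actual \(f(R)\) background is not shown here, and the numerical inputs are placeholders.

```python
import numpy as np

def geff_over_g(f_R, m, a, k=0.1):
    """G_eff/G in f(R) gravity (eq. (18)); m = f_RR/f_R, k in h/Mpc as in the text."""
    x = (k ** 2) * m / a ** 2
    return (1.0 / f_R) * (1.0 + 4.0 * x) / (1.0 + 3.0 * x)

def mu_theory(d_L_mpc, f_R, m, z, k=0.1):
    """Distance modulus with the varying-G correction of eq. (17)."""
    a = 1.0 / (1.0 + z)
    correction = (15.0 / 4.0) * np.log10(geff_over_g(f_R, m, a, k))
    return 5.0 * np.log10(d_L_mpc) + 25.0 + correction

# GR limit: f_R = 1, m = 0  ->  the correction vanishes and mu = 5*log10(d_L) + 25
print(mu_theory(d_L_mpc=1000.0, f_R=1.0, m=0.0, z=0.1))   # 40.0
```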
In our analyses, we allow the parameters \(\omega_{b}\), \(\omega_{\rm cdm}\) (physical cold dark matter density), \(H_{0}\), \(b\) and \(M_{B}\) (in the analyses with PantheonPlus data) with wide ranges of flat priors. We ran CLASS+MontePython code [69; 70; 71; 72] using Metropolis-Hastings mode to derive constraints on cosmological parameters for the \(f(R)\) gravity models defined in Sec. II using several combinations of the datasets. All of our runs reached a Gelman-Rubin convergence criterion of \(R-1<10^{-2}\). We use MCEvidence1 algorithm to compute the Bayesian evidence and perform a model comparison through the Jeffreys' scale [73]. For model comparison, we use the log-Bayesian evidence for each of the models relative to the \(\Lambda\)CDM model, i.e., \(\Delta{\rm ln}{\cal Z}={\rm ln}{\cal Z}_{\Lambda{\rm CDM}}-{\rm ln}{\cal Z}_{ \rm f(R)gravity}\). For the interpretation of the results, we refer to the revised Jeffreys's scale and accordingly the evidence is inconclusive if \(0\leq|\Delta{\rm ln}{\cal Z}|<1\), weak if \(1\leq|\Delta{\rm ln}{\cal Z}|<2.5\), moderate if \(2.5\leq|\Delta{\rm ln}{\cal Z}|<5\), strong if \(5\leq|\Delta{\rm ln}{\cal Z}|<10\), and very strong if \(|\Delta{\rm ln}{\cal Z}|\geq 10\)[74; 75]. In what follows, we discuss the main results of our analyses.
Footnote 1: github.com/yabebalFantaye/MCEvidence
## IV Main results and discussions
In Table 1, we report the summary of the statistical analyses considering the Hu-Sawicki and Starobinsky \(f(R)\) models obtained from various observational datasets, namely, BAO+BBN, BAO+BBN+PantheonPlus, BAO+BBN+PantheonPlus&SH0ES and PantheonPlus&SH0ES. In addition, we also show the constraints on the \(\Lambda\)CDM model using the same datasets in Table 1 in order to compare the \(\Lambda\)CDM results with the Hu-Sawicki and Starobinsky \(f(R)\) models. Moreover, in Fig. 2, we display the parametric space at 68% CL and 95% CL for the Hu-Sawicki (left panel) and Starobinsky (right panel) \(f(R)\) models.
The combined dataset BAO+BBN probes the background history of the model independently of both CMB and supernovae data. As the \(H_{0}\) tension directly invites a straight conflict between the CMB and the local distance ladder measurements, thus, it will be interesting to find new routes to estimate the Hubble constant. The joint analysis BAO+BBN has been proved to be a competitive cosmological test [76; 77; 78] which can provide accurate confidence limits on the baseline parameters of the models. Thus, we choose BAO+BBN to be our minimum data set. As well known, the constraints on \(H_{0}\) from BAO+BBN data in the \(\Lambda\)CDM context fully agree with the CMB data. When applying BAO+BBN in the context of \(f(R)\) gravity, we notice this behavior, being \(H_{0}\) compatible with low values obtained in the CMB measurements, and at 1.6% and 3% accuracy from the Hu-Sawicki and Starobinsky models, respectively. For the parameter that quantifies the deviation from GR, i.e., the parameter \(b\), we find different results in the two different \(f(R)\) gravity models. For the Hu-Sawicki model, we obtain \(b>0\) at more than 68% CL for BAO+BBN (\(b=0.64^{+0.38}_{-0.29}\) at 68% CL). On the other hand, for the Starobinsky \(f(R)\) model, we find that \(b\) is compatible to zero within 68% CL for BAO+BBN, that means, no deviation from GR is suggested within this \(f(R)\) model for BAO+BBN. This is not unexpected because the models have different dynamical behaviors at late times, as previously discussed (see Fig. 1). We further noticed that the free parameter \(b\) in the Starobinsky model is not much sensitive to its sign change. Thus, the posterior tends to be bimodal based on the prior adopted in our analysis. The addition of SH0ES Cepheid host distances tends only to smooth out the bimodal effect.
Now, we move on considering the addition of SNe Ia and Cepheid host distance measurements from the SH0ES team, while considering the additional corrections on the distance moduli, i.e., eq. (17) due to the \(f(R)\) gravity model being one of the main motivations of this work. As argued in Refs. [79; 80; 81], the tension on \(H_{0}\) should be replaced as the tension on the supernova absolute magnitude \(M_{B}\), as the estimate of \(H_{0}\) from SH0ES collaboration comes directly from the estimate of \(M_{B}\). So, in our analysis we first consider the uncalibrated supernovae sample, i.e., the PantheonPlus. When analyzing with BAO+BBN+PantheonPlus, we find that \(b>0\) at 95% CL (\(b=0.46^{+0.35}_{-0.38}\)) for the Hu-Sawicki \(f(R)\) model, while for the Starobinksy \(f(R)\) model, the null hypothesis \(b=0\) is fully compatible within 68% CL. The constraints on \(H_{0}\) from BAO+BBN+PantheonPlus for both the Hu-Sawicki and Starobinksy \(f(R)\) models are fully compatible with the estimates of \(H_{0}\) obtained from BAO+BBN.
Now, following Ref. [68] we consider the inclusion of the Cepheid-host distances measurements in direct combination with the SNe Ia sample, i.e., the full dataset PantheonPlus&SH0ES. We notice that the parameter \(b\) in both cases becomes fully compatible with GR, i.e., \(b=0\). Thus, the inclusion of the Cepheid host distances and the full covariance matrix from SH0ES samples, makes the dynamics of the \(f(R)\) models similar to \(\Lambda\)CDM. On the other hand, \(H_{0}\) and \(\Omega_{\rm m}\) get larger values compared to the previous analyses without the inclusion of SH0ES measurement. Moreover, it is possible to quantify the level of tension between two estimates \(H_{0,i}\) and \(H_{0,j}\) of \(H_{0}\) by means of the simple 1-dimensional tension
metric, which can be constructed as
\[T_{H_{0}}\equiv\frac{|H_{0,i}-H_{0,j}|}{\sqrt{\sigma_{H_{0,i}}^{2}+\sigma_{H_{0,j} }^{2}}}\,, \tag{19}\]
measured in equivalent Gaussian standard deviations. In particular, we find that for the Hu-Sawicki \(f(R)\) model (Starobinsky \(f(R)\) model), the \(H_{0}\) value from BAO+BBN+PantheonPlus&SH0ES is at 2.5\(\sigma\) (2.6\(\sigma\)) and 2.7\(\sigma\) (2.4\(\sigma\)) tensions with the \(H_{0}\) values from the BAO+BBN+PantheonPlus and BAO+BBN analyses, respectively.
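The tension metric of eq. (19) is straightforward to evaluate; as a sanity check, the snippet below reproduces the quoted 2.5\(\sigma\) figure from the Hu-Sawicki \(H_{0}\) values in Table 1, with the asymmetric error bars symmetrized for simplicity.

```python
from math import sqrt

def h0_tension(h0_i, sigma_i, h0_j, sigma_j):
    """1-dimensional tension metric of eq. (19), in Gaussian standard deviations."""
    return abs(h0_i - h0_j) / sqrt(sigma_i ** 2 + sigma_j ** 2)

# Hu-Sawicki: BAO+BBN+PantheonPlus&SH0ES vs BAO+BBN+PantheonPlus (errors symmetrized)
print(round(h0_tension(70.59, 0.93, 67.16, 1.0), 1))  # ~2.5 sigma, as quoted in the text
```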
We also consider the PantheonPlus&SH0ES data without external probes. Considering the fact that this sample is at more than 2\(\sigma\) tension with BAO+BBN, we analyze their effects separately. As also shown in Ref. [68], for the flat \(\Lambda\)CDM, the joint analysis from PantheonPlus&SH0ES tends to generate high values of \(H_{0}\) (see our results in Table 1). When analyzing PantheonPlus&SH0ES for the \(f(R)\) gravity model, we noticed the same behavior, i.e., \(H_{0}\) gets high values compatible with local measurements. That is, without external probes, \(H_{0}\) inferred from the PantheonPlus&SH0ES sample is constrained to high values even for non-standard models of the \(f(R)\) gravity type. We note that Ref. [82] pointed out a possible insensitivity of the local \(H_{0}\) constraint
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Data & BAO+BBN & BAO+BBN+PantheonPlus & BAO+BBN+PantheonPlus\&SH0ES & PantheonPlus\&SH0ES \\ \hline Model & Hu-Sawicki & Hu-Sawicki & Hu-Sawicki & Hu-Sawicki & Hu-Sawicki \\ & Starobinsky & Starobinsky & Starobinsky & Starobinsky & Starobinsky \\ & \(\Lambda\)CDM & \(\Lambda\)CDM & \(\Lambda\)CDM & \(\Lambda\)CDM \\ \hline \(b\) & \(0.64^{+0.28}_{-0.29}\) & \(0.46^{+0.21}_{-0.15}\) & \(-0.24^{+0.25}_{-0.21}\) & \(0.095^{+0.071}_{-0.096}\) \\ & \(-0.02\pm 0.91\) & \(-0.02\pm 0.80\) & \(0.00\pm 0.38\) & \(0.04\pm 0.79\) \\ & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \(H_{0}\,[\mathrm{km}/\mathrm{s}/\mathrm{Mpc}]\) & \(66.7^{+1.2}_{-1.0}\) & \(67.16^{+0.92}_{-1.1}\) & \(70.59\pm 0.93\) & \(74.3\pm 1.4\) \\ & \(65.4^{+2.4}_{-1.2}\) & \(66.1\pm 1.5\) & \(70.59\pm 0.91\) & \(74.0^{+1.3}_{-1.6}\) \\ & \(67.5^{+1.7}_{-1.2}\) & \(68.3^{+1.0}_{-0.91}\) & \(70.83\pm 0.87\) & \(73.7\pm 1.3\) \\ \hline \(\Omega_{\mathrm{m}}\) & \(0.273^{+0.024}_{-0.031}\) & \(0.268^{+0.024}_{-0.027}\) & \(0.351\pm 0.021\) & \(0.321\pm 0.019\) \\ & \(0.306^{+0.019}_{-0.029}\) & \(0.300\pm 0.017\) & \(0.331\pm 0.014\) & \(0.298^{+0.050}_{-0.027}\) \\ & \(0.297^{+0.017}_{-0.020}\) & \(0.319\pm 0.013\) & \(0.333\pm 0.013\) & \(0.336^{+0.017}_{-0.020}\) \\ \hline \(M_{B}\) & \(-\) & \(-19.525^{+0.054}_{-0.063}\) & \(-19.302\pm 0.035\) & \(-19.235\pm 0.044\) \\ & \(-\) & \(-19.477\pm 0.050\) & \(-19.330\pm 0.029\) & \(-19.241\pm 0.037\) \\ & \(-\) & \(-19.409^{+0.035}_{-0.031}\) & \(-19.324\pm 0.028\) & \(-19.238\pm 0.037\) \\ \hline \(\Delta\mathrm{ln}\mathcal{Z}\) & \(-1.62\) & \(-1.84\) & \(0.15\) & \(2.21\) \\ & \(-1.99\) & \(-2.28\) & \(-0.14\) & \(-1.32\) \\ & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \end{tabular}
\end{table}
Table 1: Constraints at 68% CL on some selected parameters of the Hu-Sawicki, Starobinsky and \(\Lambda\)CDM models obtained from BAO+BBN, BAO+BBN+PantheonPlus, BAO+BBN+PantheonPlus&SH0ES and PantheonPlus&SH0ES data. Note that here \(\Delta\mathrm{ln}\mathcal{Z}=\mathrm{ln}\mathcal{Z}_{\Lambda\mathrm{CDM}}\) - \(\mathrm{ln}\mathcal{Z}_{\mathrm{f(R)gravity}}\).
Figure 2: One-dimensional posterior distributions and two-dimensional marginalized confidence regions (68% CL and 95% CL) for \(b\), \(\Omega_{\mathrm{m}}\) and \(H_{0}\) obtained from the BAO+BBN, BAO+BBN+PantheonPlus and BAO+BBN+PantheonPlus&SH0ES datasets for the Hu-Sawicki model (left panel) and Starobinsky model (right panel). The parameter \(H_{0}\) is in units of km/s/Mpc.
from the Cepheid distance ladder in some models beyond the \(\Lambda\)CDM cosmology. In this sense, and based on the present results, we can conclude that the same is valid for scenarios like \(f(R)\) gravity.
It is well known that BAO+SNe Ia joint analysis prefers low values of \(H_{0}\) compatible with CMB observations, and on the other hand, this joint analysis is in tension with SNe Ia+Cepheid sample analysis (see general discussions introduced in [79]). Thus, from the analyses presented here for BAO+BBN+PantheonPlus and PantheonPlus&SH0ES, we find that the \(f(R)\) gravity does not significantly change the local distance ladder value of \(H_{0}\), and therefore these models are not able to solve the \(H_{0}\) tension in the light of the late time probes. In the analyses of the Hu-Sawicki model with the BAO+BBN and BAO+BBN+PantheonPlus data, we notice negative correlation between \(H_{0}\) and \(b\) (see left panel of Fig. 2). So smaller values of \(b\) correspond to the larger values of \(H_{0}\), while we notice opposite scenario in the analysis with BAO+BBN+PantheonPlus&SH0ES data. On the other hand, in all the analyses of the Starobinsky \(f(R)\) model, the \(b\) parameter is insensitive to \(H_{0}\). Similar and equivalent conclusions can be drawn from the point of view of the \(M_{B}\) estimation.
From Table 1, we notice \(|\Delta\text{ln}\mathcal{Z}|<2.5\) for all the analyses. Therefore, the Bayesian evidence of \(f(R)\) models compared to \(\Lambda\)CDM is either weak or inconclusive. So the \(f(R)\) models cannot be discriminated from \(\Lambda\)CDM statistically. Finally, in Fig. 3 we show the magnitude-redshift relation of the PantheonPlus sample for the best fit values from BAO+BBN+PantheonPlus and BAO+BBN+PantheonPlus&SH0ES for both the \(f(R)\) gravity models under consideration in the work plus the reference \(\Lambda\)CDM model, where we note that the models' predictions at low-\(z\) are almost indistinguishable from each other, but may differ slightly at high-\(z\).
## V Final remarks
In this work, we have considered a new extra degree of freedom of gravitational origin by modifying the gravity sector in a way that possesses GR as a particular limit. Such new scalar degree(s) of freedom have been under intense investigation as alternatives to \(\Lambda\)CDM frameworks over the last two decades. Certainly, one of the most popular theories to explain the late-time acceleration in this sense is the \(f(R)\) gravity theory. In this work, we have presented an update of observational constraints with new perspectives on two well known and widely used \(f(R)\) gravity models, viz., the Hu-Sawicki and Starobinsky models. The robustness of the state-of-the-art assumptions on BAO+BBN data is used for the first time to constrain the dynamics of these models. Then the most recent SNe Ia data, taking into account the time variation of the Newton's gravitational constant over cosmic time to correct the supernovae distance modulus relation predictions, are used in the joint analysis with BAO+BBN. Finally, the inclusion of the very low-z Cepheid host distances, including the full covariance of the SH0ES sample, is considered to investigate the \(f(R)\) models under consideration in this work. We have found minor evidence for \(f(R)\) gravity under the Hu-Sawicki dynamics from the BAO+BBN and BAO+BBN+uncalibrated supernovae joint analyses, but the inclusion of Cepheid host distances makes the model compatible with GR. In general, in all the analyses, we find that \(b\) is consistent with \(0\) at \(95\%\) CL for both of the \(f(R)\) models. So we have not found any significant deviation from GR, i.e., \(b=0\), after the application of late-time data sets. For the Hu-Sawicki model, we have noticed a correlation between \(b\) and \(H_{0}\) from different observational data sets, which shows the tendency of the model to relax the \(H_{0}\) tension. Furthermore, the free parameter \(b\) of the theories is still weakly constrained, as is clearly observed from its large error bars. But the
Figure 3: This figure shows the magnitude-redshift relation of the PantheonPlus sample in the range \(0<z<2.3\) for the best fit values from BAO+BBN+PantheonPlus and BAO+BBN+PantheonPlus&SH0ES analysis summarized in Table 1 for Hu-Sawicki model (left panel) and Starobinsky model (right panel). The \(\Lambda\)CDM best-fit prediction is also shown in both panels.
generalization of the perspectives considered here can be carried out with Planck CMB data at perturbation level in the light of the \(H_{0}\) tension. We hope to report the results in this direction in future communications.
###### Acknowledgements.
S.K. gratefully acknowledges support from the Science and Engineering Research Board (SERB), Govt. of India (File No. CRG/2021/004658). S.P. acknowledges the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" (File No. SR/FST/MS-I/2019/41). P.Y. is supported by Junior Research Fellowship (CSIR/UGC Ref. No. 191620128350) from University Grant Commission, Govt. of India.
|
2304.05686 | Gate Camouflaging Using Reconfigurable ISFET-Based Threshold Voltage
Defined Logic | Most chip designers outsource the manufacturing of their integrated circuits
(ICs) to external foundries due to the exorbitant cost and complexity of the
process. This involvement of untrustworthy, external entities opens the door to
major security threats, such as reverse engineering (RE). RE can reveal the
physical structure and functionality of intellectual property (IP) and ICs,
leading to IP theft, counterfeiting, and other misuses. The concept of the
threshold voltage-defined (TVD) logic family is a potential mechanism to
obfuscate and protect the design and prevent RE. However, it addresses
post-fabrication RE issues, and it has been shown that dopant profiling
techniques can be used to determine the threshold voltage of the transistor and
break the obfuscation. In this work, we propose a novel TVD modulation with
ion-sensitive field-effect transistors (ISFETs) to protect the IC from RE and
IP piracy. Compared to the conventional TVD logic family, ISFET-TVD allows
post-manufacture programming. The ISFET-TVD logic gate can be reconfigured
after fabrication, maintaining an exact schematic architecture with an
identical layout for all types of logic gates, and thus overcoming the
shortcomings of the classic TVD. The threshold voltage of the ISFETs can be
adjusted after fabrication by changing the ion concentration of the material in
contact with the ion-sensitive gate of the transistor, depending on the Boolean
functionality. The ISFET is CMOS compatible, and therefore implemented on 45 nm
CMOS technology for demonstration. | Elmira Moussavi, Animesh Singh, Dominik Sisejkovic, Aravind Padma Kumar, Daniyar Kizatov, Sven Ingebrandt, Rainer Leupers, Vivek Pachauri, Farhad Merchant | 2023-04-12T08:16:00Z | http://arxiv.org/abs/2304.05686v1 | # Gate Camouflaging Using Reconfigurable
###### Abstract
Most chip designers outsource the manufacturing of their integrated circuits (ICs) to external foundries due to the exorbitant cost and complexity of the process. This involvement of untrustworthy, external entities opens the door to major security threats, such as reverse engineering (RE). RE can reveal the physical structure and functionality of intellectual property (IP) and ICs, leading to IP theft, counterfeiting, and other misuses. The concept of the threshold voltage-defined (TVD) logic family is a potential mechanism to obfuscate and protect the design and prevent RE. However, it addresses post-fabrication RE issues, and it has been shown that dopant profiling techniques can be used to determine the threshold voltage of the transistor and break the obfuscation. In this work, we propose a novel TVD modulation with ion-sensitive field-effect transistors (ISFETs) to protect the IC from RE and IP piracy. Compared to the conventional TVD logic family, ISFET-TVD allows post-manufacture programming. The ISFET-TVD logic gate can be reconfigured after fabrication, maintaining an exact schematic architecture with an identical layout for all types of logic gates, and thus overcoming the shortcomings of the classic TVD. The threshold voltage of the ISFETs can be adjusted after fabrication by changing the ion concentration of the material in contact with the ion-sensitive gate of the transistor, depending on the Boolean functionality. The ISFET is CMOS compatible, and therefore implemented on 45 nm CMOS technology for demonstration.
Reverse engineering, camouflaging, ion-sensitive field-effect transistor, threshold voltage-defined
## I Introduction
Building and maintaining a semiconductor foundry is a challenging and costly process. Increased demand for chips and the competition for time to market has led many chip designers to outsource the fabrication to third-party foundries [1]. This acts as an enabler for security threats such as reverse engineering [2], leading to the theft of the design's valuable intellectual property (IP piracy) [3], and counterfeiting [4]. In RE-based techniques, the adversary de-layers the IC to obtain information about the functionality of the gates and their wire connectivity to reconstruct the netlist [5]. To identify the gate's functionality and internal structure for post-manufacturing RE, techniques such as probing and high-resolution imaging are used to exploit critical information from the chip to perform various attacks efficiently [6][7].
After de-packaging the IC, the attacker captures images of the metal and base layers, which contain information about the metal connections used for device interconnection and gate identification, respectively. By compiling the information obtained from the images, the attacker can eventually reconstruct the netlist for IP overproduction and illegally sell the cloned design on the black market (Fig. 1). Researchers have proposed various camouflaging techniques to combat RE. One such countermeasure is gate camouflaging. The primary purpose of gate camouflaging is to hide the functionality of particular logic gates and make RE impossible or very difficult by solely observing physical characteristics.
The threshold voltage-defined (TVD) logic family is a gate camouflaging technique to protect the design against post-manufacturing RE. TVD uses a combination of transistors with different threshold voltages (\(V_{th}\)) to define a gate functionality within _an identical layout_. The procedure requires a one-time mask that is programmed with different threshold implants depending on the intended Boolean functionality. Consequently, to understand the gates' functionality, the \(V_{th}\) of all transistors must be probed for each TVD logic gate. Although information about the threshold voltage of transistors cannot be obtained from IC imaging or delayering, there are several dopant profiling techniques for measuring channel doping, such as spreading resistance profiling [8] and scanning capacitance microscopy [9][10]. To prevent the aforementioned attacks and to protect the design against reverse engineering during or after fabrication, the proposed ISFET-TVD gate has been developed. The main contributions of this paper are as follows:
* Implementation of a reconfigurable ISFET-TVD gate using different solvents for different Boolean functionality.
* Introduction of an obfuscation scheme that allows post-manufacture programming of the gates.
* Demonstration of different logic gates using ISFET-TVD, evaluated in \(45\,\mathrm{nm}\) CMOS technology.

Fig. 1: Example of a standard NAND gate that can be easily identified by looking at the top metal layers, reverse engineering, and reconstructing the design to clone the IP.
The remainder of the paper is structured as follows. Section II discusses the operation and basic design of the conventional TVD logic family as proposed in [6]. In Section III, by taking advantage of emerging technology such as ISFET, we describe the proposed ISFET-TVD logic gate. Section IV provides a comparison between the proposed design and the conventional TVD. The conclusion is presented in Section V.
## II Conventional TVD logic family
The \(V_{th}\) modulation technique (combining different voltage thresholds in a circuit) is widely used in the semiconductor industry to trade off power, performance, and robustness [11]. Based on the imposed \(V_{th}\), the transistors conduct or not. For example, the transistor is on when a low \(V_{th}\) (LVT) is assigned and stops conducting when a high \(V_{th}\) (HVT) is assigned during fabrication (Fig. 2(a)). Hence, for transistors of the same size based on HVT or LVT, the amount of current conduction between the transistors can be defined as \(I_{LVT}>I_{HVT}\) (Fig. 2(b)). Such a characterization can be used to implement _different Boolean functions_ while having the _same circuit structure_ with different \(V_{th}\) implants.
The TVD technique is one-time mask programmed with different threshold implants to realize different camouflaging gates on the same physical structure. Hence, stacks of transistors, including low (\(\sim\)\(300\,\mathrm{mV}\)) and high (\(\sim\)\(600\,\mathrm{mV}\)) \(V_{th}\), are used as a pull-down network (PDN) to provide all possible input combinations (Fig. 3). Each differential PDN pair is connected to the differential input of the sense amplifier. After the gate fires, the sense amplifier amplifies the corresponding current difference to the full logic level. The 2-input TVD logic family (A, B) is shown in Fig. 3.
There are two modes of operation: when the clock is low, the circuit is in precharge mode, and when the clock goes high, the gate evaluates. While the clock is low, transistors \(M_{p1}\) and \(M_{p4}\) are on, so \(V_{OUT}\) and \(\overline{V_{OUT}}\) are high, therefore \(OUT\) and \(\overline{OUT}\) are pulled down. On the other hand, during the evaluation phase (\(CLK\): High, \(M_{n3}\): ON), based on the input combination, only one of the parallel branches from each side of the differential PDNs is turned on. Since the transistors with the LVT or HVT are placed in an asymmetric manner, both branches conduct, but asymmetrically in the differential pairs. Therefore, the current drawn by one of the differential sides (the branch with the LVT) is greater than the other, defining the output as low or high. The table in Fig. 3 describes the transistor setup in the differential PDNs. These must be programmed as shown using LVT or HVT implants to implement different Boolean functions.
For example, the operation for a 2-input TVD-XOR gate is as follows. The precharge phase (\(CLK\): low) has already been described. After CLK is set to high (evaluation phase), for the input combination \(A/B=00\) (\(\overline{A}/\overline{B}=11\)), only two branches with \(M_{1}\) and \(M_{2}\) (LVT) from one PDN and \(M_{9}\) and \(M_{10}\) (HVT) from the asymmetric side are conducting. Since \(I_{LVT}>I_{HVT}\), more current will flow through \(M_{1}\) and \(M_{2}\), thus the \(\overline{V_{OUT}}\) quickly drops (\(\overline{V_{OUT}}=0\)), but \(V_{OUT}\) makes a slight dip and stays high (\(V_{OUT}=1\)). After the inversion, \(\overline{OUT}\) goes high and \(OUT\) remains low. Table I [6] describes the overheads of the TVD logic family compared to standard CMOS gates.
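For illustration, the following minimal Python sketch mimics this evaluation-phase behavior at a purely functional level: each input combination activates one branch per differential side, and the side whose conducting branch is LVT (larger current) pulls its precharged node low first. The branch-to-\(V_{th}\) mapping, current values, and function names are illustrative assumptions, not an extract of the actual circuit netlist.

```python
# Minimal behavioral sketch of the 2-input TVD evaluation phase.
# Branch names and current values are illustrative, not circuit-accurate.
I_LVT, I_HVT = 1.0, 0.4  # relative branch currents, I_LVT > I_HVT

def tvd_evaluate(branch_vth, a, b):
    """branch_vth maps each input combination (a, b) to a pair
    ('LVT'|'HVT', 'LVT'|'HVT') for the OUT-side and OUTbar-side branches."""
    side_out, side_outbar = branch_vth[(a, b)]
    i_out = I_LVT if side_out == 'LVT' else I_HVT
    i_outbar = I_LVT if side_outbar == 'LVT' else I_HVT
    # The side drawing more current pulls its precharged node low first;
    # after inversion, that side's output goes high.
    return 1 if i_out > i_outbar else 0

# Example: an XOR-like programming, where for A=B the OUTbar-side branch
# is LVT (output 0) and for A!=B the OUT-side branch is LVT (output 1).
xor_program = {(0, 0): ('HVT', 'LVT'), (1, 1): ('HVT', 'LVT'),
               (0, 1): ('LVT', 'HVT'), (1, 0): ('LVT', 'HVT')}
for a in (0, 1):
    for b in (0, 1):
        print(a, b, tvd_evaluate(xor_program, a, b))
```

Re-programming the dictionary to a different LVT/HVT arrangement realizes another Boolean function on the same structure, which is the essence of the camouflage.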
Accordingly, the same topology is used to implement other gates. This TVD logic family can camouflage the gate by using the same physical layout and design except for the threshold implant of the devices, since the \(V_{th}\) is not directly detectable by solely observing the physical layers after delayering. Still, there are several methods to measure the channel doping and ultimately reveal the gate functionality [9], [10].
## III Reconfigurable ISFET-TVD Logic Gate
This section introduces the TVD gate that uses ISFET devices (ISFET-TVD) instead of MOSFET transistors with different threshold implants. The proposed camouflaging gate can be programmed after fabrication to perform different Boolean operations, while having the same physical layout and design schematic for all logic gates.

Fig. 3: Conventional TVD logic family with 2-inputs (A, B); pull-down transistors considered with all possible input combinations; different Boolean functions can be realized by considering different arrangements of transistors with low or high threshold implants (LVT or HVT), described in the table.

Fig. 2: \(V_{th}\) modulation technique: (a) \(V_{th}\) programmable switch; LVT: ON, HVT: OFF, (b) IV characteristic of LVT and HVT transistors.
### _Emerging Technologies, ISFET_
The ion-sensitive field-effect transistor (ISFET) operates similarly to the metal-oxide-semiconductor field-effect transistor (MOSFET). However, the gate is extended by a passivation layer that is in contact with an external reference electrode (Ag/AgCl) through an electrolyte [12], [13]. Fig. 4 illustrates the schematics of a MOSFET (a) and an ISFET (b). For the ISFET, the reference voltage is provided by an electrode, which acts as the gate voltage (\(V_{G}\)). Compared to conventional MOSFET structures, the threshold voltage of an ISFET (\(V_{TH(ISFET)}\)) device also depends linearly on the surface potential of the oxide-electrolyte interface. The expression for the drain-source current (\(I_{DS}\)) of an n-type ISFET transistor is [14]:
\[I_{DS}=\mu_{n}c_{ox}\frac{W}{L}\left[(V_{GS}-V_{TH(ISFET)})V_{DS}-\frac{1}{2}V_{DS}^{2}\right]. \tag{1}\]
Equation (1) shows that, compared to the MOSFET, the threshold voltage is replaced by \(V_{TH(ISFET)}\), which depends on the ion concentration of the electrolyte. Therefore, \(V_{TH(ISFET)}\) can be changed and adjusted depending on the aqueous electrolyte that is electrically in contact with a reference electrode. This implies that \(V_{TH(ISFET)}\) can be changed and reconfigured to HVT or LVT after fabrication, depending on the ion concentration of the sample solvent.
### _Proposed ISFET-TVD_
Instead of using different threshold implants for transistors in differential PDNs, a parallel configuration employing ISFET transistors is proposed. _We present a camouflaged logic gate with the same schematic design and the identical layout for all logic gates, which can be programmed after fabrication depending on the Boolean functionality_. After fabrication, the devices' \(V_{th}\) must be adjusted by choosing solvents with different ion concentrations depending on the Boolean function. The \(V_{th}\) scales with the pH of the solution, so a low-pH (high hydrogen-ion concentration) electrolyte is used for LVT and a high-pH (low hydrogen-ion concentration) electrolyte is used for HVT. Thus, the amount of surface charge depends on the concentration of certain ions present in the solution (the hydrogen ion concentration, or pH). This modulates the surface charge at the insulator-electrolyte interface of the ISFET, resulting in different interface charge densities. Therefore, the pH response of an ISFET device can be characterized as a threshold voltage shift when the pH of the injected solution is varied (from 1 to 14).
Fig. 5(a) shows the \(V_{th}\) of an ISFET device as the pH is increased from 1 to 7 [15]. In particular, for a single ISFET device, the threshold voltage increases or decreases in accordance with the pH of the solution. The \(I_{DS}\) characteristics are shown in Fig. 5(b) for a fixed \(V_{DS}\) and different pH values. The \(I_{DS}\) is higher at lower pH values (lower \(V_{th}\)); e.g. \(I_{DS-pH4.9}>I_{DS-pH9.2}\). Based on the Nernst sensitivity limit, the threshold voltage shift for conventional ISFET devices is \(59\,\mathrm{mV/pH}\)[16].
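As a rough numerical illustration of this dependence, the sketch below shifts the threshold voltage linearly with pH at the ideal Nernst sensitivity quoted above and evaluates Eq. (1) in the triode region; the reference pH, baseline \(V_{th}\), and device parameters are placeholder assumptions rather than extracted device data.

```python
# Hedged sketch: threshold shift with pH (ideal Nernst limit) and the
# triode-region drain current of Eq. (1). Parameter values are illustrative.
NERNST = 0.059                 # V per pH unit (ideal sensitivity quoted above)
VTH_REF, PH_REF = 0.45, 7.0    # assumed baseline threshold at a reference pH

def vth_isfet(ph):
    """Threshold voltage shifted linearly with pH around the reference point."""
    return VTH_REF + NERNST * (ph - PH_REF)

def ids_triode(vgs, vds, ph, mu_cox=3e-4, w_over_l=2.0):
    """Drain current in the triode region, Eq. (1), for an n-type ISFET."""
    vth = vth_isfet(ph)
    if vgs <= vth:
        return 0.0  # device treated as off below threshold
    return mu_cox * w_over_l * ((vgs - vth) * vds - 0.5 * vds**2)

# A low-pH solvent gives the lower threshold (LVT-like), hence more current.
print(ids_triode(vgs=1.0, vds=0.1, ph=4.0))   # LVT-like branch
print(ids_triode(vgs=1.0, vds=0.1, ph=10.0))  # HVT-like branch
```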
Note that instead of having a transistor whose threshold voltage variation is a function of a transistor's geometry and doping, \(V_{th}\) of the ISFET depends on the pH of the sample solution. The LVT or HVT can be easily adjusted after fabrication by changing the pH of the solvent in contact with the transistor. Furthermore, ISFET devices have a gate structure that is compatible with the CMOS manufacturing process. The proposed ISFET-TVD gate replaces the stack of parallel n-type transistors that have LVT or HVT threshold implants with ISFET devices in the PDN of the TVD logic family. Different Boolean functions can be implemented using the same schematic while injecting _two different pH solutions_ to perform LVT or HVT (low or high pH value, respectively). With the ISFET-TVD, the gates can be reconfigured at any time after manufacturing.
## IV Experimental Result and Comparison
To demonstrate the correct functionality of the new camouflaging scheme, we simulated the 2-input ISFET-TVD logic gate with n-type ISFETs, as shown in Fig. 6. The conventional TVD logic family was evaluated in \(65\,\mathrm{nm}\) CMOS technology, and the proposed ISFET-TVD was evaluated in \(45\,\mathrm{nm}\) CMOS technology, with Verilog-A used to model the surface potential, reference electrode, and electrolyte of the ISFET. All logic gates are designed using the same geometry of NMOS and PMOS devices. A supply voltage of \(1.8\,\mathrm{V}\), a temperature of \(27^{\circ}\mathrm{C}\), and a frequency of \(20\,\mathrm{MHz}\) are the nominal simulation conditions. Cadence Virtuoso is used for simulation and analysis.
Fig. 4: Schematic of a field-effect transistor (a) MOSFET and an (b) ISFET.
Fig. 5: (a) Threshold voltage variation with respect to pH value when drain-source voltage (\(V_{DS}\)) is \(0.1\,\mathrm{V}\) and channel thickness (\(t_{sx}\)) is \(50\,\mathrm{nm}\)[15]; (b) drain-source current for different hydrogen ion concentrations (pH) at electrolyte interface for a fabricated ISFET device having channel width of \(15\,\mathrm{\SIUnitSymbolMicro m}\), channel length of \(7\,\mathrm{\SIUnitSymbolMicro m}\), and \(V_{DS}\) is \(2\,\mathrm{V}\).
The overall design with the proposed configuration proceeds in the following order. First, one high-pH and one low-pH solution are prepared for injection onto the ISFET devices in the differential PDNs. The pH solutions are then applied to the selected set of ISFETs according to the desired Boolean function. The input combinations are provided to the reference electrode of the ISFETs, which is electrically connected to the gate surface via the pH sample. The sense amplifier and operating phases are then used similarly to the conventional TVD design.
The number of ISFET transistors required is the same as for conventional TVD transistors. However, a single design schematic can be used to implement different gates. The ISFET-TVD can be considered _a reconfigurable universal gate_ that can operate as any Boolean function based on a specific configuration of ISFETs with pH solvents. Fig. 7 shows the transient analysis of a 2-input ISFET-TVD configured to operate as an XOR gate for inputs A and B.
From the results, we can observe that the ISFET-TVD behaves in the same way as the conventional TVD logic family. Therefore, in the precharge phase, \(CLK=0\) and the p-type transistors in the sense amplifier are turned on, pulling \(V_{OUT}\) and \(\overline{V_{OUT}}\) to VDD and thus \(OUT\) and \(\overline{OUT}\) to VSS. On the other hand, for the evaluation phase (\(CLK=1\)), depending on the combination of inputs, only one of the branches from each side of the differential PDNs is turned on. The use of asymmetric \(V_{th}\) causes more current to flow to one side of the differential PDNs, namely to the branch with LVT (low pH value), compared to its complementary branch with HVT (high pH).
Compared to the conventional TVD logic family, the proposed ISFET-TVD does not require transistors with different threshold implants; the voltage thresholds are programmed after fabrication. In addition, the ISFET-TVD gate provides the same schematic and layout for all types of logic gates. Hence, different logic gates can be programmed post-fabrication by reconfiguring the ISFET devices with different pH values.
However, the complexity of ISFETs makes it challenging to achieve an accurate, fast, and repeatable response, resulting in additional overheads. The ISFET-TVD area consumption is the same as conventional TVD (Table I) except for the additional reference electrodes (depending on the size of the reference electrode). There is also a relatively small delay based on the time it takes for the ISFET to sense the pH (depending on the passivation layer). Furthermore, additional power is initially required to allow the solvent to flow to the ISFETs to configure/reconfigure the gate.
ISFET is an emerging technology, and there are a number of research efforts underway to overcome these challenges and to demonstrate a rapid response. On the other hand, there are some other TVD logic techniques, including enhanced-TVD (E-TVD), which reduce the number of required transistors to enhance the overall performance [17]. This is important because, in the TVD logic structure, the number of transistors increases significantly with the number of inputs; this can reduce the efficiency of the gate. The proposed design has the same goals as the previous TVD designs, but uses ISFET technology to hide the design intent.
## V Conclusion
Reverse engineering reveals the functionality of the chip by effectively determining the gate-and-wire layout of the circuit. In this work, we proposed an ISFET-TVD camouflage gate to hide the functionality of the gate and make it RE-resilient. The ISFET-TVD extends conventional TVD logic gate technology with ISFETs. The advantage of the ISFET-TVD is that it does not require additional threshold implants and can be configured and reconfigured after fabrication. Moreover, it retains the same schematic and physical layout for all gate circuits. The gate functionality is defined by the injected electrolyte solvent. By replacing some of the conventional gates with ISFET-TVD logic gates, the circuit can be obfuscated to further enhance the security. ISFET transistors are CMOS compatible. However, due to the complexity of ISFETs compared to conventional MOSFETs, ISFET transistors come with additional overheads. This results in a larger delay, area, and power consumption compared to the conventional TVD logic family. In the future, we plan to improve the ISFET-TVD gate by reducing the number of ISFET transistors to keep the logic overhead of ISFET-TVD low enough to allow large-scale replacement of conventional gates with the proposed logic gate.
## Acknowledgement
This work was partially funded by Deutsche Forschungsgemeinschaft (DFG - German Research Foundation) under the priority programme SPP 2253.
Fig. 6: 2-input (A,B) ISFET-TVD gate schematic. Only differential PDNs with n-type ISFETs are shown. Low and high pH values must be placed on the ISFETs to operate as an LVT or HVT for different Boolean functionality.
Fig. 7: Transient analysis of a 2-input TVD-ISFET XOR gate for the input pH of 2 and 10, operating as devices with LVT and HVT, respectively. |
2308.03122 | "Kurosawa": A Script Writer's Assistant | Storytelling is the lifeline of the entertainment industry -- movies, TV
shows, and stand-up comedies, all need stories. A good and gripping script is
the lifeline of storytelling and demands creativity and resource investment.
Good scriptwriters are rare to find and often work under severe time pressure.
Consequently, entertainment media are actively looking for automation. In this
paper, we present an AI-based script-writing workbench called KUROSAWA which
addresses the tasks of plot generation and script generation. Plot generation
aims to generate a coherent and creative plot (600-800 words) given a prompt
(15-40 words). Script generation, on the other hand, generates a scene (200-500
words) in a screenplay format from a brief description (15-40 words). Kurosawa
needs data to train. We use a 4-act structure of storytelling to annotate the
plot dataset manually. We create a dataset of 1000 manually annotated plots and
their corresponding prompts/storylines and a gold-standard dataset of 1000
scenes with four main elements -- scene headings, action lines, dialogues, and
character names -- tagged individually. We fine-tune GPT-3 with the above
datasets to generate plots and scenes. These plots and scenes are first
evaluated and then used by the scriptwriters of a large and famous media
platform ErosNow. We release the annotated datasets and the models trained on
these datasets as a working benchmark for automatic movie plot and script
generation. | Prerak Gandhi, Vishal Pramanik, Pushpak Bhattacharyya | 2023-08-06T14:09:02Z | http://arxiv.org/abs/2308.03122v1 | # "Kurosawa": A Script Writer's Assistant
###### Abstract
Storytelling is the lifeline of the entertainment industry- movies, TV shows, and stand-up comedies, all need stories. A good and gripping script is the lifeline of storytelling and demands creativity and resource investment. Good scriptwriters are rare to find and often work under severe time pressure. Consequently, entertainment media are actively looking for automation. In this paper, we present an AI-based script-writing workbench called KUROSAWA which addresses the tasks of plot generation and script generation. Plot generation aims to generate a coherent and creative plot (600-800 words) given a prompt (15-40 words). Script generation, on the other hand, generates a scene (200-500 words) in a screen-play format from a brief description (15-40 words). Kurosawa needs data to train. We use a 4-act structure of storytelling to annotate the plot dataset manually. We create a dataset of 1000 manually annotated plots and their corresponding prompts/storylines and a gold-standard dataset of 1000 scenes with four main elements -- scene headings, action lines, dialogues, and character names -- tagged individually. We fine-tune GPT-3 with the above datasets to generate plots and scenes. These plots and scenes are first evaluated and then used by the scriptwriters of a large and famous media platform ErosNow1. We release the annotated datasets and the models trained on these datasets as a working benchmark for automatic movie plot and script generation.
Footnote 1: [https://erosnow.com/](https://erosnow.com/)
## 1 Introduction
Movies are one of the most popular sources of entertainment for people worldwide and can be a strong medium for education and social awareness. The impact and influence of film industries can be gauged from the fact that Hollywood movies invest 100s of millions of dollars and often make box-office collections of billions of dollars. The first motion picture _The Great Train Robbery, 1903_--black & white with no sound-- was created at the beginning of the 20th century. Since then, the art has gone through several transformations, and now people can instantly access 4K HD movies of their liking on any smart device.
Throughout the history of film, two of the contributors to a film's blockbuster success have been the quality of its plot and the manner of storytelling. The appeal of the movie decreases drastically if the viewers find the plot drably predictable. Writing a creative and exciting script is, therefore, a critical necessity and is extremely challenging. Add to this the constraints of time and budget, and the need for (at least partial) automation in script writing becomes obvious.
AI-based story generation has been used before. Based on the engagement-reflection cognitive explanation of writing, the computer model MEXICA (Perez and Sharples, 2001) generates frameworks for short tales. BRUTUS (Bringsjord and Ferrucci, 1999) creates short stories with predetermined themes like treachery. With the arrival of pre-trained transformer models, automatic story generation has got a shot in the arm. Transformer models like GPT-2 and GPT-3 are extensively used for text generation. These models have shown the capability of generating creative text, albeit sometimes with hallucinations (Zhao et al., 2020). Text generated by these models also sometimes lacks coherence and cohesiveness. On the other hand, template-based models can generate coherent text but lack creativity in generating new characters and events in the plot (Kale and Rastogi, 2020).
The process of creating a movie generally starts with an idea which is then used to create a plot which is used as the base to build the movie script (Figure 1).
Novel datasets are an important feature of this
paper. We closely studied the plots and prompts of movies from Bollywood and Hollywood. Such plots and prompts were scraped from Wikipedia2 and IMDb3, respectively. The plots are then annotated using the 4-act story structure- an extension of the well-known 3-act structure [17]. The 4-act structure and the annotation methods are explained in detail in **appendix A.5** and **section 4**, respectively.
Footnote 2: [https://www.wikipedia.org/](https://www.wikipedia.org/)
Footnote 3: [https://www.imdb.com/](https://www.imdb.com/)
We introduce a dataset of 1000 Hollywood movie scenes and their short descriptions. The scripts are scraped from IMSDb4. The scenes are annotated with the four major components of a screenplay: _sluglines, action lines, character names_ and _dialogues_, described in details in appendix A.4
Footnote 4: [https://www.imsdb.com/](https://www.imsdb.com/)
We introduce a workbench which we call "Kuro-sawa", consisting of datasets and a pair of GPT-3 [1] models fine-tuned with the said datasets. One GPT-3 model generates a movie plot given a short description of the storyline (15-40 words), while the other creates a scene based on a short description of the required scene.
Importantly, we have provided the "Kurosawa" platform to one of the biggest media platforms engaged in the business of making movies and TV shows, producing music and soundtracks, etc. -- to help script and content writers from different film industries create new movie plots.
**Our contributions in this work are as follows:**
* To the best of our knowledge, this is the first work on generating movie scenes from a scene description.
* We create and publicly release two datasets: (a) a parallel dataset of 1000 movie storylines and their corresponding plots, (b) a parallel dataset of 1000 movie scenes and their corresponding descriptions. In (a), we link available movie storylines from IMDb with available corresponding movie plots from Wikipedia. In (b), we link available movie scenes from IMSDb with corresponding descriptions from IMDb.
* We manually annotate movie plots according to a 4-act structure which is an extension of the well-known 3-act structure [17]. Professional scriptwriters from the media and entertainment industry guided us very closely.
* We manually annotate movie scenes with four major components of a scene: _sluglines, action lines, character names_ and _dialogues_, along with a short description of the scene.
* We introduce "Kurosawa": a workbench that consists of multiple datasets and models which can assist script and scene writers in the film industry.
## 2 Motivation
Movies are a form of visual media and can have a huge influence on life and society. Movie scripts are often 30,000 words long, comparable to a 100-page book. Though scripts can be diverse, they have fixed and oft-repeated structures, _e.g., scene heading, transition, character name, etc._. This fixedness and repetition can be dull and time-consuming for writers and can be handed over to AI. However, a surprising fact is that AI-based models can be creative in generating novel characters and stories. These reasons have motivated the film industry to seriously consider harnessing AI for various aspects of movie making, script and scene writing being one of them.
Los Angeles Times, 19 December 2022, asks, "AI is here, and it's making movies. Is Hollywood ready?". The newspaper edition reports mainly movie editing efforts ongoing at various places using AI. Our task in the paper is allied but different in the sense that we aim to provide a "script-writers' assistant".
## 3 Related Work
### Automatic Story Generation
Neural models have been able to produce stories by conditioning on different contents like visuals [13] and succinct text descriptions [15]. Work on plot controllable, plan-driven story generation abounds [17, 18, 19, 20]. A related kind of work is automatic poetry generation based on keywords or descriptions [21, 22].
Figure 1: The thought process a scriptwriter follows in creating a movie script. An idea (**storyline**) leads to a **plot** which is then converted into a **movie script**.
### Plot Generation
PlotMachines (Rashkin et al., 2020) generates multi-paragraph stories based on some outline phrases. Fan et al. (2018) introduce a hierarchical sequence-to-sequence fusion model that first generates a premise and then conditions on it to generate stories of up to 1000 words. This work -- unlike ours -- is non-neural and template-driven and is, therefore, much less creative and novel compared to what we generate.
### Scene Generation
Automatic scene or script generation has received comparatively less attention. Dialogue generation with a semblance of scene generation has been done (Li et al., 2016; Huang et al., 2018; Tang et al., 2019; Wu et al., 2019). There has recently been some work focusing on guiding dialogues with the help of a narrative (Zhu et al., 2020). We generate scenes in which the main elements come from a small prompt as input.
## 4 Dataset
For movie plot generation, we have taken the plots from Wikipedia. The prompts for this task have been taken from IMDb. In IMDb, this prompt can be of two types. The first is a short description (15-40 words) of the movie, while the second is a long storyline, which varies from 30-200 words and contains much more details about the different characters and events of the movie. We have also collected the genres of each film from IMDb. We then divide the plots using a 4-act structure. For scene generation, we take the scripts from IMSDb and annotate them with the key elements of a scene.
### Plot Generation Dataset
We have created a dataset of 1000 plots consisting of both Bollywood and Hollywood plots, extracted from Wikipedia using the _wikipedia_ module in python. The plots collected are around 700 words long on average.
#### 4.1.1 Annotation Guidelines
We annotate the plots by manually dividing them into 4 parts using the 4-act structure described in **appendix A.5**. We place a single tag at the end of each act: _(one)_ (Act 1), _(two-a)_ (Act 2 Part A), _(two-b)_ (Act 2 Part B) and _(three)_ (Act 3) as delimiters. An example for plot annotation is given in the **appendix (Figure 6)**.
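A minimal sketch of how an annotated plot can be split back into its four acts using these delimiter tags is given below; the literal tag strings and the example text are illustrative placeholders, with the exact surface form of the tags following the appendix example.

```python
import re

# Minimal sketch: split an annotated plot into its four acts using the
# end-of-act delimiter tags described above. The literal tag strings are
# assumed to appear in the plot text exactly as "(one)", "(two-a)", etc.
ACT_TAGS = ["(one)", "(two-a)", "(two-b)", "(three)"]

def split_acts(annotated_plot: str):
    pattern = "|".join(re.escape(t) for t in ACT_TAGS)
    parts = [p.strip() for p in re.split(pattern, annotated_plot)]
    # The trailing split element is empty if the plot ends with "(three)".
    return dict(zip(["act1", "act2a", "act2b", "act3"], parts))

example = "Setup ... (one) Rising action ... (two-a) Midpoint fallout ... (two-b) Resolution ... (three)"
print(split_acts(example))
```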
#### 4.1.2 Movie Genres
To bring some controllability to the plots generated by the model, we have introduced the genres of the movies in the dataset along with the storyline. We concatenate the genres at the beginning of the storyline. Figure 2 shows the distributions of genres in the dataset.
### Scene Generation Dataset
Movie scripts are very long. A 2-hour movie corresponds to around 30,000 words. Language models used for creative text generation, like GPT-2 and GPT-3, have token limits of 1024 and 2048, respectively, making it impossible to handle an entire script in one go. Hence, we divided the scripts into scenes and manually created their short descriptions. This allows training the scenes independently instead of relying on any previous scenes.
Movie scripts comprise multiple elements described in **appendix A.4**. The different elements increase the difficulty models face in learning to distinguish each element. To overcome this obstacle, we tag four major elements throughout the script -- _sluglines, action lines, dialogues and character names_.
#### 4.2.1 Annotation Guidelines
We keep the four major elements present in every script -- _sluglines, action lines, character name and dialogues_-- and remove any other type of information such as page number, transitions or scene dates. The tagging of the four major elements is done using beginning and ending tags that are wrapped around the elements, as shown below:
* Sluglines: \(\langle\)bsl\(\rangle\)...\(\langle\)esl\(\rangle\)
* Action Lines: \(\langle\)bal\(\rangle\)...\(\langle\)eal\(\rangle\)
* Character Name: \(\langle\)bcn\(\rangle\)...\(\langle\)ecn\(\rangle\)
* Dialogue: \(\langle\)bd\(\rangle\)...\(\langle\)ed\(\rangle\)

Figure 2: Genre distribution within the plot dataset
An example of an annotated scene is seen in Fig. 3.
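A small sketch of extracting the four tagged element types from an annotated scene is shown below; plain ASCII angle brackets are used for the tags here, and the example scene text is made up for illustration.

```python
import re

# Sketch: pull the four tagged element types out of an annotated scene.
# Tags are assumed to appear literally as <bsl>...<esl>, <bal>...<eal>,
# <bcn>...<ecn>, <bd>...<ed>, following the annotation guidelines above.
TAGS = {"slugline": ("bsl", "esl"), "action": ("bal", "eal"),
        "character": ("bcn", "ecn"), "dialogue": ("bd", "ed")}

def extract_elements(scene: str):
    out = {}
    for name, (b, e) in TAGS.items():
        out[name] = re.findall(rf"<{b}>(.*?)<{e}>", scene, flags=re.S)
    return out

scene = "<bsl>INT. KITCHEN - NIGHT<esl><bal>MAYA enters, soaked.<eal><bcn>MAYA<ecn><bd>Don't ask.<ed>"
print(extract_elements(scene))
```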
## 5 Experiments and Evaluation
We fine-tune GPT-3 with our datasets (see **appendix A.6**).
### Plot Generation
We have created 5 models by fine-tuning GPT-3 with our movie plot dataset in the following manner:

* **original (without annotation) (O)**: input -- short storylines; output -- plots without any annotations.
* **annotation and short input (AS)**: input -- short storylines; output -- plots annotated with the 4-act structure.
* **annotation and long input (AL)**: input -- long, more descriptive storylines; output -- plots annotated with the 4-act structure.
* **annotation and short input with genres included (ASG)**: input -- short storylines and genres; output -- plots annotated with the 4-act structure.
* **annotation and long input with genres included (ALG)**: input -- long, more descriptive storylines along with the genres; output -- plots annotated with the 4-act structure.
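As an illustration of how the training data for the genre-conditioned (ASG-style) model could be assembled, the sketch below writes prompt/completion pairs to a JSONL file in the prompt-completion format used by the GPT-3 fine-tuning interface; the separator string, end-of-completion marker, file name, and example record are illustrative choices rather than the exact configuration used in this work.

```python
import json

# Sketch of assembling prompt/completion pairs for the genre-conditioned
# (ASG-style) model. The JSONL prompt/completion layout follows the GPT-3
# fine-tuning format; separator and end markers below are illustrative.
def make_record(genres, storyline, annotated_plot):
    prompt = f"{', '.join(genres)} | {storyline}\n\n###\n\n"
    completion = " " + annotated_plot + " END"
    return {"prompt": prompt, "completion": completion}

examples = [
    (["drama", "thriller"],
     "A retired detective is pulled back for one last case.",
     "Act one text ... (one) ... (two-a) ... (two-b) ... (three)"),
]

with open("plots_asg.jsonl", "w") as f:
    for genres, storyline, plot in examples:
        f.write(json.dumps(make_record(genres, storyline, plot)) + "\n")
```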
For automatic evaluation we use **BLEU**Papineni et al. (2002), **Perplexity**Jelinek et al. (1977), **ROUGE**Lin (2004). We also use human evaluation in the form of a five-point Likert Scale Likert (1932). The rating system has 1-> Strongly Disagree, 2-> Disagree, 3-> Neutral, 4-> Agree, 5-> Strongly Agree. Human-written stories are assumed to have a rating of 5 for each of the following 5 features: (1) **Fluency**: grammatical correctness; (2) **Coherence**: logical ordering of sentences and paragraphs; (3) **Relevance**: Whether the key points from the prompt have been highlighted in the output; (4) **Likability**: The measure of how much the story is enjoyable; (5) **Creativity**: If the output introduced any new events, character profiles, or relationships.
For plot generation, we generate 50 plots from 50 test prompts. We divide the stories into five groups of 10 and assign three evaluators to each group.
For scene generation, we generate ten scenes from 10 test prompts. We assign five evaluators to rate these ten stories.
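For reference, a minimal sketch of the scoring pipeline is given below: corpus-level BLEU-n against the reference plots (here with NLTK and a standard smoothing function) and a per-feature mean of the Likert ratings. The smoothing choice and the toy inputs are illustrative assumptions and do not reproduce the exact evaluation setup.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
import numpy as np

# Sketch of the automatic/human scoring described above: corpus-level BLEU-n
# against reference plots, plus a mean Likert rating per feature.
def bleu_n(references, hypotheses, n):
    weights = tuple([1.0 / n] * n)
    smooth = SmoothingFunction().method1
    refs = [[r.split()] for r in references]   # one reference per sample
    hyps = [h.split() for h in hypotheses]
    return corpus_bleu(refs, hyps, weights=weights, smoothing_function=smooth)

references = ["the detective finds the hidden letter and leaves town"]
hypotheses = ["the detective finds a letter and quietly leaves town"]
print({f"BLEU-{n}": round(bleu_n(references, hypotheses, n), 4) for n in (2, 3, 4)})

# Likert ratings: rows = evaluators, columns = fluency, coherence,
# relevance, likability, creativity (toy numbers).
ratings = np.array([[4, 3, 3, 3, 4], [5, 2, 2, 3, 3], [4, 3, 2, 3, 3]])
print(dict(zip(["fluency", "coherence", "relevance", "likability", "creativity"],
               ratings.mean(axis=0).round(2))))
```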
## 6 Results and Analysis
We present our observations and evaluations. The nature of our task makes human evaluation take precedence over automatic evaluation (it is for automatic movie script generation, after all!). The qualitative analysis of our generated plots and scenes is based on feedback from 5 professional scriptwriters of our industry partner, a well-known media platform.
### Plot Generation
#### 6.1.1 Automatic Evaluation
Table 1 shows auto-evaluation scores for the multiple GPT-3 plot generation models.
#### 6.1.2 Human Rating
We conducted human evaluation on the Hollywood annotated short-input model. The evaluation was done by five groups of 3 people, with each group having been assigned 10 unique plots. The ratings given for the 5 features are in Figure 5. The average scores for fluency, creativity, likability, coherence and relevance are **3.98**, **3.29**, **2.97**, **2.65** and **2.55**, respectively. Fluency of almost 4 is an indicator of the power of GPT-3 as a language model. Creativity and likability are respectable at a value of around 3.0. The low BLEU scores support the average creativity score (Table 1). Figure 5 indicates that coherence and relevance still have major room for improvement.

Figure 4: The above paragraph is a partial example of a movie plot generated by the model fine-tuned with input as short storyline and output as plot annotated with the 4-act structure.

Figure 3: The image depicts a portion of a movie scene with the four major elements annotated.
The MAUVE (Pillutla et al., 2021) value measures the gap between neural text and human text. We have separately calculated the MAUVE scores for 20 plots and 50 plots. The weighted average of the MAUVE scores for the two experiments is **0.48** which is reasonably good.
#### 6.1.3 Qualitative Observations
Professional scriptwriters from our industry partner have given the following observations:
**Non-annotated Hollywood Plots**
* The build-up is creative and interesting, but the ending becomes incoherent.
* Some characters which are introduced in the beginning are never mentioned again.
* The output is not portraying the key points or the theme mentioned in the input.
**Annotated Hollywood Plots**
* The plots are much more coherent, and the endings are logical.
* There is still hallucination present (a common feature of all models).
* The longer inputs made the plots more attentive to the key points.
**Annotated Hollywood Plots with Genres included**
* Along with the above points, now the plots generated are more tilted towards the genre or genres of the movie the writer wants to create.
* Addition of genre gives some control over the kind of plot generated by the model.
**Annotated Bollywood plots**
* The outputs show incoherence in the last two paragraphs and repetition of the same characters throughout the plot.
* The flow of the plot is not fast enough, i.e., the plot does not move ahead much.
* Many of the outputs have a 1990s theme around them, where the characters are separated and then find each other later. This is due to a skewed dataset with lesser modern plots.
### Scene Generation
We fine-tuned GPT-3 for scene generation with our dataset. We generated ten scenes using the models mentioned in 5.1. Figure 7 in the appendix shows an example of a completely generated scene.
#### 6.2.1 Human Ratings
We conducted a human evaluation on 10 scenes generated by the above model. 5 people evaluated the scenes using the Likert Scale. The ratings for the five features can be seen in Figure 5. The average scores for _fluency, creativity, likability, coherence,_ and _relevance_ are **4.48**, **3.9**, **3.48**, **3.46** and **3.86**, respectively. All of the values are above the neutral mark and imply that the generated scenes are close to human-written scenes.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline
**Models** & **O** & **AS** & **ASG** & **AL** & **ALG** \\ \hline
**Perplexity** & 2.48 & **1.84** & 2.43 & 2.33 & 2.63 \\ \hline
**BLEU-2 (\%)** & 12.95 & 12.01 & 12.51 & 13.08 & **14.52** \\ \hline
**BLEU-3 (\%)** & 4.70 & 4.21 & 4.55 & 4.84 & **5.59** \\ \hline
**BLEU-4 (\%)** & 2.14 & 1.92 & 2.13 & 2.27 & **2.59** \\ \hline
**ROUGE-L (\%)** & 22.67 & 21.72 & 23 & 24.02 & **24.88** \\ \hline
**Distinct 3-gram (\%)** & 97.55 & 97.61 & 97.39 & 97.28 & **98.09** \\ \hline
**Repetition 3-gram (\%)** & 1.99 & 2.02 & **1.72** & 1.89 & 1.74 \\ \hline
\end{tabular}
\end{table}
Table 1: Scores from common evaluation metrics for 5 Hollywood plot generation models fine-tuned on GPT-3 as O, AS, ASG, AL, ALG (5.1)
#### 6.2.2 Qualitative Observations
In this section, we analyze the quality of the scenes generated by the GPT-3 model. This analysis has been done by professional scriptwriters from the previously mentioned media company.
* The model produces a well-structured scene.
* It can create new characters and fabricate dialogues even when they are unimportant.
* The key points from the input can be found in the output.
* There are some lines that are repetitive.
* The output is not completely coherent.
## 7 Conclusion and Future Work
In this paper, we have reported a first-of-its-kind work on automatic plot and script generation from prompts. Automatic evaluation, human rating using the Likert scale, and qualitative observations by professional scriptwriters from our industry partner (a large and well-reputed media platform) -- all vindicate the power of our rich dataset and GPT-3 in script generation. We hope our work will help television show writers, game show writers, and so on.
There are several future directions: (i) the imbalance in the Bollywood plot dataset needs to be rectified; (ii) there is a lot of variation in Indian script because of multilingualism, which needs addressing; (iii) the most obvious weakness of GPT-3 is not being able to handle factual data and numbers, causing hallucination and preventing the automatic generation of documentaries and biographies. Detection and resolution of hallucination is anyway a growing need for language models.
## 8 Limitations
* In the plot generation dataset, the Wikipedia plots are sometimes not written by professional content writers from the film industry. Therefore these plots may fail to include the main events of the movie.
* In a few cases, the model fails to generate coherent events along with the abrupt introduction of characters in the plots and scenes.
* Although it has been noticed only a few times, the plot or scene generated contains repeated clauses or phrases.
* The model hallucinates and generates factually incorrect things, making it incapable of generating biographies or documentaries.
* The plot or scene may not abide by the theme of the input or genre mentioned along with the prompt.
|
2308.15998 | Lambda polarization at Electron-ion collider in China | Lambda polarization can be measured through its self-analyzing weak decay,
making it an ideal candidate for studying spin effects in high energy
scatterings. In lepton-nucleon deeply inelastic scatterings (DIS), Lambda
polarization measurements can probe the polarized parton distribution functions
(PDFs) and the polarized fragmentation functions (FFs). One of the most
promising facilities for high-energy nuclear physics research is the proposed
Electron-ion collider in China (EicC). As a next-generation facility, EicC is
set to propel our understandings of nuclear physics to new heights. In this
article, we study the Lambda production in electron-proton collision at EicC
energy, in particular Lambda's reconstruction based on the performance of the
designed EicC detector. In addition, taking spontaneous transverse polarization
as an example, we provide a theoretical prediction with statistical projection
based on one month of EicC data taking, offering valuable insights into future
research prospects. | Zhaohuizi Ji, Xiaoyan Zhao, Aiqiang Guo, Qinghua Xu, Jinlong Zhang | 2023-08-30T12:38:27Z | http://arxiv.org/abs/2308.15998v1 | # Lambda polarization at Electron-ion collider in China1
###### Abstract
Lambda polarization can be measured through its self-analyzing weak decay, making it an ideal candidate for studying spin effects in high energy scatterings. In lepton-nucleon deeply inelastic scatterings (DIS), Lambda polarization measurements can probe the polarized parton distribution functions (PDFs) and the polarized fragmentation functions (FFs). One of the most promising facilities for high-energy nuclear physics research is the proposed Electron-ion collider in China (EicC). As a next-generation facility, EicC is set to propel our understandings of nuclear physics to new heights. In this article, we study the Lambda production in electron-proton collision at EicC energy, in particular Lambda's reconstruction based on the performance of the designed EicC detector. In addition, taking spontaneous transverse polarization as an example, we provide a theoretical prediction with statistical projection based on one month of EicC data taking, offering valuable insights into future research prospects.
Electron-ion collider in China; Lambda polarization; polarizing Fragmentation Functions; nucleon structure.
## I Introduction
Spin, as a fundamental property of particles, plays a crucial role in the advancement of modern physics. A growing number of experimental findings, such as the spontaneous transverse polarization of \(\Lambda\) and the proton spin crisis, have made it evident that there is much more to be understood about spin behaviors in high-energy reactions. The Lambda hyperon (\(\Lambda/\overline{\Lambda}\)) emerges as an exceptionally powerful tool in spin physics, primarily due to its parity-violating weak decay, which results in a non-uniform angular distribution of its products with respect to Lambda's spin direction [1]. In high energy reactions Lambda can be abundantly produced and efficiently detected via the decay channel \(\Lambda\to p\pi^{-}\) (branching ratio is \(63.9\%\)). In the \(\Lambda\) rest frame, the decay protons are preferentially emitted along the polarization direction of their parent \(\Lambda\), with the following angular distribution,
\[\frac{dN}{d\cos\theta^{*}}\propto\mathcal{A}(1+\alpha_{\Lambda(\overline{ \Lambda})}P_{\Lambda(\overline{\Lambda})}{\rm cos}\theta^{*}), \tag{1}\]
where \(\mathcal{A}\) is the detector acceptance, \(\alpha_{\Lambda}\) = 0.732\(\pm\)0.014 is the weak decay parameter [2], \(\theta^{*}\) is the angle between proton momentum direction and \(\Lambda(\overline{\Lambda})\) polarization direction in the \(\Lambda\) rest frame.
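As a toy illustration of how Eq. (1) is used, the sketch below generates pseudo-data with a known polarization and recovers it from the mean of \(\cos\theta^{*}\), which for a flat acceptance satisfies \(\langle\cos\theta^{*}\rangle=\alpha_{\Lambda}P_{\Lambda}/3\); the injected polarization and sample size are arbitrary assumptions.

```python
import numpy as np

# Toy sketch of extracting the polarization from Eq. (1). With a flat
# acceptance, <cos(theta*)> = alpha * P / 3, so P can be estimated from the
# sample mean of cos(theta*); the pseudo-data below is generated on the fly.
rng = np.random.default_rng(0)
alpha, P_true = 0.732, 0.05

def sample_costheta(n):
    # accept-reject sampling of the density (1 + alpha*P*cos)/2 on [-1, 1]
    c = rng.uniform(-1, 1, size=3 * n)
    keep = rng.uniform(0, 1 + alpha * abs(P_true), size=c.size) < 1 + alpha * P_true * c
    return c[keep][:n]

cos_t = sample_costheta(200_000)
P_hat = 3.0 * cos_t.mean() / alpha
P_err = 3.0 * cos_t.std(ddof=1) / (alpha * np.sqrt(cos_t.size))
print(f"P = {P_hat:.4f} +/- {P_err:.4f}")
```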
The spontaneous transverse polarization of \(\Lambda\) was first observed in 1976 in unpolarized proton beam scattering on a Beryllium target [3], at a time when perturbative Quantum Chromodynamics (QCD) predicted only a negligible polarization [4]. This puzzling result triggered a series of theoretical and phenomenological studies which have been extended far beyond \(\Lambda\) polarization itself. Experimentally, measurements of \(\Lambda\) polarization have since been extensively explored in various high-energy processes, encompassing electron-positron annihilation [5; 6; 7], lepton-nucleon deeply inelastic scattering (DIS) [8; 9; 10], hadron-hadron scattering [11; 12; 13], and heavy ion collisions [14; 15; 16; 17; 18], yielding invaluable insights into numerous aspects of physics. These measurements have served diverse purposes, including unraveling the physical origins of spontaneous polarization, understanding nucleon spin structure, comprehending spin effects in fragmentation processes, and exploring extreme conditions of high density and high temperature in heavy ion collisions.
High precision Lambda polarization measurements at the proposed electron-ion colliders worldwide provide unique opportunities to study the spin-dependent fragmentation functions (FFs) and polarized parton distribution functions (PDFs) [19; 20; 21; 22; 23; 24; 25; 26]. The Electron-ion collider in China, EicC, is the proposed next generation high energy nuclear physics facility, which is based on the High Intensity Heavy-ion Accelerator Facility (HIAF) in Huizhou, China [27; 28]. It is conceptually designed to deliver high luminosity electron-proton and electron-ion collisions, with electron, proton, and light ion beams highly polarized. With complementary kinematics coverage to the other electron-ion collider proposals worldwide [29; 30; 31], the featured physics at EicC includes 3-dimensional proton spin structure, nuclear partonic structure, exotic hadron states, _etc._ Lambda polarization measurement at EicC is expected to be sensitive not only to the spin-dependent parton distribution functions, but also to the spin-dependent fragmentation functions. Potential measurements of Lambda transverse polarization and impact studies have been performed for the US-based EIC, which is designed to collide electron and proton/ion beams at significantly higher energies than EicC [26].
In this work, the Lambda production in electron-proton scattering under EicC configuration is studied. Based on the current conceptual EicC detector design, especially the design of tracking subsystem, the reconstruction performance for \(\Lambda/\overline{\Lambda}\) is assessed. In section II, we will describe the simulation setup including the event generator in use, the detector
configuration and the corresponding fast simulation procedure. Performance of \(\Lambda/\overline{\Lambda}\) reconstruction will be presented in section III. In section IV, taking the spontaneous transverse polarization as an example, the potential statistical precision for polarization measurements will be given together with theoretical predictions. We will give a brief summary and outlook in section V.
## II Simulation framework
To simulate the Lambda production in electron-proton scattering, the event generator PYTHIAeRHIC [32], a modified version of PYTHIA6.4.28 [33], is used with the Parton Distribution Functions (PDFs) input from LHAPDF [34]. The collision energy we choose is the baseline energy outlined in the EicC whitepaper [28], 3.5 GeV electron on 20 GeV proton. The leading-order diagram for \(\Lambda\) production in the DIS process is shown in Fig. 1. The kinematics of the studied DIS events are constrained in the following ranges: Bjorken-\(x\) \(10^{-3}<x_{B}<1\), transferred 4-momentum squared \(Q^{2}>1\) GeV\({}^{2}\), and hadronic invariant mass squared \(W^{2}>4\) GeV\({}^{2}\). Ten million such DIS events are generated for the following studies.
At generator level, the average number of \(\Lambda\) produced per DIS event in the above kinematic ranges is about 0.1. In the laboratory frame, the momentum and polar angle distributions for \(\Lambda\) and its decay products are shown in Fig. 2. Comparing the distributions of the daughter proton and pion with those of \(\Lambda\), it can be found that the proton carries most of \(\Lambda\)'s momentum while the pion only shares a small fraction. \(\Lambda\) is preferentially produced in the proton-going direction, with a large amount produced at very forward angles. The corresponding distributions for \(\overline{\Lambda}\) are similar, with slight differences that will be discussed later.
In this work, we are mostly interested in the \(\Lambda/\overline{\Lambda}\) from the struck quark fragmentation (current fragmentation region). Typically, Feynman-\(x\) is an effective variable to separate the current fragmentation region and the target fragmentation region. Feynman-\(x\), \(x_{F}\), is defined as \(x_{F}\equiv 2p_{L}^{\Lambda(\overline{\Lambda})}/W\), where \(p_{L}^{\Lambda(\overline{\Lambda})}\) is the \(\Lambda(\overline{\Lambda})\) longitudinal momentum in the hadronic center-of-mass frame, and \(W\) the hadronic invariant mass. The criterion \(x_{F}>0\) is expected to suppress the contributions from the target fragmentation region. The correlation between Feynman-\(x\) and \(\Lambda/\overline{\Lambda}\) pseudorapidity \(\eta\) is shown in the upper panels of Fig. 3. Here, pseudorapidity is defined as \(\eta=-{\rm ln}(\tan(\theta/2))\), where \(\theta\) is the polar angle. Following the EicC convention, positive \(\eta\) is along the moving direction of the proton/ion beam. One can see that \(\Lambda/\overline{\Lambda}\) with \(x_{F}<0\) are mostly produced in the very forward region; they are discarded in the following simulation and analysis. Considering the limited coverage of the EicC central detector, \(|\eta|<3\) is applied for \(\Lambda/\overline{\Lambda}\) and their daughter particles. Transverse momentum, \(p_{T}\), vs. \(\eta\) for \(\Lambda\) and \(\overline{\Lambda}\), with \(x_{F}>0\) and \(|\eta|<3\), are shown in the lower panels of Fig. 3. By tracing back the full event records in PYTHIA, the origins of such \(\Lambda/\overline{\Lambda}\) are shown in Fig. 4. At EicC energy, about half of the \(\Lambda/\overline{\Lambda}\) are from decays of heavier hyperons. There are also significant contributions from beam remnants (di-quarks). In this study, we do not separate the different sources of \(\Lambda/\overline{\Lambda}\).
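A minimal sketch of this kinematic selection is given below, assuming the generator record already provides, for each \(\Lambda\), the lab momentum components together with the longitudinal momentum in the hadronic center-of-mass frame and \(W\); the toy input values are placeholders.

```python
import numpy as np

# Sketch of the kinematic selection described above, assuming the generator
# record provides per Lambda: lab momentum components, the longitudinal
# momentum p_L in the hadronic c.m. frame, and the invariant mass W.
def pseudorapidity(px, py, pz):
    p = np.sqrt(px**2 + py**2 + pz**2)
    theta = np.arccos(pz / p)
    return -np.log(np.tan(theta / 2.0))

def select_lambdas(events):
    kept = []
    for ev in events:
        x_f = 2.0 * ev["pL_cms"] / ev["W"]      # Feynman-x
        eta = pseudorapidity(ev["px"], ev["py"], ev["pz"])
        if x_f > 0 and abs(eta) < 3.0:
            kept.append({**ev, "xF": x_f, "eta": eta})
    return kept

toy = [{"px": 0.3, "py": -0.1, "pz": 2.5, "pL_cms": 1.2, "W": 6.0},
       {"px": 0.1, "py": 0.2, "pz": -4.0, "pL_cms": -0.8, "W": 5.5}]
print(select_lambdas(toy))   # only the first toy Lambda survives the cuts
```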
The preliminary conceptual design of the EicC detector has been described in the white papers [27, 28]. From inner to outer, it consists of the vertex/tracking detector, the particle identification (PID) system, the calorimeter system, \(etc\). For the \(\Lambda\) measurement, the most relevant parts are the tracking and PID systems. The latest design of the EicC tracking detector is thoroughly described in Ref. [35]. The current design of the tracking system uses a hybrid model. For middle rapidity (\(|\eta|<1.1\)), there are 5 layers of silicon and 4 layers of Micro-Pattern Gaseous Detectors (MPGD), radially ranging from 3.3 cm to 77.5 cm. For \(|\eta|>1.1\), the tracking system consists of silicon disks followed by large-area Micromegas in the forward (proton/nucleus going) direction and all silicon disks in the backward (electron-going) direction. For the PID system, time-of-flight detectors and Cherenkov detectors will be used for particle identification at middle and forward rapidity, respectively.
For the tracking system, full GEANT4 simulation has been performed with the latest design, based on which the resolutions of the primary vertex position, of the track-to-track and track-to-point distances, and of the track momentum, as well as the tracking efficiency as a function of track \(p_{T}\) and \(\eta\), are given in detail in Ref. [35] (Fig. 4-9 therein). A fast simulation framework is also developed to simulate the detector responses learned from the GEANT4-based simulation. In this work, we follow the same fast simulation procedure described in Ref. [35]. The detailed GEANT4 simulation for the PID system was not available when this work was performed. To mimic the particle identification imperfection, a simplified "PID smearing" is included in the detector-effect fast simulation. In principle the PID efficiency is correlated with the momentum of the particle. However, we employ a toy model to study the PID effect with a typical PID efficiency of 95% as follows. An identified \(\pi\), \(K\), or \(p\) has a \(95\%\) probability to be correct, and a \(2.5\%\) probability to be each of the other two particles, as described by the following matrix:
Fig. 1: Leading-order diagram for \(\Lambda\) production in a semi-inclusive DIS process.
\[\begin{bmatrix}\pi\\ K\\ p\end{bmatrix}_{\rm smeared}=\begin{bmatrix}0.95&0.025&0.025\\ 0.025&0.95&0.025\\ 0.025&0.025&0.95\end{bmatrix}\begin{bmatrix}\pi\\ K\\ p\end{bmatrix}_{\rm truth} \tag{2}\]
Here a PID purity of \(95\%\) is specifically chosen, and a few other values are also checked for a complete study.
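The smearing of Eq. (2) can be applied as a simple per-track relabeling, as in the sketch below; the random seed and toy sample are illustrative.

```python
import numpy as np

# Sketch of the PID smearing of Eq. (2): each true pi/K/p is re-labelled
# according to the confusion matrix, with 95% probability of keeping the
# correct species. The matrix is symmetric, so rows and columns coincide.
SPECIES = ["pi", "K", "p"]
PID_MATRIX = np.array([[0.95, 0.025, 0.025],
                       [0.025, 0.95, 0.025],
                       [0.025, 0.025, 0.95]])

def smear_pid(true_species, rng=np.random.default_rng(1)):
    i = SPECIES.index(true_species)
    return rng.choice(SPECIES, p=PID_MATRIX[i])

truth = ["p"] * 10000
smeared = [smear_pid(t) for t in truth]
print({s: smeared.count(s) for s in SPECIES})  # roughly 250 / 250 / 9500
```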
## III Lambda reconstruction
Similar to the method used in other experiments with tracking detectors, \(\Lambda/\overline{\Lambda}\) reconstruction in this study is based on the topological structure of the decay channel with a large branching ratio, \(\Lambda\to p\pi^{-}\) and \(\overline{\Lambda}\to\overline{p}\pi^{+}\). Taking \(\Lambda\) as an example, Fig. 5 schematizes the main topological features of its production and decay process in a tracking detector. The blue dot at the bottom-left represents the \(ep\) scattering vertex, named the "primary vertex". \(\Lambda/\overline{\Lambda}\) is emitted from the primary vertex, then moves along the magenta dashed line and decays at the "V0 vertex". The decay products \(p\pi^{-}\) (\(\overline{p}\pi^{+}\)) travel along helical lines with opposite bending directions in the magnetic field.
Reconstruction of \(\Lambda/\overline{\Lambda}\) starts with pairing proton and pion tracks with opposite charge. To select the \(\Lambda/\overline{\Lambda}\) candidates and suppress the random backgrounds, the following selection variables are considered:
(1) The distance of closest approach (DCA) of proton and pion tracks to the primary vertex. As indicated in Fig. 5, DCA\({}_{p}\) and DCA\({}_{\pi}\) from signals should be significantly larger than those from background, as the parent \(\Lambda/\overline{\Lambda}\) flies a certain distance from the primary vertex before its decay.
(2) The distance of closest approach (DCA) between paired proton tracks and pion tracks. For \(\Lambda/\overline{\Lambda}\) signal, this variable should be consistent with zero within the track space resolution. The decay point (V0 vertex) is given by the middle point of these two tracks at the closest approach, as indicated by the brown triangle in Fig. 5.
(3) The decay length of \(\Lambda/\overline{\Lambda}\) candidates, which is the distance between the primary vertex and the V0 vertex. The characteristic decay length of the \(\Lambda\) hyperon, \(c\tau\), is 7.89 cm [2].
Fig. 3: Upper panels: Feynman-\(x\)\(x_{F}\) vs. pseudorapidity \(\eta\) for \(\Lambda\) (left) and \(\overline{\Lambda}\) (right) in the laboratory frame. Only \(\Lambda/\overline{\Lambda}\) above the red line (\(x_{F}>0\)) are kept. Lower panels: transverse momentum \(p_{T}\) vs. pseudorapidity \(\eta\) for \(\Lambda\) (left) and \(\overline{\Lambda}\) (right) with \(x_{F}>0\) and \(|\eta|<3\) (also \(|\eta|<3\) for daughter proton and pion).
Fig. 2: Momentum (radial) and polar angle (polar) distributions for \(\Lambda\) and its decay products in the laboratory frame.
(4) The angle between \(\Lambda/\overline{\Lambda}\) candidate momentum \(\vec{p}\) and its trajectory \(\vec{r}\) from primary vertex. For the \(\Lambda/\overline{\Lambda}\) directly produced from the primary vertex, its momentum direction is supposed to be along its trajectory from primary vertex. Correspondingly, \(\cos(\vec{r}\cdot\vec{p})\) should be very close to 1.
To quantitatively determine the selection criteria, the distributions of proton-pion pairs from the pure \(\Lambda/\overline{\Lambda}\) sample are compared with those of proton-pion pairs from backgrounds, and the comparisons are shown in Fig. 6. Based on the comparisons, a set of selection criteria is optimized to balance the background fraction and the \(\Lambda/\overline{\Lambda}\) reconstruction efficiency, in order to keep as many signals as possible while keeping the background fraction at a relatively low level. The numerical cut conditions are listed in Tab. 1.
By implementing the aforementioned selection criteria, we successfully obtained a clean sample of \(\Lambda/\overline{\Lambda}\) candidates. The invariant mass spectrum of \(\Lambda/\overline{\Lambda}\) candidates with kinematic cuts of \(x_{F}>0\), \(|\eta|<3\), and \(z_{\Lambda}>0.1\) (the fractional momentum of \(\Lambda/\overline{\Lambda}\) is defined as \(z_{\Lambda}\equiv\frac{P\cdot p_{\Lambda}}{P\cdot q}\)) is shown in Fig. 7. Here the shown histograms are scaled to an integrated luminosity of 5 fb\({}^{-1}\), which corresponds to about 1 month of EicC data taking. With all selection criteria applied, more \(\Lambda\) than \(\overline{\Lambda}\) are reconstructed due to the baryon number enhancement. There is a clean Gaussian signal peak with very limited background. The residual background mainly comes from random combinations of oppositely charged particles and particle mis-identification. The invariant mass distribution of this background is expected to be linear. With the typical side-band method, the residual background fraction is estimated to be about \(2.6\%\) for \(\Lambda\) and \(3.0\%\) for \(\overline{\Lambda}\). The signal mass window is set to be within the \(3\sigma\) width of the Gaussian fit, which is \((1.106,1.124)\) GeV/c\({}^{2}\). The side bands are placed far enough from the mass window to avoid contamination from the signal, but not so far away that they no longer describe the background under the signal peak. The left side band is \((1.083,1.093)\) GeV/c\({}^{2}\) and the right side band is \((1.137,1.147)\) GeV/c\({}^{2}\). The background under the signal peak is estimated as the sum of the two side bands normalized to the signal window. As mentioned in section II, the sensitivity of the \(\Lambda/\overline{\Lambda}\) reconstruction to the PID performance is assessed by varying the PID "purity" number. For \(100\%\) PID purity, the residual background fraction is \(1.7\%\), while for a \(90\%\) case, the residual background fraction increases to \(3.4\%\), which is still under good control.
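For illustration, the sketch below computes the \(p\pi\) invariant mass from the daughter momenta and estimates the residual background fraction with the side-band method; the signal and side-band windows follow the values quoted above, while the toy momenta and toy mass sample are placeholders.

```python
import numpy as np

M_P, M_PI = 0.938272, 0.139570  # GeV/c^2

# Sketch of the V0 invariant mass and the side-band background estimate.
def inv_mass(p_proton, p_pion):
    e = (np.sqrt(M_P**2 + np.dot(p_proton, p_proton)) +
         np.sqrt(M_PI**2 + np.dot(p_pion, p_pion)))
    p = np.add(p_proton, p_pion)
    return np.sqrt(e**2 - np.dot(p, p))

SIGNAL = (1.106, 1.124)                       # signal window from the text
SIDE_L, SIDE_R = (1.083, 1.093), (1.137, 1.147)  # side bands from the text

def background_fraction(masses):
    masses = np.asarray(masses)
    n_sig = np.sum((masses > SIGNAL[0]) & (masses < SIGNAL[1]))
    n_side = (np.sum((masses > SIDE_L[0]) & (masses < SIDE_L[1])) +
              np.sum((masses > SIDE_R[0]) & (masses < SIDE_R[1])))
    scale = (SIGNAL[1] - SIGNAL[0]) / ((SIDE_L[1] - SIDE_L[0]) + (SIDE_R[1] - SIDE_R[0]))
    return n_side * scale / max(n_sig, 1)

print(inv_mass([0.2, 0.1, 1.5], [-0.05, 0.02, 0.3]))  # pair mass for toy momenta

rng = np.random.default_rng(2)
toy_masses = np.concatenate([rng.normal(1.1157, 0.003, 9700),
                             rng.uniform(1.08, 1.15, 300)])
print(round(background_fraction(toy_masses), 4))
```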
Figure 8 shows the \(\Lambda\) and \(\overline{\Lambda}\) reconstruction efficiency versus transverse momentum. The reconstruction efficiency involves several effects, including the topological cuts, the detector acceptance, and the tracking efficiency, which depends on track \(p_{T}\), track \(\eta\), etc. For \(\Lambda/\overline{\Lambda}\) with large decay length, the number of tracking detector layers the daughter tracks pass through decreases and so does the tracking efficiency. The efficiency at very low \(p_{T}\) is limited by the detector acceptance due to the magnetic field. Due to the low transverse momenta in the forward region (large \(|\eta|\)), the efficiency decreases significantly there. As already shown in Fig. 3, more \(\Lambda\) than \(\overline{\Lambda}\) are produced at large pseudorapidity, where the reconstruction efficiency is low, which leads to a significantly higher efficiency for \(\overline{\Lambda}\) than for \(\Lambda\) at \(p_{T}>0.5\) GeV/c. When \(p_{T}\) is larger than 2 GeV/c, the efficiency for \(\Lambda\) reconstruction increases and approaches that of \(\overline{\Lambda}\), since \(\Lambda\) production at middle rapidity starts to dominate.
## IV Spontaneous transverse polarization
In this section, we take \(\Lambda/\overline{\Lambda}\) spontaneous transverse polarization as an example to explore the physics potentials of EicC. The theoretical calculation and statistical projection based on our simulation results are described in the following.
The QCD formalism is used in describing \(\Lambda\) spontaneous transverse polarization \(P_{\Lambda}\) in the semi-inclusive DIS process, \(e^{-}(l)+p(P)\to e^{-}(l^{\prime})+\Lambda(p_{\Lambda},{\bf S}_{\Lambda\perp})+X\). The Trento convention [36] is followed in the calculation, where the virtual photon moves in the positive \(z\) direction and the proton moves in the negative \(z\) direction, and the differential cross section can be written as [37; 38],
\[\frac{d\sigma({\bf S}_{\Lambda\perp})}{dx_{B}dydz_{\Lambda}d^{2}{\bf p}_{\Lambda\perp}}=\frac{4\pi\alpha_{em}^{2}}{yQ^{2}}\frac{y^{2}}{2(1-\epsilon)}\left(1+\frac{\gamma^{2}}{2x_{B}}\right)\left\{F_{UU}\right. \tag{3}\] \[+\left.|{\bf S}_{\Lambda\perp}|\sin(\phi_{S_{\Lambda}}-\phi_{\Lambda})F_{UT}^{\sin(\phi_{S_{\Lambda}}-\phi_{\Lambda})}+\cdots\right\},\]
where \(\gamma=2x_{B}M/Q\) and \(Q^{2}=-q^{2}\), \(x_{B}=\frac{Q^{2}}{2P\cdot q}\), \(y=\frac{P\cdot q}{P\cdot l}\), \(z_{\Lambda}=\frac{P\cdot p_{\Lambda}}{P\cdot q}\) are Lorentz invariant variables, and \({\bf S}_{\Lambda\perp}\), \({\bf p}_{\Lambda\perp}\) are the transverse spin vector and transverse momentum of the \(\Lambda\) hyperon, respectively. \(F_{AB}=F_{AB}(x_{B},z_{\Lambda},{\bf p}_{\Lambda\perp},Q)\), where the subscripts indicate the polarization of proton and \(\Lambda\), respectively. \(F_{UU}\) is the spin-averaged structure function, and \(F_{UT}^{\sin(\phi_{S_{\Lambda}}-\phi_{\Lambda})}\) is the spin-dependent term that contributes to
\begin{table}
\begin{tabular}{c c} \hline Variables & Cut condition \\ \hline DCA\({}_{p}\) & \(>0.1\) mm \\ DCA\({}_{\pi}\) & \(>0.5\) mm \\ DCA of \(p\)\(\pi\) pair & \(<0.8\) mm \\ Decay length & \(>1.5\) mm \\ \(\cos\left(\vec{r}\cdot\vec{p}\right)\) & \(>0.95\) \\ \hline \end{tabular}
\end{table}
Table 1: The summary of topological criteria for \(\Lambda/\overline{\Lambda}\) reconstruction.
Figure 5: Topology schematic diagram of \(\Lambda\) production and its decay process through \(\Lambda\to p\pi^{-}\).
the spontaneous transverse polarization. The experimentally measured polarization \(P_{\Lambda}\) is related to the structure functions as follows:
\[P_{\Lambda}=\frac{F_{UT}^{\sin(\phi_{S_{\Lambda}}-\phi_{\Lambda})}}{F_{UU}}. \tag{4}\]
Within the usual transverse momentum dependent (TMD) factorization, at leading twist the structure functions can be written as,
\[\begin{split} F_{UU}&=\int d^{2}\mathbf{p}_{\perp}d^{2}\mathbf{k}_{\perp}\delta^{2}(z_{\Lambda}\mathbf{p}_{\perp}+\mathbf{k}_{\perp}-\mathbf{p}_{\Lambda\perp})\\ &\qquad\times\sum_{q}e_{q}^{2}f_{1q}(x_{B},\mathbf{p}_{\perp}^{2},Q)D_{1q}^{\Lambda}(z_{\Lambda},\mathbf{k}_{\perp}^{2},Q),\\ F_{UT}^{\sin(\phi_{S_{\Lambda}}-\phi_{\Lambda})}&=\int d^{2}\mathbf{p}_{\perp}d^{2}\mathbf{k}_{\perp}\delta^{2}(z_{\Lambda}\mathbf{p}_{\perp}+\mathbf{k}_{\perp}-\mathbf{p}_{\Lambda\perp})\\ &\qquad\times\sum_{q}e_{q}^{2}\frac{\hat{\mathbf{p}}_{\Lambda\perp}\cdot\mathbf{k}_{\perp}}{z_{\Lambda}M_{\Lambda}}f_{1q}(x_{B},\mathbf{p}_{\perp}^{2},Q)D_{1Tq}^{\perp\Lambda}(z_{\Lambda},\mathbf{k}_{\perp}^{2},Q),\end{split} \tag{5}\]
where \(\mathbf{p}_{\perp}\) and \(\mathbf{k}_{\perp}\) denote the transverse momentum of the quark relative to the initial proton and the transverse momentum of \(\Lambda\) relative to its parent quark, respectively.
Fig. 8: Reconstruction efficiency of \(\Lambda\) and \(\overline{\Lambda}\) as a function of \(p_{T}\) after all selection criteria applied.
Fig. 6: Distributions of topological variables for \(\Lambda/\overline{\Lambda}\) signal (red) and background (blue) respectively.
Fig. 7: Invariant mass distributions of \(\Lambda\) and \(\overline{\Lambda}\) candidates passing all selection criteria.
We parameterize the TMDs using the usual Gaussian form, as the product of collinear functions and Gaussian widths:
\[f_{1q}(x_{B},\mathbf{p}_{\perp}^{2};Q)=f_{1q}(x_{B},Q)\frac{e^{-\mathbf{p} _{\perp}^{2}/\left<p_{\perp}^{2}\right>}}{\pi\left<p_{\perp}^{2}\right>}, \tag{6}\] \[D_{1q}^{\Lambda}(z_{\Lambda},\mathbf{k}_{\perp}^{2};Q)=D_{1q}^{\Lambda }(z_{\Lambda},Q)\frac{e^{-\mathbf{k}_{\perp}^{2}/\left<k_{\perp}^{2}\right>}}{\pi \left<k_{\perp}^{2}\right>},\] \[D_{1Tq}^{\perp\Lambda}(z_{\Lambda},\mathbf{k}_{\perp}^{2};Q)=D_{1Tq} ^{\perp\Lambda}(z_{\Lambda},Q)\frac{e^{-\mathbf{k}_{\perp}^{2}/\left<M_{D}^{2} \right>}}{\pi\left<M_{D}^{2}\right>},\]
where \(\left<p_{\perp}^{2}\right>=0.61\), \(\left<k_{\perp}^{2}\right>=0.19\), and \(\left<M_{D}^{2}\right>=0.118\) are the corresponding Gaussian widths from Refs. [39, 40]. In this analysis, we use the CT18NLO [41] parametrization for the collinear PDF, while using the DSV [42] and AKK08 [43] parametrizations for the collinear unpolarized FF. Both parametrizations describe experimental data but differ significantly, with AKK08 including substantial isospin symmetry violations and the DSV parametrization conserving it. Additionally, the universality of the polarizing FF \(D_{1Tq}^{\perp\Lambda}\) has been proven [44, 45, 46]. Similarly, \(D_{1Tq}^{\perp\Lambda}\), as a modulation of \(D_{1q}^{\Lambda}\) by an additional collinear function, is also described by two different parametrizations, i.e., CLPSW [47] considering isospin symmetry and CKT allowing isospin symmetry violations [40]. The future EicC experiment provides an ideal place to test the isospin symmetry of the \(\Lambda\) FFs. In our study, we employ these different parametrizations to calculate the polarization observables and perform comparisons.
Using these parametrizations for the TMDs in Eq.(6), the spontaneous transverse polarization of \(\Lambda\) in Eq.(4) has the analytic form:
\[\begin{split} P_{\Lambda}(x_{B},z_{\Lambda},\mathbf{p}_{\Lambda\perp},Q)&=\frac{\sum_{q}e_{q}^{2}f_{1q}(x_{B},Q)D_{1Tq}^{\perp\Lambda}(z_{\Lambda},Q)}{\sum_{q}e_{q}^{2}f_{1q}(x_{B},Q)D_{1q}^{\Lambda}(z_{\Lambda},Q)}\frac{\left<k_{\perp}^{2}\right>+z_{\Lambda}^{2}\left<p_{\perp}^{2}\right>}{\left<M_{D}^{2}\right>+z_{\Lambda}^{2}\left<p_{\perp}^{2}\right>}\\ &\quad\times\frac{\left<M_{D}^{2}\right>\mathbf{p}_{\Lambda\perp}}{z_{\Lambda}M_{\Lambda}}\,e^{\left\{\frac{\mathbf{p}_{\Lambda\perp}^{2}}{\left<k_{\perp}^{2}\right>+z_{\Lambda}^{2}\left<p_{\perp}^{2}\right>}-\frac{\mathbf{p}_{\Lambda\perp}^{2}}{\left<M_{D}^{2}\right>+z_{\Lambda}^{2}\left<p_{\perp}^{2}\right>}\right\}}.\end{split} \tag{7}\]
With this expression, we can estimate the magnitude of \(P_{\Lambda}\) in SIDIS. Taking \(Q^{2}=5\,\mathrm{GeV}^{2}\), we plot the \(P_{\Lambda}\) as a function of \(\mathbf{p}_{\Lambda\perp}\) in Fig. 9. The results are obtained for different values covered by the kinematic range of the future EicC. To get \(P_{\Lambda}\) dependence on the Feynman variable \(x_{F}\), we parameterize \(x_{F}\) as a function of Lorentz invariant variables \((x_{B},z_{\Lambda},Q)\) through a kinematic transformation:
\[x_{F}=\frac{-z_{\Lambda}Q^{2}}{M[x_{B}M^{2}+(1-x_{B})Q^{2}]} \left[\sqrt{Q^{2}+\frac{Q^{4}}{4x_{B}^{2}M^{2}}}\right. \tag{8}\] \[\left.+(M+\frac{Q^{2}}{2x_{B}M})\sqrt{\frac{4x_{B}^{2}M^{2}(M_{ \Lambda}^{2}+\mathbf{p}_{\Lambda\perp}^{2})}{z_{\Lambda}^{2}Q^{4}}-1}\right].\]
Using Eqs. (7) and (8) and integrating over \(\mathbf{p}_{\Lambda\perp}\), we plot the \(x_{F}\)- and \(z_{\Lambda}\)-dependent \(P_{\Lambda}\) in Fig. 9.
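For orientation, the following minimal sketch transcribes the analytic form of Eq. (7) into code, using the Gaussian widths quoted above; the collinear ratio of PDF and FF combinations is passed in as an assumed input (in practice it would be built from the CT18NLO, DSV/AKK08, CLPSW, or CKT parametrizations), and the function name and example numbers are purely illustrative.

```python
import math

# Gaussian widths quoted in the text.
P_PERP2, K_PERP2, M_D2 = 0.61, 0.19, 0.118
M_LAMBDA = 1.11568   # Lambda mass in GeV/c^2

def polarization(p_lambda_perp, z, collinear_ratio):
    """Sketch of Eq. (7). `collinear_ratio` stands for the x- and z-dependent
    ratio sum_q e_q^2 f_1q D_1Tq^perp / sum_q e_q^2 f_1q D_1q, which in practice
    comes from a collinear PDF and FF parametrization (an assumed input here)."""
    w_unpol = K_PERP2 + z**2 * P_PERP2   # unpolarized transverse width
    w_pol = M_D2 + z**2 * P_PERP2        # polarized transverse width
    prefactor = (w_unpol / w_pol) * M_D2 * p_lambda_perp / (z * M_LAMBDA)
    shape = math.exp(p_lambda_perp**2 / w_unpol - p_lambda_perp**2 / w_pol)
    return collinear_ratio * prefactor * shape

# Example call with an arbitrary collinear ratio of 0.05:
print(polarization(p_lambda_perp=0.8, z=0.4, collinear_ratio=0.05))
```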
The statistical projection of the \(\Lambda/\overline{\Lambda}\) polarization is based on an integrated luminosity of 5 fb\({}^{-1}\), corresponding to the same data sample size as shown in Fig. 7. The statistical uncertainties follow \(\delta P\approx\frac{1}{\alpha_{\Lambda}\sqrt{N/3}}\), based on the polarization extraction procedure. The projected \(\Lambda\) and \(\overline{\Lambda}\) precision versus \(p_{\Lambda\perp}\), \(x_{F}\), and \(z_{\Lambda}\) is also shown in Fig. 9 together with the theoretical predictions. The error bars are smaller than the marker sizes and thus invisible. Depending on the statistics in different bins, the errors range from 0.002 to 0.007.
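A small sketch of the statistical projection is given below; the decay parameter value is an assumption used only for illustration, and \(N\) stands for the number of reconstructed candidates in a given kinematic bin.

```python
import math

ALPHA_LAMBDA = 0.732   # Lambda decay parameter; illustrative value, not from the text

def delta_p(n_candidates, alpha=ALPHA_LAMBDA):
    """Projected statistical uncertainty following the delta-P expression above."""
    return 1.0 / (alpha * math.sqrt(n_candidates / 3.0))

print(delta_p(1_000_000))   # ~0.0024 for a million reconstructed candidates
```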
Fig. 9: The statistical projection with theoretical predictions for \(\Lambda\) and \(\overline{\Lambda}\) polarization in \(ep\) collisions at EicC. The projected statistical errors are smaller than the marker sizes and thus invisible.
## V Summary and outlook
EicC is the proposed next-generation nuclear physics facility, which is expected to provide unique opportunities for precisely studying the 3-dimensional nucleon structure, the nuclear partonic structure, the exotic hadron states, _etc_. The Lambda hyperon, serving as a natural final-state polarimeter, is a powerful tool for studying the nucleon spin structure and spin effects in the fragmentation process. Lambda measurements at EicC are therefore of special importance and interest.
Based on a conceptual design of the EicC tracking system and a GEANT4 simulation, we performed a detailed study of Lambda production and reconstruction. Taking the spontaneous transverse polarization as an example, theoretical predictions are given as functions of different kinematic variables, together with statistical projections for one month of data taking at EicC. We find that measurements with EicC data taken in only one month of running, based on the current accelerator design, could provide constraints capable of distinguishing between different parameterizations of the fragmentation functions.
EicC is designed to have both beams polarized, and the Lambda polarization transferred either from the lepton or from the proton beam could provide important constraints on the spin-dependent PDFs and FFs, in both the collinear and the transverse momentum dependent frameworks. In future work, more observables will be studied. Possible improvements also include accounting for decay contributions from heavier particles, more realistic PID, _etc_.
## Acknowledgement
We thank Tianbo Liu for the valuable discussion on the theoretical calculations. We thank the EicC tracking and heavy flavor working groups for the technical supports on detector simulation and useful suggestions on the analyses.
|
2301.10333 | Lost in Algorithms | Algorithms are becoming more capable, and with that comes hic sunt dracones
(here be dragons). The term symbolizes areas beyond our known maps. We use this
term since we are stepping into an exciting, potentially dangerous, and unknown
area with algorithms. Our curiosity to understand the natural world drives our
search for new methods. For this reason, it is crucial to explore this subject.
The project's objective is to overlay the information obtained, in
conjunction with the state of hardware today, to see if we can determine the
likely directions for future algorithms'. Even though we slightly cover
non-classical computing in this paper, our primary focus is on classical
computing (i.e., digital computers). It is worth noting that non-classical
quantum computing requires classical computers to operate; they are not
mutually exclusive. | Andrew N. Sloss | 2023-01-02T16:09:05Z | http://arxiv.org/abs/2301.10333v1 | # Lost in Algorithms
###### Abstract.
Algorithms are becoming more capable, and with that comes _hic sunt dracones_ ("_here be dragons_"). The term symbolizes areas beyond our known maps. We use this term since we are stepping into an exciting, potentially dangerous, and unknown area with algorithms. Our curiosity to understand the natural world drives our search for new methods. For this reason, it is crucial to explore this subject.
In this document, we look for future algorithms and styles. Each era in computing has had an algorithm focus. Examples include periods when military range prediction was necessary, _weather prediction_, and, more recently, _machine learning_. The 1940s saw the starting point when electronic machines replaced humans (Brockman et al., 2017). Procedures became too complex to be handled by people. This time and other historical periods have accelerated additional specific algorithm development and the associated hardware architectures. The question for this paper is, _what next_? As we explore this question, we will introduce a set of practical terms to help classify the various algorithms and hopefully provide a straightforward method of understanding. At the highest level, algorithms are recipes for solving problems. Problems range from the small & simple to the large & complex.
This project covers the behavior of algorithms and the hardware styles to execute those algorithms. In other words, we will cover _what the algorithms do rather than how they do it_. The world of algorithms is a large and complicated subject. To assist in the process, we will separate the world into three main areas: _computer science, artificial intelligence_, and finally _quantum computing_.
The project's objective is to overlay the information obtained, in conjunction with the state of hardware today, to see if we can determine the likely directions for future algorithms'. Even though we slightly cover non-classical computing in this paper, our primary focus is on classical computing (i.e., digital computers). It is worth noting that non-classical quantum computing requires classical computers to operate; they are not _mutually exclusive_.
algorithms, theory, mathematics, software, hardware
## 1. Introduction
Algorithms are critical for industry, research, and ideas, and they have an increasing influence on society. We can say every part of our lives involves some form of algorithm. It is a ubiquitous tool for problem-solving. To run an algorithm, we have many machines: mechanical, biological, analog, digital, quantum, and even human-social. We use them to optimize (e.g., repetition), explore (e.g., search), and even predict (e.g., models). The subject attracts
Figure 1. Algorithm-Environment
some of the brightest and most innovative people wanting to discover better solutions. These people are always at the cutting edge, striving to do the next complicated task.
Figure 1 shows how we will approach the subject. An algorithm or recipe has inputs and outputs and lives inside an environment.
The inputs and outputs come as part of the executing environment, e.g., a digital computer. We will look at algorithms from a behavioral viewpoint; in other words, we focus on _what the algorithms do_ and less on _how they do it_. It is essential to understand the distinction; we are not explaining the algorithms themselves but how they affect, or take from, the environment. Because of the vast nature of the field, the focus is mainly on digital computing to help prune the domain.
We have hopefully explained our approach, and the next stage is to show the relationship between hardware and software. The Venn diagram shown in Figure 2 attempts to establish the relationship between hardware, software, and algorithms. We are separating the worlds, so it is easier to see the connections. Each world has its styles of thought and process. For instance, algorithms + software gives us applications, and hardware + software + algorithms give us an end product. A complete solution is about all three worlds aligning together.
_What came first, hardware or algorithms?_ This is a _chicken or egg_ question. Algorithms are probably first, with the role of hardware being initially human and algorithms being mathematical equations. Historically, hardware has had to catch up with algorithms. This catch-up is why sizable effort is applied to optimize a new algorithm for existing hardware. With one exception, hardware jumps in capability every so often, forcing algorithms to catch up, e.g., quantum hardware. Hardware provides the enabling canvass for the exploration and optimization of algorithms.
An algorithm describes a method to utilize software and hardware for problem-solving. There is usually an _objective_, but not always. A problem belongs to a _problem domain_. A problem domain is a search space. For example, if a company has a problem with obtaining electronic components, the problem domain would probably include suppliers, the ordering process, and the manufacturing department. The algorithm should search those areas to determine a solution. The environment consists of the execution machine, data input, and a place for the final output decision. The execution machine is any computation system. Data input comes from the environment as filtered or noisy real-world data. Lastly, output decisions can be anything from classification to an action that makes an environmental change
As computer scientists, we have historically relied on famous texts. For instance, _Robert Sedgewick et al._ book on _Algorithms_(Sedgewick et al., 2010), or _Donald Knuth_ book series on _The Art of Computer Programming_(Sedgewick et al., 2010) to provide libraries of known solutions. The knowledge includes such algorithms as _recursive tree structure walking_ or the _shortest path_ between two nodes on a graph. These libraries were the result of decades of experimentation. If we fast forward to today, we see _statistical_ and _probabilistic_ algorithms becoming ever more popular. These algorithms are less concerned with precision (i.e., absolutes) and more concerned with _good enough_ (i.e., levels-of-certainty). This trend does not mean the older algorithms are any less important, but currently, they are not at the cutting edge.
Mathematics allows us to express complex processes or prove correctness. It is probably our most significant accomplishment. The language of mathematics underpins the world of algorithms, and it is how we formally describe recipes. We will start this journey by looking first at the general objectives, i.e., _what should algorithms explore_?.
### General objectives
Algorithms optimize, summarize and discover the world around us. These pursuits are related to us humans, either to augment or to move beyond our capabilities. We separate objectives into three distinct areas:
* _What we know_ Using historical knowledge and skills
* _What we think we want to know_ Using investigative methods of exploration
* _What we can't imagine_ Going beyond human perception and capability
In the beginning, we described an algorithm as a recipe. The recipe is a proven sequence of operations that find an answer or execute computation. Historically, algorithms were optimizations of what we knew --replicating human procedures. The early hardware had significant constraints, limiting what algorithms existed. These early solutions, even though primitive by today's standards, could operate 24 hours a day, seven days a week (provided the thermionic valves did not burn out). The hardware was a digital replacement. As time has progressed, improvements in computation have allowed us to shift from "_what we know_" to "_what we think we want to know_". In other words, the hardware allows us to explore, i.e., find new knowledge. Exploring involves some form of guessing and, by implication, takes time. Guessing allows for mistakes. Lastly, this brings us to the third objective "_what we can't imagine_", these are the algorithms at the edge of a discovery that cannot necessarily
Figure 2. Relationship between Hardware-Software-Algorithms
be 100% explained or show a reasoned causal path. Their forms and structures are still in flux.
_Certainty_ requires some form of confirmation. We used the word _proven_, in an earlier paragraph, as an aside. A mathematical proof is used for validation and differs from an algorithm, in that what is computable and what can be proven may be different. We have an arsenal of automated mechanisms to help verify algorithms. These include formal methods and the modern trend to explore complexity hierarchies with various forms of computation. It is worth mentioning that _Kurt Gödel_, in 1931, presented the infamous _Incompleteness theorem_, showing that not all algorithms can be proven [23]. Another critical example is _David Hilbert_'s _Halting problem_ [43], where _Alan Turing_ proved that it is impossible to determine whether an algorithm will stop (or not) given an arbitrary program and inputs.
As the objectives become more abstract, _uncertainty_ increases. We can divide uncertainty into two ideas. There is uncertainty due to the complexity of nature, and there is uncertainty due to our lack of knowledge. Both play a critical role as we explore our environment. The first idea is called _ontological uncertainty_ (e.g., associated with biology and quantum mechanics), and the second idea is called _epistemological uncertainty_ (e.g., we do not know the precise number of people who are left-handed) [33].
_"What we know"_, _what we think we want to know_", and "_what we can't imagine_" are the three high-level objectives; we next look at the aspirations. _What should we consider as ideal attributes for a good algorithm?_
### Ideals
There are many ways to think about algorithms. We can look at demands that an algorithm has to satisfy (e.g., best voice compression algorithm or highest security level for buying online). The approach we have decided to adopt is to look at the attributes we want algorithms to have, the set of ideals. _Platonic idealism_ is the contemplation of ideal forms. Or, more realistically, a subset of ideal forms. An ideal involves attempting to find an algorithm without sacrificing other essential attributes. These attributes include being efficient with time (the _temporal_ dimension) and using appropriate resources (the _spatial_ dimension). Resources include physical storage, communication, and computation.
We should make it clear this is not about what is possible with today's technology or even in the future but what we want algorithms to achieve, i.e., our expectations. The following list is our first attempt:
1. _Perfect solution_ is an obvious first ideal. A good outcome is to have several solutions, each providing a different path and varying levels of precision & accuracy. Human biases, such as symmetry, are removed from the outcome unless there is a requirement for a human-biased result, i.e., a decision is made between impartial (ethical) or partial (practical) solutions [29]. Finally, the algorithm maps directly onto available hardware per the spatial dimension.
2. **Confidence through consistency**, we want consistency; an algorithm creates confidence by providing reliable results.
3. _Self-selection of the objective_, one of the essential activities humans undertake is determining the purpose. For an ideal algorithm, we want the goal or sub-goals set by the algorithm or offered as a set of options, i.e., negotiation.
4. _Automatic problem decomposition_, we want the algorithm to break down a problem into testable modules. The breakdown occurs automatically. This process is essential if we want to handle more significant issues, i.e., more extensive problems.
5. _Replication when required_, specific problems lend themselves towards parallel processing. For these classes of problems, we want the algorithm to self-replicate. The replication allows solutions to scale automatically; some problems require scale. As much as possible, the algorithms should also work out how to scale linearly for a solution to be ideal.
Figure 3. Ideal algorithm
6. _Handling the known and unknown_, we want an algorithm to handle problems that are either known (with related solutions) or entirely unknown (where exploration occurs). An unknown answer, once found, transfers to the known. It learns.
7. _No preconditions on input data_, from an ideal perspective, we want to remove the format strictness imposed on the input data. Data acts as an interface for algorithm negotiations. Analyzing the data means the data format is deducible, removing the requirement of a fixed interface. Note that Machine Learning has different criteria that are more to do with the quality of the input data.
8. _Self-aware_, ideally, we want an algorithm to be aware of the implications of a decision. This implication is especially true regarding safety-critical problems where a decision could have more consequences and, more generally, the emotional or moral side of a decision. It weighs the effect of the outcome.
9. **Secure and private**, we want an algorithm to handle data so that it is secure and, if human information is concerned, provides privacy.
10. _Being adaptive_, an ideal algorithm can change as the environment changes and continuously learns from new knowledge. Knowledge comes from experimenting with the environment and subsequently improves the algorithm.
11. _Causal chain_, where applicable, we want the causal chain that produced the result. We want to understand _why_.
12. _Total knowledge_, an ideal algorithm has all the necessary historical knowledge for a particular area. The algorithm does not follow information blindly but has all the knowledge about a specific subject. New knowledge can be identified as an emergent property if an unknown pattern appears. The ideal algorithm becomes an _encyclopedia_ on a particular subject. Maybe different weights are placed on the knowledge that is correct or contradictory. Maturity means the topic under study is wholly understood and has well-defined boundaries.
13. _Explainable_, we want results explained in human-understandable terms. As the problem becomes more complicated, so do the answers. We want the algorithms to explain the answer and, potentially, the context.
14. _Continuous learning and improvement_, as alluded to in previous ideals, we want the algorithm to continue to learn and to continue attempts to improve the techniques to produce a better, faster, less resource-draining solution.
15. _Cooperative or competitive_, the ideal algorithm works in a multi-agent environment, where agents are assistants for, or detractors against, a zero-sum scenario [35]. The ideal looks for an alliance with other agents. If an alliance is not possible, it goes out on its own to solve the problem (Nash equilibrium [32]). In other words, we want an algorithm to have _Game Theory_ skills.
16. _Law-abiding_, we need an algorithm to be a law-abiding citizen. It works within the confines of legal law (not scientific laws). This confinement is essential for algorithms involved in safety-critical or financial activities.
17. _Nice and forgiving_, is more of a human constraint. We want algorithms to take the most society-friendly approach in a multi-agent environment; if harm occurs to the algorithm, then a counter-reaction could be implemented. Within reason, an algorithm can forget any malicious act. This reaction is essential when only partial information is available [35]. We can argue whether this is a constraint or an ideal, but we want algorithms to have some human-like tendencies (e.g., compassion over revenge).
As pointed out, these are first-pass ideals. We are certain that some ideals are missing or need modification. Even though algorithms are likely to be unique, the ideal characteristics define the algorithms' boundaries (or extremes). These limits provide the universal goals for an algorithm.
We now move the journey to the problem domains. At this point, we have discussed the importance of algorithms and what the high-level ideals should be. Hopefully, these ideals indicate why we are potentially entering unknown territory.
### Computation
Problems make an algorithm attractive, from the challenge of chasing a solution to the actual application. A solution is a map of a complex world, making it understandable. The map comes from a boundless library of ideas, e.g., _The Library of Babel_ concept [(6)]. Each room in the infinite library includes varying truths and falsities.
As mentioned, problem domains determine the search space of possibilities, ranging from simple to complex and small to large. A simple mathematical problem domain tends to have a simple solution; likewise, a chaotic problem domain leans towards a complicated answer. There is an underlying belief and a hope that a problem domain is reducible [(28)]. For example, _Sir Isaac Newton_ created the _Laws of Motion_[(31)] that reduces the complexity of movement to a set of rules. By contrast, we have failed to reduce gravitation, electromagnetism, weak nuclear, and strong nuclear forces to an agreed-upon single solution, i.e., a _Grand Unified Theory (GUT)_. We have controversial ideas, such as _String Theory_, but no provable solutions [(21)]. There is a possibility and hope a future algorithm will eventually solve this problem.
Algorithms rely on reducibility. The likelihood of finding a reducible form is dependent on the complexity level. How we reduce a problem also depends on the _spatial_ and _temporal_ constraints. A good example of an external constraint is the timing required for a successful commercial product. Missing the timing window means a potential loss of revenue.
A solution to a problem domain involves a combination, and sequence, of _searching_, _ordering_, and _compression_, see Figure 4. The
Figure 4. Searching, ordering, and compression
searching function involves looking for a solution within the problem domain. Searching is about discovering _knowledge_. The sorting function organizes the problem domain in a logical order, i.e., _information_. Finally, the compression function converts the elements to a new form, i.e., _meta-data_. These three functions find solutions to problems.
There are three _embarrassing_ worlds. We have gone over the goal of reducing complexity and the functions that map complexity to some form of reasoning; now, we look at the types of problem domains. We start with the first, _embarrassingly parallel_ problems; these are problems that map perfectly onto parallel solutions. These problem types do exist but are relatively rare. Performance is proportional to the available parallel machines, i.e., the more parallel machines available, the faster the processing. These improvements hold as long as the serial sections are minimal (i.e., taking into account _Amdahl's law_: parallel performance becomes limited by the serial parts (Bartos et al., 2015)).
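As a quick illustration of Amdahl's law mentioned above, a minimal sketch follows; the function name and the example numbers are illustrative.

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only `parallel_fraction` of the
    work can be spread across `n_processors`; the serial remainder caps it."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 1024 processors, a 5% serial part limits the speedup to ~20x.
print(amdahl_speedup(0.95, 1024))   # ~19.6
```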
The second embarrassing style is sequential data, i.e., _embarrassingly sequential_. Embarrassingly sequential data follows a strict structure that can be mechanically optimized. We can design efficient computation to work best on embarrassingly sequential data. These problem spaces map efficiently onto software-hardware systems. They are less chaotic and random.
As we move to real-world situations, the domain types require complicated synchronizations (a mixture of parallel and serial components) and deal with unstructured data. The problems end up being _embarrassingly unhelpful_. Bringing the focus back to hardware efficiency, the former (i.e., embarrassingly parallel and sequential) leans towards _specialization_. The latter (i.e., embarrassingly unhelpful) leans towards _general-purpose_ machines. General-purpose machines move towards the _Principle of Universality_, as in being capable of handling all problems. General-purpose machines are better for embarrassingly unhelpful problems since they reduce complexity using less specialized operations. The embarrassing aspects of data drive computation design.
_What is computation?_ It is the act of running an algorithmic recipe on a machine. A computation process takes input data and outputs some form of result; see Figure 5. A process can be serially sequenced or run in parallel. Optional auxiliary feedback is taken from the output and placed as input, giving an algorithm the ability to adapt. This characteristic allows for the creation of complex hardware architectures. Conditional control mechanisms (i.e., _if-then-else_) determine the order and flow of the computation either between processes or within the process.
Figure 6 shows a subset of machines, with _combinational logic_ as a foundation for the other levels. In this diagram, we place _Probabilistic Turing Machine_ at the top of the computation stack. This decision will hopefully become self-apparent as we journey further into algorithms.
Energy is a requirement for computation and, as such, has a direct influence on the choice of algorithms. Carrying out any calculation requires a form of energy imbalance (following the _Laws of Thermodynamics_). To achieve energy imbalance in classical computing, we either supply energy directly or harvest the energy from the environment. Once energy is supplied, execution results in the production of heat. Maybe someday we can recycle that heat for further computation. With future machines, _reversibility_ may become an essential requirement to reduce energy and to allow more capable algorithms, i.e., _reversible computing_ (Bartos et al., 2015).
### Input-output relationship
What are the general relationships between the input and output data? Below are some connections from simple to complex, illustrated in the short sketch after the list. Finding any relationship from the data, even a good-enough one, can be highly complicated.
* _Linear relationship_, shows a straightforward relationship between input data and result, example equation \(y=2x\).
* _Exponential relationship_, shows a growth factor between input and output, example equation, \(y=e^{x}\).
Figure 5. Process function
Figure 6. Computation machines
* _Nonlinear relationship_, difficult to determine the relationship, a more complex pattern is emerging, example equation \(x^{2}+y^{2}=42\)
* _Chaotic relationship_, at first sight appears to have no relationship, due to complexity, example equation \(x_{t+1}=kx_{t}(1-x_{t})\)
* _Random/stochastic relationship_, the relationship is truly random, example equation \(y=rand(x)+42\)
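As referenced above the list, here is a toy sketch of these relationships; the constants mirror the example equations, and the chaotic case uses the logistic map with an illustrative value of \(k\).

```python
import math
import random

def linear(x):
    return 2 * x                          # y = 2x

def exponential(x):
    return math.exp(x)                    # y = e^x

def chaotic(x0, k=3.9, steps=50):
    # Logistic map x_{t+1} = k * x_t * (1 - x_t): deterministic but
    # extremely sensitive to the starting value x0.
    x = x0
    for _ in range(steps):
        x = k * x * (1 - x)
    return x

def stochastic(_x):
    return random.random() + 42           # y = rand(x) + 42
```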
### Exploitative vs exploratory
Algorithms handle the known or unknown, namely _exploitative_ or _exploratory_. The first relies on knowledge and provides, _hopefully_, a known outcome, for example, following an applied mathematics equation to determine whether a beam is in tension or compression. The second type, exploratory algorithms, explores a problem domain when the exact answer, and maybe even the environment, is unknown or changing; for example, an algorithm learning to fly on a different planet. The alien world will have unknown gravitational or magnetic challenges. The former concept leans towards precision and accuracy, whereas the latter is comfortable with _good-enough_ results, i.e., compromise or palliative.
_What comes first, exploitative or exploratory?_ This is a _cart before the horse_ question. Whether human or machine, exploratory takes place before exploitation. It is part of the learning process because we first have to understand before we can look for a solution. As we move into the future, we will rely more on algorithms to explore and find new solutions. And hence, hardware will need to increase support for more experimental methods. An example of increasing exploratory support is the trend towards efficient hardware for training systems. Efficiencies in both speeds of training and power consumption.
Newer algorithms can take advantage of both techniques, i.e., explore first and then optimize or exploit until further exploration is required. This shift is a form of _simulated annealing_. Annealing is the method of toughening metals using different cooling rates; simulation annealing is the algorithm equivalent. Forward simulated annealing starts by first exploring _global_ points (e.g. random jumps) and then shifts to _local_ points (e.g. simple movements) as the perceived solution becomes more visible.
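A minimal simulated-annealing sketch of this explore-then-exploit shift is given below; the cooling schedule, temperatures, and the toy cost function are illustrative choices, not prescriptions.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t_start=10.0, t_end=1e-3, alpha=0.95):
    """Generic simulated-annealing sketch: large, random jumps are accepted
    freely at high temperature; as the temperature cools, only small local
    improvements (or mild regressions) are kept."""
    x, t = x0, t_start
    best = x
    while t > t_end:
        candidate = neighbor(x, t)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < cost(best):
                best = x
        t *= alpha   # cooling schedule
    return best

# Example: minimize a bumpy 1-D function.
f = lambda x: x * x + 3 * math.sin(5 * x)
step = lambda x, t: x + random.uniform(-1, 1) * t
print(simulated_annealing(f, step, x0=5.0))
```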
**Hardware-software**: the software can play directly with exploitative and exploratory algorithms. By contrast, hardware is dedicated to the exploitative side, i.e., getting the most out of a known algorithm. Hardware is static and fixed. And the software provides the ability to adapt and re-configure dynamically. In the digital context, the software is the nearest we have to adaptive biological systems. Exploitation allows for hardware-software optimizations.
### Where do algorithms come from?
_Are algorithms entirely invented, or are they driven by the problems?_ This is a time-old question with deep philosophical arguments from many sides. If we believe algorithms are invented, anticipating the future could be difficult. For this reason, we have chosen the view that problems define algorithms. To make this even easier, we will state that problems fall under the following four categories: _i. Physics, ii. Evolution, iii. Biology_, and _iv. Nature_. Where algorithmic ideas develop from one or a combination of categories.
1. **Physics** gives us the exploration of thermodynamics, quantum mechanics, and the fabric of the universe. Problems from a planetary scale to the sub-atomic
2. **Evolution**, gives us exciting ways to create future options. Allows algorithms to explore their problem domains, i.e., survival of the fittest, natural selection, crossover, and mutation
3. **Biology**, introduces complex parallel networks, e.g., cell interactions and neuron communications
4. **Nature**, provides us with big system problems, e.g., climate change
As previously pointed out, mathematics is the language to describe or express algorithms. It is not necessarily a vital source of inspiration. We believe that the discovery and understanding of the natural world create algorithms. By observing the natural world, we can play with predicting future directions and possibilities.
### Measuring computational complexity
Many subjects are concerned with complexity. For example, safety-critical systems are susceptible to increases in complexity. Computer science has an area of research labeled _Computational Complexity Theory_ dedicated to the subject. The theory translates complexity into _time to solve_. It is worth mentioning that time taken and energy are closely connected. Our tentative goal is always to remain within an energy boundary. Problems break down into different time relationships as shown in the now infamous Euler diagram (see Figure 7). _Why should we care_? Because there exist problems that are impossible to solve efficiently or are just unsolvable. Providing more engineering time or effort will not culminate in a faster solution in these cases.
* _Polynomial (P) time_, represents computational problems that are solvable in deterministic polynomial time. These problems are relatively straightforward to solve on a _Turing machine_ (see Section 1.3). Low in complexity. The input length determines the time required for an algorithm to produce a solution.
* _Nondeterministic Polynomial (NP) time_, are solvable problems but in nondeterministic polynomial-time. The algorithm can be proven correct using a deterministic Turing machine. Still, the search for the solution uses a nondeterministic Turing machine. The search involves some form of best guess.
* _Nondeterministic Polynomial-complete (NP-complete) time_, similar to NP problems, the verification can occur in quick polynomial time but the solution requires a _brute-force_ algorithm. Brute force means there is no known efficient path to a solution. These problems are the most complicated to solve in the NP set. Also, each problem is reducible to any other in polynomial time. The reducibility allows for simulation.
* _Nondeterministic Polynomial-Hard (NP-hard) time_, covers the truly difficult problems, the hardest NP problems and continues outside the NP scope. It also includes potential problems that may not have an answer.
Suppose we look at complexity through an exploitative and exploratory lens. We see that **P** covers the exploitative algorithms, and generally, **NP** covers the exploratory side, i.e., experimentation.
### Measuring probabilistic complexity
Continuing our journey, it becomes apparent that probability is becoming increasingly important. For this reason, we should try to understand probabilistic complexity, where _good enough_, _averages_, and _certainty_ play a more critical role. Probability is fundamental for artificial intelligence, probabilistic computers (Hornorn et al., 2010), and quantum computing, as described later.
These are problems only solvable by a _probabilistic Turing machine_ (Turing, 1998). This Turing machine operates with a _probability distribution_, which means that a distribution governs the transitions. This setup gives a probabilistic Turing machine its own unique characteristics.
The characteristics are nondeterministic behavior, transition availability governed by a probability distribution, and stochastic (random) results. The stochastic effects require repeated runs before a level of certainty can be established. The behavior means that the same input and algorithm may produce different run times. There is also a potential that the machine will fail to halt, or the inputs are accepted or rejected on the same machine across varying execution runs. This variability is why an average over execution runs is required.
Figure 8 shows an extension to the traditional Euler diagram with probabilistic complexity (Turing, 1998). Rather than covering all the different options, permit us to focus on three especially relevant examples:
* _Bounded-error Probabilistic Polynomial_ (BPP) time, an algorithm that is a member of BPP runs on a classical computer to make arbitrary decisions that run in polynomial time. The probability of an answer being wrong is at most \(\frac{1}{3}\), whether the answer is heads or tails (i.e., a coin flip).
* _Bounded-error Quantum Polynomial_ (BQP) time, a problem that is a member of BQP can be decided by a quantum algorithm running on a quantum computer that is guaranteed to run in polynomial time. As with BPP, a run of the algorithm will correctly solve the decision problem with a probability of at least \(\frac{2}{3}\).
* _Probabilistic Turing machine in Polynomial time_ (PP) time, is simply an algorithm where the probability of error is less than \(\frac{1}{2}\) for all instances.
Finally, probability complexity is linked directly to the probability of being wrong. Errors are part of the process and, as such, need to be handled or mitigated.
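One standard way such errors are mitigated is by repetition and majority voting, as in the bounded-error classes above; the sketch below is illustrative, with a toy decider that is right only two-thirds of the time.

```python
import random
from collections import Counter

def amplify(probabilistic_decider, x, runs=101):
    """Repeat a bounded-error probabilistic algorithm and take the majority
    vote; with error at most 1/3 per run, the combined error shrinks rapidly."""
    votes = Counter(probabilistic_decider(x) for _ in range(runs))
    return votes.most_common(1)[0][0]

# Toy decider that returns the right answer (x is even) only 2/3 of the time.
def noisy_is_even(x):
    correct = (x % 2 == 0)
    return correct if random.random() < 2 / 3 else not correct

print(amplify(noisy_is_even, 42))   # True with overwhelming probability
```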
Note that Figure 8 captures the relative topology, but not the algorithmic class size. How important each member will be in comparison is still to be determined.
Now that we have described some parts of measuring complexity, we can move on to the objective.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Exploitative** & **Exploratory** \\ \hline Specialized & General purpose \\ \hline Narrow & Broad \\ \hline Focused & Unfocused \\ \hline Known, well understood & Unknown, and less understood \\ \hline Best result & Good enough \\ \hline Turing complete/incomplete & Turing complete \\ \hline \end{tabular}
\end{table}
Table 1. Exploitation vs Exploratory
Figure 8. Probabilistic complexity (Wikipedia)
Figure 7. A Euler diagram on complexity - Computer Science
### Algorithm objective
An algorithm has to have some form of direction. The direction can take one of three forms _single objective_, _multi-objective_, and _objective-less_. A single objective means only one search goal, e.g., performance or power. Single search goals tend to be simpler problems to solve but not always. A multi-objective is more complicated with multiple search goals to satisfy, e.g., performance, power, and area. Multi-objective searches look for a compromise between the solutions. These compromises live on what is called the _Pareto optimal_, see Figure 9. Finally, objective-less relies on gaining new experiences in an environment rather than moving towards any particular goal (Krishnan et al., 2017). The idea is that by gaining experience, a better understanding of the problem domain occurs, thus allowing for significantly better solutions.
For most of this century, the focus has been on a single objective, but problems have changed in the last fifty years, and multi-objective problems are more typical. Figure 9 shows two objectives for an electric car. These objectives are the best acceleration on the x-axis and the lowest power consumption on the y-axis. **p1** represents the fastest option (e.g., fast tires, higher voltage, and performance electric motors), whereas **p2** represents the lowest power consumption (e.g., aerodynamic tires, voltage limited, and balanced electric motors). The edge between the two extremes is the _Pareto front_, a point on the front (e.g., **p3**) is _Pareto optimal_. The best solution must be a compromise between acceleration times and power consumption. There are no perfect solutions, just compromises of opposing objectives.
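A minimal sketch of extracting a Pareto front from candidate designs is shown below; the electric-car numbers are illustrative, and both objectives are treated as quantities to minimize (time to accelerate and energy use).

```python
def pareto_front(points):
    """Return the non-dominated points, assuming both objectives are minimized."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (acceleration time [s], energy use [kWh/100 km]) -- illustrative numbers
candidates = [(3.1, 22.0), (4.0, 17.5), (5.2, 14.0), (4.5, 18.0), (6.0, 13.9)]
print(pareto_front(candidates))   # (4.5, 18.0) is dominated and drops out
```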
As a counter-intuitive idea, objective-less is an exciting alternative. The algorithm is placed in an environment and then left to discover. Progress occurs when a new experience is discovered and recorded. An example of this exploration style is _Novelty Search_ (see Section 2.2.8). Potentially these algorithms can find new knowledge. The objective-based algorithms look for something known, and the solution is biased by implication. Whereas objective-less algorithms learn by experience, removing implicit bias.
As well as the objectives, there are constraints (or limitations). These constraints impose restrictions on any possible solution. For example, the conditions may include limited resource availability, specific time windows, or simply restrictions on power consumption. A general mathematical goal is to provide _constraint satisfaction_, where each object in the potential solution must satisfy the constraints.
### Environment and data
There are many properties concerned with problem-domain landscapes. The first is the environment the algorithms operate within; see Figure 10. For example, one environment could have the properties of solitary, relaxed, and completely observable. A domain can make the problem space more or less difficult to cover. The landscape diagram shows some of the variations. The variations act as a filter to determine which class of algorithms is more likely to be successful and rule out other ones that are unlikely. It seems common sense that an algorithm _should_ be chosen by first assessing the environment.
Data comes in many different forms. From simple _unimodal_ data with an obvious solution to the more complex data that includes noise or _deception_(Krishnan et al., 2017), see Figure 11. The format of the input data determines the algorithm.
By increasing the dimensions of the input data, we can extract hidden information (Krishnan et al., 2017). More advanced algorithms use this technique to handle more complicated problems. The hope is to remove some of the noise in the data. The opposite approach can also be valid; reducing dimensions simplifies the input data.
### Determinism, repeatability, and randomness
Algorithms have different characteristics; these include _determinism_, _repeatability_, and _randomness_. Taking each characteristic in turn: determinism is when an algorithm, given a set of inputs, provides a result in the same amount of time. Consistency is essential for applications that are time constrained. These applications come under the term _Real-Time_, where the algorithm flow consistently takes the same amount of time to complete. Real-Time is an arbitrary measurement since the definition of time can cover a wide range.
Figure 10. Landscape (Krishnan et al., 2017)
Figure 9. Multi-objective Pareto Curve
Repeatability is the concept that given the same input, the same output occurs. Many problems require this type of characteristic. It is fundamental to most mathematical equations. It is more aligned with perfection and means that the problem space is wholly understood.
Randomness is the opposite of repeatability, as the results can differ or even not occur. Randomness allows some degree of uncertainty to provide variation in the answers, i.e., flexibility in discovery. It goes against mathematical perfection because it allows for greater exploration of complex spaces. Algorithmic complexity can appear random because the patterns are so difficult to comprehend.
Random numbers in computers are called _pseudo-random_ numbers. This label is because they follow some form of artificial distribution. A _pseudo random number generator_ (PRNG) creates these numbers. As an important example, pseudo-random numbers can simulate noise. The simulated noise can help transition a high-dimensional problem into a more accessible lower-dimensional problem [33]. Achieving this transition occurs by replacing some of the state variables with guesses. This transition makes an otherwise impossible situation searchable (e.g., weather prediction).
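For illustration, a minimal pseudo-random number generator in the classic linear congruential style; the constants are one common textbook choice, not the only one.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator: a deterministic recurrence that
    produces pseudo-random numbers following an artificial distribution."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m    # uniform-ish float in [0, 1)

gen = lcg(seed=42)
noise = [next(gen) for _ in range(5)]
print(noise)
```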
### Prediction, causality, and counterfactual
_Prediction, causality_, and _counterfactual_ are at the cutting edge of what algorithms are capable of achieving. Prediction is probably one of the most exciting areas for modern algorithms--the ability to predict a future with some degree of certainty. Science as a discipline has had its challenges with prediction. Prediction is a difficult subject; it includes everything from software to determine the next actions for a self-driving car to consistent economic forecasting on a specific stock. Probably the best-known of all the algorithmic predictions is weather prediction. Weather prediction is highly accurate for the next three hours but becomes less certain as we increase the time.
Causality is the ability to show cause-and-effect [34]. The reason a pencil moved was that a person pushed the pencil. In many ways, humans want more than just an answer from an algorithm; they want to understand why. It is problematic for algorithms because it requires more understanding of how actions are connected and chained together.
Finally, there are counterfactuals--a combination of prediction and causality. Counterfactual is an alternative history where a decision not to do something affects a prognosis. This action can play with the future. Again an exciting area to play with from an algorithm point of view [34].
### Creativity and diversity
Creativity and diversity are terms primarily associated with humans rather than algorithms. Creativity is some form of inspirational jump that allows complex problems to be solved. There is an ongoing debate about whether an algorithm can be creative [12]. And if so, how do we measure creativity? If an algorithm paints a scenery, is it being creative? These are difficult questions, but when it comes to algorithms, this is the new frontier.
What is creativity? _Margaret Boden_, a Research Professor of Cognitive Science, broke down creativity into three useful mechanisms [5], namely _exploration_ - playing within the rules, _combination_ - applying one set of rules to another domain, and _transformation_ - rewriting the rules by removing a critical constraint. See Figure 12. For algorithms, exploration creativity is risk-averse and limited, and at the other end of the scale, the transformation has the highest risk with potential novelty.
On the same lines of creativity, we have diversity. Diversity brings about novelty or new solutions by offering variation in the algorithms. A diverse set of algorithms can search multiple directions in parallel.
### Byzantine style algorithms
Distributed systems, safety-critical systems, and financial systems need to have some resilience. Resilience is an attempt to avoid
Figure 11. Problem style [26]
Figure 12. Creativity [5]
wrong decisions. Wrong decisions occur due to system errors or an intentionally bad agent. This area is called the _Byzantine Generals Problem_, a description of which is outlined below:
1. _Lieutenant generals need to come to a decision._
2. _Unfortunately, there are potential traitors._
3. _How do the loyal generals decide on a correct decision?_
Redundancy is essential in several areas. For example, hardware or software has the potential to exhibit problems. If such a situation occurs, it is necessary to be able to mitigate the ramifications. This avoidance is crucial in safety-critical systems where failure results in harm or significant financial loss. The solution is to have redundancy and not rely on one or two agents for a decision.
### No free lunch theorem
When we described the Ideals in Section 1.1, we were skirting around the concept of a _free lunch_. This idealism is a reverse play on the _No Free Lunch_ (NFL) theorem. David Wolpert and William Macready formalized the theorem, and it states that "_all optimization algorithms perform equally well when averaged over all possible problems_" (Wolf, 2007). The theorem means no algorithm stands out as being better or worse than any other. Solving different problems involves specific knowledge of that problem area.
A modern algorithm has to deal with a knowledge question. The question is whether an algorithm starts from a clean slate (i.e., no knowledge, _Tabula rasa_) or from some captured experience (i.e., known knowledge, _Socratic recollection_). We need to decide how an algorithm starts: by learning with no expertise, or by giving the algorithm a jump start with knowledge.
### Network thinking
In algorithms, it is crucial to mention networks. Networks play an essential role in modeling and analyzing complex systems. Networks are everywhere, from the interconnection between neurons in the brain to aircraft flight patterns between airports. Electrical engineering uses networks to design circuitry. Probably one of the most famous technology networks is _The internet_ which allows us to communicate efficiently. We commonly describe networks in terms of _nodes_ and _links_. There are at least two helpful methods of describing networks:
* **Small-world networks** are networks where all nodes are closely connected, i.e., requiring only a small number of jumps. Most nodes are not neighbors but are closely linked. For example, the aircraft flight patterns we already mentioned. Another example is Karinthy's 1929 concept of _six degrees of separation_ (Krishnan, 1999; Krishnan, 2000), where just six links connect everyone.
* **A scale-free network**, follows a _Power law_ distribution. The Power law states that a change in one quantity will cause a proportional change in another. All changes are relative--for example, a social network.
_Preferential attachment_ is a process where a quantity distributes across several nodes. Where each node already has value. The nodes with more value gain even more, and the nodes with less value gain less. This value transfer is necessary when algorithms model _wealth distribution_ or _contribution-rewards in an organization_.
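A toy sketch of preferential attachment growth is given below; the seed graph and the number of links per new node are illustrative choices.

```python
import random

def preferential_attachment(n_nodes, links_per_new_node=2):
    """Grow a graph where each new node attaches to existing nodes with
    probability proportional to their current degree ('the rich get richer')."""
    edges = [(0, 1), (1, 2), (2, 0)]            # small seed triangle
    endpoints = [0, 1, 1, 2, 2, 0]              # each node listed once per edge end
    for new in range(3, n_nodes):
        chosen = set()
        while len(chosen) < links_per_new_node:
            chosen.add(random.choice(endpoints))  # degree-proportional pick
        for target in chosen:
            edges.append((new, target))
            endpoints.extend([new, target])       # high-degree nodes keep gaining
    return edges

print(len(preferential_attachment(50)))   # 3 seed edges + 2 per added node
```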
Lastly, network thinking is all about _modern graph theory_. Graph theory covers _graph knowledge models_ to _graph databases_ (e.g., temporal-spacial databases). It is an important area for algorithms, and it is constantly expanding.
## 2. Category
In this section, we will attempt to describe algorithms as categories or classes; it is a complicated process. Again more from the behavioral viewpoint. We will divide the world into three main categories _computer science_, _artificial intelligence_, and _quantum computing_. We cover what we think are the more interesting behavioral classes, but this is by no means exhaustive.
### Computer science, Cs
Computer science has been creating and formulating algorithms for the past 50 years. This length of time means there is an abundance of algorithms. In this subsection, we collect the various essential concepts. As an academic subject, computer science is still relatively young as a discipline, but it acts as a universal provider, as in it provides a service to all other fields. Note that we included the first two descriptions as fundamental concepts rather than algorithms.
#### 2.1.1. Cs, Propositional logic
Propositional logic is a language. It is used by algorithms to provide _Boolean_ answers, i.e., _True_ or _False_. By combining logic operations (_OR_, _AND_, _Exclusive OR_, and _NOT_) we can create an algorithm. Digital hardware circuits derive from propositional logic.
#### 2.1.2. Cs, Predicate calculus
Predicate calculus is a language. Algorithms use it to produce correct statements. This correctness means that all statements are provable and true within the algorithm. Symbols represent logical statements. For example, \(\forall x\in N:x^{2}\geq x\) translates to "_for all \(x\), where \(x\) is a natural number, it is true that \(x^{2}\) is equal or greater than \(x\)_". Thus satisfying the predicate calculus rules that all statements are sound and true. Another example, \(\exists x\in N:x\geq 42\) translates to "_there exists an \(x\), where \(x\) is a natural number, that \(x\) is greater than \(42\)_", again sound and true.
#### 2.1.3. Cs, Recursive algorithms
A _recursive algorithm_ calls itself. For example, \(f(x)=x-f(x-1)\), where the function \(f\) is on both sides of the equation. Recursive algorithms have interesting behavioral properties because they can converge or diverge. A convergent recursive function concludes. A divergent recursive algorithm never stops. In computer terms, memory resources can be pre-calculated in a convergent algorithm. Divergence means the opposite; an equation will fail to conclude, potentially resulting in an _out of memory_ error. We can use _proof-by-induction_ to determine correctness.
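A minimal sketch of the example above, with a base case added so that the recursion converges (without the base case the calls never conclude, and Python eventually raises `RecursionError`):

```python
def f(x):
    """f(x) = x - f(x - 1), made convergent by adding a base case."""
    if x <= 0:
        return 0        # base case: without it, the recursion diverges
    return x - f(x - 1)

print(f(10))            # 5: the calls unwind and the recursion concludes
```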
Recursion can be seen in many natural objects, for instance, leaves, trees, and snowflakes, so nature offers many examples where recursion is employed. Figure 13 shows recursion in the form of a fractal.
#### 2.1.4. Cs, Divide-and-conquer algorithms
_Divide-and-conquer_ algorithms separate a problem into easier-to-manage sub-problems. They help handle situations that are either too big or too difficult to tackle in their entirety.
These algorithms are best employed when solving the divided sub-problems, plus the added communication and distribution overhead, is cheaper than solving the problem whole. If true, a parallel architecture can lend itself to this type of problem, especially if the sub-problems are all solved deterministically, i.e., taking the same time to process. Divide-and-conquer also serves as a helpful method for debugging complex systems; this technique is closely related to the _Scientific Method_.
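A classic concrete instance (our own example, not from the text) is merge sort: divide the list in half, conquer each half, and combine the results.

```python
def merge_sort(items):
    """Split the problem in half, solve each half, then combine the results."""
    if len(items) <= 1:
        return items                       # small enough to solve directly
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # conquer each sub-problem...
    right = merge_sort(items[mid:])
    merged = []                            # ...then merge (the "combine" step)
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))         # [1, 2, 5, 7, 9]
```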
#### 2.1.5. Cs, Dynamic programming algorithms
_Dynamic programming_ (or DP) is related to the divide-and-conquer algorithms, see Section 2.1.4. Unlike divide-and-conquer, DP reuses the results of the sub-problems to find the optimum solution to the main problem. These algorithms are used to find shortest paths (e.g., map navigation) and optimal search solutions, and in operating system schedulers.
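A minimal sketch of the DP idea (our own illustration) using memoized Fibonacci numbers: each sub-problem is solved once, and its stored result is reused by the larger problem.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each sub-problem is solved once and its result reused (memoization)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(80))   # 23416728348467685, using a linear number of calls instead of exponentially many
```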
#### 2.1.6. Cs, Randomized algorithm
Randomized algorithms use artificially generated randomness to solve complex problems, e.g., molecular interactions. One of the most famous algorithms in this class is the _Monte Carlo_ algorithm. Monte Carlo takes an initial configuration, let us call it the _status quo_, and, driven by a probability distribution, randomly changes a state (e.g., on a coin, heads go to tails) [(27)]. A calculation then determines the energy of the new configuration. With that information, the energy acceptance criteria determine whether the new configuration becomes the status quo. These algorithms model _probabilistic real-world_ systems and, as such, require significant amounts of computational resources, not to mention the time taken to set them up. Their primary use is in particle physics, biochemistry, and financial modeling.
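A minimal sketch of one Metropolis-style Monte Carlo step on a toy "coins" system (the energy function, system size, and temperature are our own illustrative choices, not from the text):

```python
import math
import random

def energy(config):
    """Toy energy: the number of heads-up coins; lower energy means fewer heads."""
    return sum(config)

def metropolis_step(config, temperature):
    candidate = config[:]
    i = random.randrange(len(candidate))
    candidate[i] = 1 - candidate[i]          # flip one coin: heads <-> tails
    delta = energy(candidate) - energy(config)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate                     # new configuration becomes the status quo
    return config

config = [random.randint(0, 1) for _ in range(20)]
for _ in range(10_000):
    config = metropolis_step(config, temperature=0.5)
print(sum(config), "heads remain at low temperature")
```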
#### 2.1.7. Cs, Fractional factorial design
_Fractional factorial designs_ are included primarily because of their behavioral characteristics. They use the _sparsity-of-effects_ principle [(42)]. This principle brings out important features from the data; the claim is that only a fraction of the processing is needed to extract the most interesting data features. This is an important behavioral style for many problem domains, i.e., using less work to identify the most interesting features.
#### 2.1.8. Cs, Greedy algorithms
A _greedy algorithm_ discovers a solution a little piece at a time. These algorithms find near-optimal solutions for NP-hard style problems. The algorithm takes the direction of most advantage at each point (i.e., a local-optimum strategy). This algorithm is relatively easy to implement for optimization problems.
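A small sketch of the greedy pattern using coin change (our own example; with this canonical coin set the greedy choice happens to be optimal, but in general greedy algorithms only guarantee near-optimal results):

```python
def greedy_change(amount, denominations=(50, 20, 10, 5, 1)):
    """Take the largest coin that still fits at each step (local-optimum strategy)."""
    coins = []
    for d in denominations:          # denominations sorted largest first
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

print(greedy_change(88))             # [50, 20, 10, 5, 1, 1, 1]
```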
#### 2.1.9. Cs, Brute force algorithm
A _brute force_ algorithm, as the name implies, involves looking at every possible solution to find the best. In other words, no shortcuts. We use these algorithms when there are no alternatives; from a behavioral point of view, running them requires significant resources. They best suit problems with no known better solution and no pressing time constraints. Replacement of these algorithms occurs when there are economic or environmental pressures. For example, we can see this with _cryptocurrencies_ as they change from brute-force methods to more environmentally friendly ones, i.e., from _proof-of-work_ to _proof-of-stake_[(15)].
#### 2.1.10. Cs, Backtracking algorithm
_Backtracking_ algorithms move forward step by step. Constraints control each step. If a potential solution cannot achieve its objective, the path halts, and the algorithm backtracks to explore another possible solution path. This approach is robust at exploring different options for success. This algorithm can stop early if a solution option reaches a good-enough level of success.
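A compact sketch of backtracking on the classic N-queens puzzle (our own example): queens are placed row by row, and the search backs up whenever a constraint fails.

```python
def solve_n_queens(n, placed=()):
    """Place queens column by column; backtrack when a constraint fails."""
    row = len(placed)
    if row == n:
        return placed                      # all queens placed: a solution
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            solution = solve_n_queens(n, placed + (col,))
            if solution:
                return solution            # propagate the first solution found
    return None                            # dead end: backtrack

print(solve_n_queens(8))                   # e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```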
#### 2.1.11. Cs, Graph traversal algorithms
_Graph traversal_ is simply the process of visiting nodes in a graph. The visit can involve either reading or updating the node. There are different methods for ordering the visits, e.g., _depth-first_ and _breadth-first_: depth-first systematically pushes toward the farthest nodes first, while breadth-first visits all the nearest nodes first. Graphs are popular for many applications, including graph databases, spatial graph databases, and spatial-temporal graph databases.
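A breadth-first traversal sketch over a toy flight-pattern graph (the airports and edges are an arbitrary illustration, not from the text):

```python
from collections import deque

def breadth_first(graph, start):
    """Visit all the nearest nodes first, then move outward level by level."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)                 # "visit" the node (read or update it)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

airports = {"SEA": ["SFO", "LHR"], "SFO": ["SEA"], "LHR": ["SEA", "CDG"], "CDG": ["LHR"]}
print(breadth_first(airports, "SEA"))      # ['SEA', 'SFO', 'LHR', 'CDG']
```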
#### 2.1.12. Cs, Shortest path algorithm
Where graph traversal techniques are general graph algorithms, _shortest-path_ algorithms discover the shortest path between nodes in a graph. One of the more famous algorithms in this category is the _Dijkstra_ algorithm. A popular variant uses a source node and calculates the shortest path to any other node in the graph. Common usages include navigating between two points on a map.
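A minimal single-source Dijkstra sketch using a priority queue (the road network and weights are our own toy example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from the source node to every other node."""
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    queue = [(0, source)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue                        # stale entry, already improved
        for neighbour, weight in graph[node]:
            candidate = dist + weight
            if candidate < distances[neighbour]:
                distances[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return distances

roads = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)], "C": [("B", 2), ("D", 7)], "D": []}
print(dijkstra(roads, "A"))                # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```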
#### 2.1.13. Cs, Linear programming
Figure 13. Recursive fractal (Public Domain)

Figure 14. Divide and conquer

_Linear programming_ is a mathematical modeling technique where a linear function is either maximized or minimized under constraints. The behavioral outcome makes processes more efficient or economically cost-effective. This behavior means that any problem that requires more efficiency can take advantage of linear programming, whether the situation involves improving energy distribution or mathematical problem solving.
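As a hedged illustration, the toy profit-maximization problem below uses SciPy's `linprog` routine (the objective, constraints, and numbers are our own, and SciPy is an assumed dependency):

```python
from scipy.optimize import linprog

# Maximize profit 3x + 2y subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0.
# linprog minimizes, so the objective is negated.
result = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [1, 3]],
                 b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])

print(result.x, -result.fun)   # optimal plan [4, 0] and the maximized profit 12
```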
### Artificial intelligence, Ai
_Artificial intelligence_ encompasses many algorithms, from object recognition (i.e., correlation) to far-out attempts to create artificial life. We can crudely subdivide the subject into schools of thought: _connectionist_, _evolutionist_, _Bayesian_, _analogizer_, and _symbolist_. In the early days, artificial intelligence covered everything we could not yet achieve. Today, a broader definition is used that defines the subject by its problems. The general goal is to tackle evermore difficult challenges where the path is less well known.
As with Computer science, we do not cover all the algorithms in the field, but we will try to cover some interesting behavioral classes.
#### 2.2.1. Ai, Reinforcement learning
_Reinforcement learning_ is a reward-style algorithm. The algorithm rewards a path that gets closer to a solution, encouraging forward progression. The disadvantage of this single-minded approach is that it may overlook a better solution; nevertheless, it is a powerful algorithmic technique. The single-mindedness also makes it potentially dangerous without some form of safeguard.
#### 2.2.2. Ai, Evolutionary algorithms
_Evolutionary algorithms_ use a synthetic form of evolution to explore problem domains. If we consider the world as a two-dimensional graph with data on the x-axis and algorithms on the y-axis, neural networks live near the x-axis, and evolutionary algorithms live near the y-axis. They modify algorithms, either by playing with the variables or by creating and modifying the potential solutions directly. Similar to biological evolution, there is, for the most part, a population of potential solutions, and that population goes through generational changes. These changes occur through _mutation_ and _crossover_. Each member of the population, in a generation, is valued by its _fitness_ towards the potential solution. A population can either start as an initial random seed (i.e., _tabula rasa_) or with a known working solution. We use these algorithms for optimization and discovery. These algorithms are powerful, especially when the problem domain is too big or the solution is beyond human knowledge (Sundhi et al., 2017).
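A deliberately small genetic-algorithm sketch (the target string, population size, and mutation rate are arbitrary illustrative choices): a random population evolves toward the target through selection, crossover, and mutation.

```python
import random

TARGET = "algorithms"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    """Number of characters already in the right place."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

# Tabula rasa: start from a random population and evolve it generation by generation.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:50]                    # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(150)]
    population = parents + children
print(generation, population[0])                 # usually reaches "algorithms"
```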
#### 2.2.3. Ai, Correlation
_Correlation_ allows pattern recognition with a level of certainty. For example, "we are 87% sure that the orange is behind the pineapple". Neural nets provide recognition together with a measure of confidence. _Convolutional neural networks_ and _deep learning_ rely on this technique to solve problems. Hyperparameters configure the network, which is a complicated process. The network learns a response using training data, and the quality of the training data determines the effectiveness of the correlation. This technique is good at image and speech recognition.
#### 2.2.4. Ai, Gradient descent
_Gradient descent_ is about finding the best path down a steep hill. The technique reduces the cost (loss) of a neural network model by repeatedly taking the steepest descent. In other words, it is about finding a good minimum of a differentiable function while avoiding poor local minima. As the algorithm proceeds, it adjusts the various parameters of a neural net model. Both machine learning and, more specifically, deep learning use this technique.
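A minimal sketch of the idea on a one-dimensional function (the function, learning rate, and step count are illustrative choices, not from the text):

```python
def gradient_descent(gradient, start, learning_rate=0.1, steps=100):
    """Repeatedly step opposite the gradient, i.e. down the steepest slope."""
    x = start
    for _ in range(steps):
        x = x - learning_rate * gradient(x)
    return x

# Minimize f(x) = (x - 3)**2, whose gradient is 2 * (x - 3); the minimum is at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), start=0.0))   # ~3.0
```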
#### 2.2.5. Ai, Ensemble learning
_Ensemble learning_ uses multiple learning algorithms to obtain better predictive performance than could be obtained from any single algorithm. Like predicting the weather, numerous futures are provided, from the extremes to the most likely. An ensemble learning system consists of a set of different models, with diversity in structure (and hyperparameters). Its outward behavior is to produce several solutions in the hope of finding a better one.
#### 2.2.6. Ai, Random forest
_Random forest_ is a type of ensemble learning. Random forests apply to classification (assigning items to classes based on shared properties), regression (predicting a continuous value from input variables), and other tasks, by building multiple _decision trees_ (trees of decisions leading to a consequence). The random forest method generally outperforms individual decision trees, but its accuracy can be lower than that of other methods.
#### 2.2.7. Ai, Continuous learning
Continuous learning has a long history with traditional _evolutionary algorithms_. An algorithm is left to learn in an environment with resources and continuously adapts. Experiments have shown these algorithms exhibit the most significant learning jumps at the beginning of the cycle; as time progresses, jumps become ever fewer, if they occur at all. This situation can alter if changes occur in the environment (e.g., additional resources or objectives).
#### 2.2.8. Ai, Novelty search
_Novelty search_ is a different approach to, say, reinforcement learning (see Section 2.2.1): the rewards are for encountering new experiences rather than for moving nearer to a goal (Sundhi et al., 2017). Novelty search has the behavioral advantage of not requiring an initial objective for learning to occur. For example, learning to walk can occur by learning how to fall over; falling over is not directly linked to the act of walking.
#### 2.2.9. Ai, Generative adversarial network, GAN
_Generative adversarial network_ is, in fact, two networks, each vying to attain a different goal. One side is a creator (i.e., the generative network), and the other is a critic (i.e., the discriminative network). The creator's role is to outsmart the critic, which means the creator learns to mimic the dataset. From a behavioral view, this algorithm can create a fake version (or deep-fake output) of existing material. As of writing this text, this technique is displacing many traditional learning algorithms.
#### 2.2.10. Ai, Supervised learning
This class is a more generalized version of the correlation mentioned in Section 2.2.3. _Supervised learning_ is a system using input-output pairs. In other words, known input samples connect to known outputs. The system learns how to connect the input to the output. The input data is labeled. Currently, the majority of machine learning is of this form. From a behavioral view, this algorithm requires _human_ supervision, as the name implies. The quality of training data is paramount.
#### 2.2.11. Ai, Unsupervised learning
_Unsupervised learning_ is the opposite of supervised learning (Section 2.2.10). The input data is
unlabeled, meaning no labels are attached to the samples; for example, no pictures labeled "cat". This algorithm class attempts to find connections between the data and the output. Unsupervised means no human has gone along labeling the data. From a behavioral view, this is a desired attribute but one that is more challenging to control and get right --for example, there is a risk of connecting uninteresting features to an outcome.
#### 2.2.12. Ai, Self-supervised learning
_Self-supervised learning_ is a compromise between supervised and unsupervised learning. It learns from unlabeled sample data, similar to unsupervised learning. What makes it an in-between form is a two-step process: the first step initializes the network with pseudo-labels, and the second step applies either supervised or unsupervised learning to the partially trained model from step one. From the behavioral view, it shows little outward difference from supervised and unsupervised learning.
#### 2.2.13. Ai, Bayesian probability
_Bayesian probabilism_ treats what is known with classical logic; new variables represent the unknowns, and probabilities express how much remains _unknown_. This class of algorithms is best for robotics, particularly Simultaneous Localization and Mapping (SLAM). These algorithms are good at reconciling multiple sources of information to establish the best-known consensus. For example, while a robot tracks its path, it always runs on a minimal amount of information and makes decisions based on statistical likelihood.
#### 2.2.14. Ai, Knowledge graphs
As the name implies, _knowledge graphs_ use data models as graph structures. The graphs link information together. The information can be semantic (hold meaning). Knowledge-engine algorithms use knowledge graphs to provide question-answer-like services, i.e., expert systems. These algorithms, in theory, can provide some form of causality mapping since the graphs store the knowledge relationships.
#### 2.2.15. Ai, Iterative deepening \(A^{*}\) search
Lastly for artificial intelligence, we include one traditional algorithm from the past. _Iterative deepening \(A^{*}\) search_ is a graph traversal search algorithm that finds the shortest path between a start node and a set of goal nodes in a weighted graph. It is a variant of iterative deepening search since it is a depth-first search algorithm. We use these algorithms in simple game playing --for example, tic-tac-toe or, potentially, chess.
### Quantum computing, Qc
_Quantum computing_ is relatively new in the context of algorithms since hardware devices are rare and, if not difficult, at least different to program. When describing the world of quantum computing in a few paragraphs, it quickly becomes apparent that we could slide into an overly complex explanation. To avoid some of the complexity and remain relatively helpful, we decided to explain quantum computing from the perspective of how it differs from classical digital computing [(8)]. Keep in mind quantum computers require a lot of classical computing to operate.
Quantum computers follow a different set of rules or principles. These rules come from atomic and subatomic particle physics, i.e., the notoriously complicated world of _quantum mechanics_[(4; 14)]. Classical digital computing uses transistors to implement bits. Quantum computers use even smaller atomic-scale elements to represent quantum bits, also known as qubits. A qubit is the unit of information for quantum computers. Where transistors represent either 0 or 1 (binary), qubits represent 0 and 1 simultaneously by including a continuous phase. Qubits, therefore, have the unique property of simultaneously being in a combination of all possible states. This fundamental principle of quantum mechanics is called _superposition_. Superposition enables a quantum computer to have non-classical behavior. This non-classical behavior means we can probe many possibilities at the same time [(19)], with the potential for saving considerable energy over classical computing.
Another principle used in quantum computers is _entanglement_. The basic concept is that quantum systems can correlate so that the measured state of one can be used to predict the state of another. This connection enables the construction of quantum computers to include linkages between sets of qubits. It remains consistent with other physical principles, such as _causality_, which limits communication to no more than the _speed of light_. Entanglement and superposition are intimately connected, in subtle ways, even across long distances.
Relative to a classical computer, a contemporary quantum computer has higher error rates, better data analysis capabilities, and continuous intermediate states. Compare this with classical computing, which has far lower error rates, is better for day-to-day processing, and uses discrete states [(8)]. A quantum computer comprises a set of qubits, data transfer technology, and a set of classical computers. These classical computers initialize the system, control it, and read the results. Where the qubits carry out the computation, the transfer technology moves _information_ in and out of the machine. At the conclusion of the calculation, measurements of quantum states will return definite classical values at the outputs. Different runs on the same inputs return a distribution of results whose magnitudes squared are interpreted as a probability distribution. After completing the quantum calculation and measurements, classical computers do error correction, interpolation, and filtering across many runs of the same program. The information involves quantum (superposing and entangling qubits) and classical (configuring the initial values and reading out the classical final values). Modern quantum computers can efficiently collect statistics. These statistics provide more probable answers to more complex problems instead of definitive answers to simpler ones.
As with computer science and artificial intelligence, we will explore quantum algorithms, not from the quantum computing perspective but from the algorithm perspective, i.e., _what can they do?_ In keeping with the previous discussions, we provide a subset of algorithms. This technology has enormous potential but may be ten or more years away. We believe it is essential to include this area when exploring future algorithms.

Figure 15. Sum-over-histories (Feynman diagram)
#### 2.3.1. Qc, Shor's algorithm
_Shor's algorithm_ appears to be currently the most important, or most practical, algorithm in quantum computing. Shor's algorithm takes an integer \(N\) and returns its prime factors, and achieves this in polynomial time (see Section 1.7). Rather than a classical _Fourier transform_, Shor's algorithm uses the quantum Fourier transform to find the factors. This behavior has implications for cryptography: it is an exponential speedup over what a classical digital computer can do, enough to break the types of cryptographic codes used for authentication and signatures (e.g., common methods such as RSA or ECC). This capability gives quantum computers the concerning potential to break today's security algorithms.
**Note**: In the press and academia, we now hear the term _Post-Quantum Encryption_ (PQE). PQE is a classical computing response to this capability: a set of classical algorithms resistant to quantum methods. Many meta versions can exist because they can be dynamically updated and modified, making the approach less reliant on the underlying hardware.
#### 2.3.2. Qc, Grover's algorithm
_Grover's algorithm_ is probably the next most important quantum algorithm. Grover's is a quantum search algorithm: it carries out an unstructured search and, as such, is potentially useful for speeding up database searches (although classical algorithms could turn out to be equally competitive). The algorithm finds, with high probability, the unique input that produces a particular output value, and in theory it achieves this in \(O(\sqrt{N})\) time, where \(N\) is the size of the function domain.
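Running Grover's algorithm needs quantum hardware, but the amplitude-amplification step it relies on can be simulated classically on a small state vector. The sketch below (database size and marked entry are arbitrary choices) shows the marked item's probability approaching 1 after roughly \((\pi/4)\sqrt{N}\) iterations:

```python
import numpy as np

# Classical state-vector simulation of Grover amplitude amplification.
N = 64
marked = 42

state = np.full(N, 1 / np.sqrt(N))             # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1                         # oracle: flip the marked amplitude
    state = 2 * state.mean() - state            # diffusion: inversion about the mean

probabilities = state ** 2
print(probabilities[marked])                    # close to 1 after ~6 iterations
```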
#### 2.3.3. Qc, Quantum annealing
Quantum annealing (QA) is a _metaheuristic_. A metaheuristic is a problem-independent algorithm normally operating at a higher level. Metaheuristics look at a set of strategies to determine the best approach. Quantum annealing finds a given objective function's extreme, either minimum or maximum, over a set of solutions. In other words, it finds the procedure that finds an absolute minimum for size, length, cost, or distance from a possibly sizable set of solutions. Quantum annealing is used mainly for problems where the search space is discrete with many extremes --limited only by available resources.
#### 2.3.4. Qc, Adiabatic quantum computation (AQC)
Adiabatic quantum computation is reversible (Becker, 1998). The word adiabatic means _no heat transfer_, allowing for reversible computing, see Section 1.3. Calculations occur as a result of the adiabatic theorem. Optimization is the first application for these algorithms, but there are potentially many others. It is an alternative to the circuit model (from digital computing). This alternative makes it useful for both classical and quantum computation.
#### 2.3.5. Qc, Quantum walks
Lastly, _Quantum walks_ is the quantum equivalent of the classical random walk algorithm (Kolmogorov, 1959). Similar to the other quantum solutions, a quantum walk operates with different rules. On certain graphs, quantum walks are faster and, by implication, more energy efficient than the classical equivalent (Kolmogorov, 1959).
## 3. Hardware Options
_"without hardware, we don't have algorithms, and without algorithms, there is no purpose for the hardware"_
Even though we are mindful of analog solutions and the exciting developments in quantum hardware, we will focus primarily on digital solutions. We are also aware of Moore's Law's limitation, which may affect the future direction of computation, e.g., neuromorphic computing, DNA computing, analog computing, or unconventional computing. Maybe over time, there will be a change of emphasis toward analog, but today, digital systems lead. Digital systems include some form of traditional Turing machine. Turing machines are either fully implemented (e.g., a general-purpose processor moving towards the Principle of Universality) or partially implemented devices (e.g., a specialized accelerator missing some control elements).
Compute systems fall into two categories based on input data. The data is either embarrassingly helpful (i.e., sequential or parallel) or embarrassingly unhelpful. For the former, embarrassingly helpful, we design specialized hardware. For the latter, embarrassingly unhelpful, we design universal hardware to accommodate a broader range of problems.
It is challenging to map hardware developments to algorithm advancements, so we created a best guess using chronological ordering and observed effects; see Table 2. We combined inflection points and technology jumps to represent hardware advancements. This list is endlessly complicated, even when constrained, and it is not always clear when a technology caused an effect. If we look at history, what does it show us? It shows at least three emerging patterns:
1. Long-term serial speed-ups were the priority for hardware until relatively recently. In more modern times, we can see a steady increase in parallelism at all levels of the computation stack: endless gains in parallelism from bit patterns to network scaling. Parallelism gives performance advantages for problems that can explicitly exploit such designs. This exploitation allows for more sophisticated algorithms and for scaling out.
2. The history of hardware has seen a constant struggle for and against the _Principle of Universality_. In other words, a continuous fight between computing engines that can handle everything (_Turing-complete_) and specialized ones (_Turing-incomplete_).
3. The sophistication of hardware has increased exponentially, allowing algorithms to be more capable and less efficient. Optimization is more complicated, and hitting an unusual anomaly is more likely.
## 4. Next Algorithms
_What next for algorithms?_ Making any prediction is difficult, but there are some standard frameworks we can apply. Firstly, we must determine where future algorithms will likely come from and why. To get this moving, we look at the existing algorithms; let us call this the \(\alpha\) ("alpha") future.
The \(\alpha\) future involves taking what we have and improving either the performance or efficiency of the algorithms. Conversely, we could also take an older algorithm and run it on modern hardware.
Both these concepts are increments. These improvements should occur naturally and not necessarily cause a change, apart from maybe more modularization and specialization. Algorithms that fit this category include _fast Fourier transforms_, _geometric mathematics_, and _regular expressions_. These algorithms rely on steady incremental improvements in both software and hardware.
Next, and remaining with the \(\alpha\) future, are the algorithms that initially start executing on much larger computer systems and eventually migrate over time to smaller systems. We predict that many of today's offline cloud algorithms, requiring specialized computation and vast resources, will ultimately become online algorithms on mobile devices (assuming some form of Moore's law still exists). For example, learning will move from the cloud to smaller devices in the coming decades. Learning algorithms are offline high-intensity applications, so processing does not occur in real time. A shift towards lighter real-time mobile variations will likely happen in the future, partly due to privacy concerns and partly due to economic ones (end users foot the bill).
The improvement in the \(\alpha\) algorithms allows a jump (not an increment) in new future algorithms. These new algorithms represent what we call the \(\alpha^{\prime}\) ("alpha prime") future. Many technological advances coincide, e.g., speech recognition, natural language processing, and gesture recognition. These technologies require improvements in existing algorithms and can be combined to help
\begin{table}
\begin{tabular}{l l l} \hline Year & Cause & Effect \\ \hline
1912 & JK Flip flop & Start of Boolean logic in circuits \\
1914 & Floating point & Algorithms to handle real-world problems \\
1936 & Turing machine & Universal computation model \\
1943 & Finite automata & Original pattern recognition method \\
1945 & ENIAC & First programmable computer, draft EDVAC report \\
1946 & Automatic Computing Engine (ACE) & RISC-style computation \\
1948 & Digital Signal Processing & Analysis and processing of continuous data \\
1954 & SR, D, T & Adding temporal logic \\
1955 & Finite State Machine (FSM) & Complex pattern matching \\
1959 & Metal-Oxide-Semi. Field-Eff. Transistor (MOSFET) & Enabled far more sophisticated algorithms \\
1961 & Transistor-transistor logic (TTL) & Continuing to enable increased complexity \\
1961 & Virtual memory & Decoupling from physical constraints \\
1964 & IBM System/360 & Inflection point in architectures \\
1965 & Memory Management Unit & Standard control of decoupling \\
1966 & Single Instruction, Multiple Data (SIMD) & Fast method of handling one dimensional arrays \\
1967 & Virtualization & Make the underlying hardware virtual \\
1968 & IBM ACS-360 SMT & Full utilization of the processor \\
1971 & Intel 4004 & Allow for hard-coded algorithms (no stack) \\
1972 & Single Instruction, Multiple Threads (SIMT) & SIMD array processing algorithms \\
1972 & Packed SIMD & Speed-up software CODECs \\
1973 & Ethernet & Distributed (networked) algorithms \\
1975 & Dataflow & Execution flows on context \\
1976 & Harvard cache & Separating data and instruction efficiencies \\
1976 & RCA's 'Pixie' video chip GPU & Geometry based algorithms \\
1978 & Ikonas RDS-3000 (claimed first GPGPU) & Machine learning \& Cryptocurrency \\
1979 & Motorola 68000 (CISC) & Execute sophisticated programming languages \\
1979 & Berkeley RISC & Change in algorithms to support load/store \\
1981 & Quantum computing & Probabilistic mathematics i.e., qubits \\
1983 & Networks (ARPANET) & Adoption of the TCP/IP protocol \\
1984 & SPMD & Messaging passing \\
1984 & VLIW & Instruction level parallelism \\
1985 & Intel 80386 & Mainstream MMU \\
1990 & Liquid Crystal Display & Ubiquitous flat screen monitors \\
1994 & Beowulf clusters & Using standard hardware for massive scale \\
1997 & Samsung DDR memory & Algorithms could be bigger and faster \\
2006 & MPMD & Games console (Sony PlayStation 3) \\
2013 & MIMD systems & Ubiquitous parallel compute nodes \\
2013 & Predicated SIMD & Associative processing \\
2020 & Unified memory & Heterogeneous compute sharing memory \\ \hline \end{tabular}
\end{table}
Table 2. Cause-and-effect table for classical hardware (best guess)
discover the new \(\alpha\) algorithms (Grover et al., 2017; Grover et al., 2017). For example, natural language processing will likely assist in creating future algorithms, i.e., moving away from programming languages toward negotiation, where we negotiate with existing ideas to produce a new solution. We need this to happen if we want to explore more of the natural world. It requires us to hide some development details, makes problem exploration more abstract, and leverages existing knowledge.
This abstraction means algorithm construction will likely change in that algorithms will be designed for re-purposing, modification, and interface negotiation. The modifiable part is to allow for multi-objective goals (see Section 1.9). Many next-generation algorithms expect insertion into bigger systems, where discovery and negotiation occur. Traditional methods are too rigid for the next set of problem domains. This change means algorithm development is more about setting the multi-objectives and goal-driven negotiation than gluing pieces of low-level code together. Below are some interface types that might be involved in an \(\alpha\) future:
* potentially entirely machine-driven, no human standard mechanisms are defined. In other words, an automation system works out its communication language. For example, Facebook proved this possible when two machine learning systems created their language for communication (Krishnan et al., 2017).
* **Static interfaces**, human, with a strict mathematical interface. Restricted to the lowest level, where the compute node or pointers-to-memory provide basic types.
* **Dynamic interfaces**, again human and strict mathematical, similar in characteristics to static interfaces but not fixed to the image, i.e., late binding.
* human, less mathematical, unstructured, and a non-standard, proprietary concept. Powerful since it can handle heterogeneous systems.
* human, semantic representation, structured so that the data can be discovered and assessed. Data is self-described over raw data.
* **Evolutionary interfacing**, where a method of evolution decides on a dynamic process of interfacing. The interface changes depending on workloads and a temporal element.
Lastly, we have the \(\beta\) ("beta") future. The \(\beta\) future for algorithms is more about the computing substrate. The substrate may change into an exotic computing zoo, e.g., quantum computing, DNA computing, optical computing, biological computing, and unconventional computing. Silicon will remain the digital workhorse, but the edge of algorithm development may shift, and with this change come very different approaches to algorithms and datatypes. It is exciting to look at new algorithms on these new computing options.
As well as the substrate, the types of algorithms in a \(\beta\) future are different--for example, artificial general intelligence. Artificial general intelligence is currently a theoretical goal to create a universal intelligence that can solve many problems. A significant part of these algorithms is driving the decomposition (breaking a problem into smaller sub-problems) and recombination (taking the sub-problems and putting them back together to solve the main problem). These algorithms are much more general solutions than specialized ones, which is important as we try to handle problem domains at a much larger scale.
### Major meta-trends
We see potentially three overarching meta-trends occurring in the \(\alpha\), \(\alpha^{\prime}\), and potentially \(\beta\) futures, namely _parallelism_, _probability_, and _interaction_. Under those headings, we can link other subjects, such as artificial intelligence, quantum computing, and computer science.
#### 4.1.1. Parallelism
_Is there any more parallelism to be extracted?_ We have taken what we call algorithmic _structural parallelism_ (e.g., data-level parallelism, instruction-level parallelism, and thread-level parallelism) to an extreme. Structural parallelism is when a problem breaks neatly into similar-looking sub-problems--covering embarrassingly sequential and parallel problem domains. But there are other forms of parallelism, for example, biological parallelism. Independent cells work together in parallel to form complex structures. Analogous to these natural processes are Carl Hewitt's concept of _actors_(Hewitt, 1998) and John Holland's view on complexity (Hollands, 2000).
An actor is a small computational element comprising compute, memory, evolving rules, and adaptable communication (i.e., message based). Actors can have _reason-response_ capabilities. The actor model moves away from mathematics and appears more like particle physics. Together with evolutionary techniques, actors can solve complex problems. Actors have to be free-flowing and loosely connected; the loose connections are the reason for more message-style communication interfaces. We have not fully utilized these technologies because of (a) the success of classical parallel systems, (b) the lack of biological-level scaling, and (c) the lack of sufficiently advanced interfacing. Actors will likely play a much more important role in the future as more evolutionary technologies glue everything together.
#### 4.1.2. Probability
Computer science has long predicted the importance of probability. As we approach limitations in computation, uncertainty will start to dominate. We believe the subsequent algorithms will have to link directly or indirectly to probability. Whether quantum or classical computing, they all rely on statistical approximation over precision and accuracy.
For quantum computing, error correction will be paramount unless we can make them less noisy (highly unlikely). Error correction will most likely have to reside in classical computing. Even though quantum computing opens up new possibilities for algorithms in one direction, it causes problems in another (error correction). It is worth pointing out that voting systems (a class of Byzantine algorithms) are likely to become more common. Error correction codes are suitable for a specific problem, whereas voting systems are helpful for system-level corrections.
#### 4.1.3. Interaction
_Interaction_ is a different take on future algorithms. _What does the term interaction mean in this context?_ It has to do with the process of creating new algorithms. This meta-trend brings new algorithms, old algorithms, and humans together (Grover et al., 2017). In other words, future systems can solve problems by selecting an old algorithm or creating a new one. This flexibility is made possible by building advanced tools to explore the algorithm space, i.e., algorithms exploring algorithms. We are starting to see this with new programming languages and libraries that allow for greater expressiveness.
This greater expressiveness improves productivity by combining new tools with future user interface technologies (e.g., natural
language processing, speech recognition, gesture recognition, pattern recognition, and goal-oriented design). We can see a meta-trend toward a more integrated algorithm exploration framework, moving away from implementation details. This transition is only made possible by the advancements in digital hardware.
The interaction is to accommodate all the added complexity, vast data, and navigation required to find a new algorithm. No longer can algorithm development occur without such advanced tooling. This becomes especially interesting if the tooling involves _virtual reality_ (VR) and _augmented reality_ (AR). _Advancing mathematics by guiding human intuition with AI_[(11)] by Davis et al., published in Nature in 2021, highlights that machine learning, with human assistance, is beginning to tackle hard mathematical problems. As algorithms become more expressive, they can be re-applied, with human assistance, to create even more algorithms, solving previously intractable problems.
To provide a glimpse of future capabilities, let us look briefly at _human intent_ as part of a negotiated search for a new algorithm. Human intent is essential for many technology areas since it is about second-guessing what a human intends to do. This guessing may become one of algorithm development's most potent tools, allowing a computer system to understand the intent behind a human objective.
### Other trends in algorithms
In this section, we discuss some of the other potential trends in algorithms, some are just a continuation of existing trends, but others are emerging trends that may become important.
1. _Automation of everything_: algorithms continue to automate activities at every level, for example, the automation required to process package distribution within a warehouse.
2. _Growth of exploration over exploitation_: exploitation remains common practice, but the use of exploration is rising. We want to understand more about the natural world. This change will increasingly occur as we shift to problems beyond human capability.
3. _Mathematics becomes a target_. Algorithms can optimize older equations and formulas [(18)]. Algorithms can search a much bigger problem domain than any human in the past [(18)]. We will see the optimization of traditional mathematics using modern techniques. A future mathematician will most likely be very different from one of the past.
4. _Spiking neurons_: basically, any algorithms that allow backward information feed, i.e., from a higher level back to a lower level, improve recognition or optimization. This biological method can potentially reduce the size of the required networks and, in many cases, could enhance the quality of results (a connection to neuromorphic computing).
5. _Development through negotiation_: already mentioned but worth repeating is the creation of algorithms that allow humans to define goals. The goal is a starting point for navigating complex problem domains. Solutions are developed over time through negotiation [(45)].
6. _Pressure on better knowledge representation_. Knowledge representation is at the core of all the activities around algorithms. The pressure is due to the problem domains becoming more complicated. This trend will continue, and we will likely see an expansion of basic data types.
7. _Pattern recognition through increased dimensionality_. With increasing resources, adding data dimensions will continue as a style of pattern extraction from complex and noisy data.
8. _Single-shot learning algorithms_: the ability to learn with minimal data sets. We see a continuous trend to reduce the data required to train the next-generation algorithms.
9. _Sparse data structures_ will continue to be important. Using these structures is a desperate attempt to reduce resource requirements and improve performance. This measure becomes especially important for algorithms that require enormous data sets.
10. _Prediction_: predictive models will appear in every critical area, from social decision-making to instruction pre-fetchers. This trend continues unabated for the foreseeable future.
11. _Physical three-dimensional algorithms_: algorithms that deal with the layout and positioning of physical components. These are important for 3D transistor layout, virtual reality systems escaping the real world, and augmented reality systems that add to the real world.
12. _Mapping between physical and virtual worlds_ will increase in importance. This mapping is required if we want to accelerate the adoption of simulation, i.e., transferring environments quickly into virtual representations.
13. _Byzantine algorithms_ become more critical as society deploys machine learning models. Multiple-model voting increases the likelihood of a correct prediction; machine learning will move quickly in this direction to avoid biases.
14. _Generative Adversarial Networks (GANs)_ continue to be more successful and valuable alongside traditional machine learning systems.
15. _Built-in multi-objective capability_. These options allow for more of a weather-prediction-type approach to solutions. The variations can range from maximum optimization to zero optimization, or from likely to most unlikely; see Figure 9.
## 5. Auxiliary Support
We have created a list of what might be next for algorithms, but what about auxiliary support?
* **Multi-level randomness**. The requirement for much more sophisticated randomness. Having multiple levels of _good_ randomness becomes imperative. What we mean by good randomness is having everything from, as near as possible, true randomness to pseudo-randomness. We require an ability to dial randomness to a particular level, including some form of stochastic numbers.
* **Probabilistic operators**, an extension of operations to include probabilistic helper functions. We have integer, fixed-point, and floating-point operations. We need probabilistic operators.
* **P-adic datatypes and operators**, we add this as a potential auxiliary extension. p-adic extends the standard number systems adopted by digital computers (Srivastava et al., 2017; Wang et al., 2018). Based on prime numbers, p-adic allows for a different style of flexibility from the traditional extension of real and complex numbers.
* **Stochastic rounding**, is already gaining momentum. It rounds real numbers using a probability distribution compared to the traditional fixed rounding up or rounding down to the nearest number (Bauer et al., 2016; Srivastava et al., 2017). This method is increasing in popularity, especially with the machine learning community, opening the door to lower precision numbers.
* **Biological neuron mimicking**. If we compare artificial and biological neurons, the biological neurons have many more links. We predict a change in the base neuron for future machine learning.
* **Memory management optimization**, as memory systems become more sophisticated and complicated, we need new methods to help algorithms optimize memory efficiency and usage. This help may reside in hardware or future tooling.
* **Agent- or chaos-based parallelism**, as mentioned previously, structural parallelism continues at all levels. Still, there is potential for hardware-assisted agent-based or chaos-based parallelism.
* **Error correction** is an old subject with a new set of focuses coming from critical areas such as quantum computing, probabilistic systems, and traditional digital systems where geometry shrinkage goes to the limits. It matters for any system that operates at the boundary of stability, where minor errors can result in significant problems (Bauer et al., 2016).
* **Spatial-temporal datatypes**, as we move into more graphing problems (e.g., Virtual Reality, Augmented Reality, Digital Twin, and Physical Simulators), there is a requirement to make spatial-temporal datatypes a first-class citizen. A universal datatype for physical systems with potential sparse recursive scaling coordinate systems and velocity coordinates to represent _n-body_ problems (Srivastava et al., 2017).
## 6. Conclusion
We have taken a journey through the algorithm world. It has involved crawling through tunnels, jumping over fences, and running across fields. As with many subjects, we picked a few tunnels, fences, and fields, realizing this is a staggeringly small subset of the ideas. The algorithm world is complex, dynamic, and full of old and exciting new directions. As we see it today, there is an undertone that probability and statistics have an increasingly critical role in tackling complex applications. These problem solutions are less amenable to simple absolutes.
Over time, the application focus has shifted from calculating the ordinance range for artillery to pattern correlation using machine learning. These shifts have transitioned the algorithm world in different directions. We see, through our journey, new transitions towards data-directed outcomes, adaptability, and meta-learning.
_Data-directed outcomes replace rule-based systems_. This transition is not new; we see this as a continuation. Rule-based systems can handle specific problem domains, but they fail when a pattern occurs that was not pre-programmed, i.e., they lack flexibility. Data-directed outcomes can circumvent, within reason, many of these problems, which are difficult for rule-based systems. For rule-based systems, the value often is in the exceptions, not the rules, and for data-directed outcomes, the value comes down to the quality of the training data. We may see a mixture of the two systems, with rule-based systems ensuring the other operates within required boundaries.
_Adaptability replaces precision._ Precision deals with perfection, whereas adaptability handles imperfection. For example, we design robots to precise specifications, and algorithms rely on that precision for length, pressure, and movement. Adaptability in robots means algorithms that constantly learn and change as the robot matures or the environment changes. In other words, the new algorithms handle environmental changes, wear, and poor design.
_Meta-learning enhances capability._ Increasing capability is important; it is a horizontal activity, see Figure 16. We want to expand algorithm capability so that algorithms tackle evermore exciting tasks. At the same time, we also want to accelerate the actual learning and creative process. Meta-learning is the process of _learning to learn_, moving away from specific problems and focusing on common patterns. These generalizations force algorithms up the abstraction tree, in that common patterns can transfer to other problem types. In the coming decades, we predict much more activity around meta-learning and integrating more abstract approaches. In addition, this could mean more weather-predicting style algorithms, i.e., ensemble prediction, that provide a range of solutions with different characteristics. In other words, a group of solutions with different probabilities of certainty and uncertainty (Srivastava et al., 2017; Wang et al., 2018). We can potentially build these systems using multiple models based on physics and probability. It allows us to explore the unlikely so we can create anticipating actions with cost-loss attributes. For example, moving people out of danger to avoid an unlikely but possible catastrophic weather system (Kalalain, Sunfah, Gawhwal, and Kumar, 2021).

Figure 16. Meta-Learning
In any modern algorithm discussion, it is essential to mention quantum algorithms. The quantum world is attractive for its potential energy saving advantages (Bentent et al., 2020). We are still in an exploratory phase and starting to learn how to build basic systems and determine possible algorithms. One of the many concerns about quantum computing is the quick drive to optimization before the benefits are genuinely discovered, i.e., the race to be valuable. Also, the problem domains that quantum computing can explore may not be that exciting, and traditional computation may remain dominant for the majority. One true unarguable benefit of quantum computing is exploring the quantum world itself (Bentent et al., 2020). Quantum algorithms' ultimate achievement may be to push classical computing in new directions.
_Hic sunt dracones_ ("_Here be dragons_") is the term used at the beginning of this exploration. The term describes the world beyond the edge of the map. We are entering an exciting time around algorithms and what they can accomplish, but concerns about how they can be abused or used for badness come with that excitement. For example, we have seen algorithms manipulate people on a mass scale through social media. Or the various fake images and videos depicting people saying fictional opposites or non-truths. Just the mechanical process of _validation_ and _verification_ becomes more of a challenge as algorithms exhibit more extraordinary capabilities.
We want algorithms to be benevolent in our society. We have seen how algorithms can influence people away from acting in their best interests. For this reason, we provided a list of ideals at the beginning of our journey. These ideals are possible areas of further exploration, but they are not rules. At best, guidelines.
Lastly, _Richard Hamming_, _Albert Einstein_, _Neil deGrasse Tyson_, and many others pointed out a common mistake: assuming the new is just like the past. The mistake prevents many from contributing significantly to the next revolution. For Hamming, this thought came as he observed the transition to digital filters (Bent et al., 2020).
## 7. Acknowledgments
The OPEN DASKALOS PROJECT is an open project to help explain complicated Computer Science concepts. Each paper is open and undergoes a continuous process of refinement, i.e., a snapshot in thinking. We thank the great algorithm writers for creating such exceptional solutions. We would also like to thank the reviewers: Hannah Peeler for the initial feedback, and Paul Gleichauf for posing such hard questions throughout the editing process.
## Open DASKALOS PROJECT SERIES:
**Intelligence Primer (2nd Edition)**, May 2022
by Karl Fezer and Andrew Sloss
_Intelligence is a fundamental part of all living things, as well as the foundation for **Artificial Intelligence**. In this primer we explore the ideas associated with intelligence and, by doing so, understand the implications and constraints and potentially outline the capabilities of future systems. Artificial Intelligence, in the form of Machine Learning, has already had a significant impact on our lives._
|
2302.09332 | Incipient Fault Detection in Power Distribution System: A Time-Frequency
Embedded Deep Learning Based Approach | Incipient fault detection in power distribution systems is crucial to improve
the reliability of the grid. However, the non-stationary nature and the
inadequacy of the training dataset due to the self-recovery of the incipient
fault signal, make the incipient fault detection in power distribution systems
a great challenge. In this paper, we focus on incipient fault detection in
power distribution systems and address the above challenges. In particular, we
propose an ADaptive Time-Frequency Memory(AD-TFM) cell by embedding wavelet
transform into the Long Short-Term Memory (LSTM), to extract features in time
and frequency domain from the non-stationary incipient fault signals.We make
scale parameters and translation parameters of wavelet transform learnable to
adapt to the dynamic input signals. Based on the stacked AD-TFM cells, we
design a recurrent neural network with ATtention mechanism, named AD-TFM-AT
model, to detect incipient fault with multi-resolution and multi-dimension
analysis. In addition, we propose two data augmentation methods, namely phase
switching and temporal sliding, to effectively enlarge the training datasets.
Experimental results on two open datasets show that our proposed AD-TFM-AT
model and data augmentation methods achieve state-of-the-art (SOTA) performance
of incipient fault detection in power distribution system. We also disclose one
used dataset logged at State Grid Corporation of China to facilitate future
research. | Qiyue Li, Huan Luo, Hong Cheng, Yuxing Deng, Wei Sun, Weitao Li, Zhi Liu | 2023-02-18T13:54:15Z | http://arxiv.org/abs/2302.09332v1 | Incipient Fault Detection in Power Distribution System: A Time-Frequency Embedded Deep Learning Based Approach
###### Abstract
Incipient fault detection in power distribution systems is crucial to improve the reliability of the grid. However, the non-stationary nature and the inadequacy of the training dataset due to the self-recovery of the incipient fault signal, make the incipient fault detection in power distribution systems a great challenge. In this paper, we focus on incipient fault detection in power distribution systems and address the above challenges. In particular, we propose an ADaptive Time-Frequency Memory (AD-TFM) cell by embedding wavelet transform into the Long Short-Term Memory (LSTM), to extract features in time and frequency domain from the non-stationary incipient fault signals. We make scale parameters and translation parameters of wavelet transform learnable to adapt to the dynamic input signals. Based on the stacked AD-TFM cells, we design a recurrent neural network with ATtention mechanism, named AD-TFM-AT model, to detect incipient fault with multi-resolution and multi-dimension analysis. In addition, we propose two data augmentation methods, namely phase switching and temporal sliding, to effectively enlarge the training datasets. Experimental results on two open datasets show that our proposed AD-TFM-AT model and data augmentation methods achieve state-of-the-art (SOTA) performance of incipient fault detection in power distribution system. We also disclose one used dataset logged at State Grid Corporation of China to facilitate future research.
power distribution system, incipient fault detection, wavelet transform, recurrent neural network, attention mechanism, LSTM, data augmentation
## I Introduction
The power distribution system delivers electricity from the transmission system to individual consumers and is an inseparable part of people's lives and society. Real-time fault detection plays an important role in maintaining the stability of power equipment [1, 2, 3]. In particular, certain anomalies occur before a permanent fault develops in the power distribution system; these are called incipient faults [4].
An incipient fault may occur at any time and at any place in the distribution network. Containing a large number of non-fundamental transient components, the voltage and current time series data show strong randomness and non-stationary characteristics when an incipient fault occurs [5]. In addition, as incipient faults in the distribution network are self-recovering and self-concealing, only a small amount of data can be logged by traditional fault recorders, which makes incipient fault detection a huge challenge [6, 7, 8].
Detection of incipient faults allows maintenance personnel to replace defective equipment in advance, effectively improving power supply reliability. It is also a kind of predictive maintenance, where detecting failures at an early stage helps to avoid unexpected disruptions. There are two mainstream methodologies for fault detection in power distribution systems. The first is the traditional fault classification method, in which manually selected features are extracted from the filtered current and voltage time series signals and then matched against pre-set feature thresholds or patterns to detect the corresponding fault types [9, 10, 11, 12, 13]. For example, in [14], a method based on human-level concept learning is proposed, which selects waveform features of current and voltage and decomposes them into primitives to detect faults. In [15], an online model based on a sequential Bayesian approach is proposed by splitting power quality abnormalities of the continuous current. These methods are easy to implement; however, manually selected features and thresholds rely heavily on expert knowledge and are not sufficiently capable of characterizing complex non-stationary signals to be well applied to incipient fault detection in distribution systems.
With the help of artificial intelligence (AI), data driven methods are also applied to incipient fault detection in power distribution system [16, 17, 18, 19]. Due to the complex causes and electrical characteristics of incipient faults in power distribution system, it is difficult to establish a comprehensive mathematical model. On the other hand, data driven methods are more effective to deal with incipient fault diagnosis in the power distribution network, and can detect some unknown faults. For example, Long Short-Term Memory (LSTM) utilizes memory units instead of hidden layers in traditional Recurrent Neural Network (RNN), which constructs a more powerful model over time series using contextual information, and shows good performance on time series estimation. In particular, regarding the voltage and current time series data, LSTM cell is utilized to build a deep RNN architecture to automatically extract features and perform fault detection [20]. In [21], a LSTM based network is proposed by learning low-resolution data from a real case study to detect incipient faults.
However, due to the lack of frequency-domain analysis, LSTM-based schemes cannot fully extract features of time series data, especially for the non-stationary incipient fault signal [22]. Besides, incipient faults in power distribution systems are usually of short duration and self-recovering, which leads to the unavailability of sufficient samples to train the LSTM network and thus exacerbates the difficulty of data-driven incipient fault detection methods.
In this paper, to improve the feature extraction ability from random and non-stationary signal of incipient faults, we propose an ADaptive Time-Frequency Memory (AD-TFM) cell which embeds adaptive wavelet transform into LSTM. Specifically, we first use wavelet transform to decompose different frequency signals existing in a non-stationary signal into non-overlapping frequency bands. At each time step, the learned wavelet transform coefficients are multiplied by the input signal to obtain the time and frequency domain features. The coefficients are stored in AD-TFM and propagated to next time step to improve feature extraction abilities. In addition, we make the scale parameters and translation parameters of wavelet transform learnable to automatically adapt to the input signal, which can achieve multi-resolution and multi-dimensional analysis of non-stationary incipient fault signals.
Then we construct an AD-TFM cell based RNN model to perform incipient fault detection in power distribution systems. To focus the neural network on global hidden information, we strengthen the stacked AD-TFM network by adding an ATtention layer, i.e., a new AD-TFM-AT model is implemented. The correlations of the hidden state outputs at all time steps of the AD-TFM cells are calculated in the attention layer. Then, the correlation degrees are used to compute a weighted average of the hidden states of all time steps. By improving the attention of the neural network to the time steps containing fault information and increasing the feature extraction ability over the hidden information of all time steps, the fault detection accuracy is further improved. To enlarge the training samples of incipient fault data and improve the detection performance of AD-TFM-AT, we propose two data augmentation methods, namely phase switching and temporal sliding. Based on the available small incipient fault dataset [23] and a relatively large dataset logged in State Grid Corporation of China, our proposed method achieves state-of-the-art (SOTA) performance.
The main contributions of this paper are as follows:
1. We propose an AD-TFM cell based on adaptive wavelet transform, which performs feature extraction at different scales to effectively deal with the non-stationary incipient fault signals of power distribution system. We design AD-TFM-AT, an attention assisted AD-TFM based RNN model, which increases the weight of the most relevant hidden states in fault detection and guides the feature fusion process.
2. We propose two effective data augmentation methods, i.e., phase switching and temporal sliding, which swap the faulted phase of the voltage and current data with the remaining phases and intercept each fault with different starting points, respectively. These methods effectively expand the small fault dataset in the power distribution system and improve the training performance.
3. We conduct extensive experiments on two open datasets, and the results show that our proposed AD-TFM-AT model and data augmentation methods achieve SOTA performance of incipient fault detection in power distribution systems. We also disclose a relatively large dataset logged at State Grid Corporation of China to facilitate future research1. Footnote 1: [https://github.com/smartlab-hfut/SGAH-datasets](https://github.com/smartlab-hfut/SGAH-datasets)
The rest of the paper is organized as follows. Section II presents the latest research related to incipient fault detection. Section III explores the random and non-stationary characteristics of incipient faults in the distribution network based on a simplified circuit model. Section IV introduces our proposed AD-TFM cell based on adaptive wavelet transform and LSTM. Section V shows the hierarchical structure of the AD-TFM-AT model, and two methods of data augmentation are explained in Section VI. In Section VII, we show the performance of data augmentation and the incipient fault detection accuracy on two datasets. Finally, Section VIII concludes this manuscript.
Note that this paper is an extended version of our previous conference paper [24]. Different from [24],
this paper analyzes the non-stationarity of fault signals based on a simplified distribution network circuit model. In addition, an adaptive wavelet transform with learnable scale and translation parameters is proposed, which realizes multi-resolution analysis of fault signals. We also add an attention mechanism to the neural network to enhance the focus on time-step hidden states that are highly correlated with fault classification, which further improves the network's ability to detect incipient faults.
## II Related Work
This section explains the latest research related to incipient fault detection.
### _Faults in power distribution systems_
The incipient faults are transient events that occur at random locations and are pre-emptive hidden faults before permanent faults occur. Incipient faults in the power distribution network usually occur in underground cables [25, 26, 5], transformer equipment [27, 28], distribution networks with high Distributed Energy Resources (DERs) penetration [29], and so on. There are many reasons for incipient faults, such as tree interference, animal contact and coil contact [14]. When an incipient fault occurs, the fault-phase voltage and current change, the waveform is distorted, and the fault transient signal exhibits non-stationary characteristics. Meanwhile, incipient faults are typically self-clearing and have a short duration, ranging from a quarter of a cycle (sub-cycle) up to four cycles (multi-cycle) [14]. Thus, incipient faults are less well documented, and detection methods for incipient faults are much needed.
### _Traditional fault detection methods_
Traditional fault detection methods, which mainly include similarity detection [30, 31], waveform eigenvalue decomposition [32, 33, 34], and model parameter estimation [35, 36, 37],
have been used for fault detection in power distribution system. For example, in [30], an expression for the transient zero sequence current characteristics under the influence of the inverter is derived, and a method based on the first-order accumulated generation operator (AGO) and the improved cosine similarity is proposed to identify the faulty feeders. In [32], the wavelet singular value decomposition is applied to obtain the edge components of the normalized fault current amplitude for fault detection. In [37], the transformer state space model, linear parameter varying (LPV) observer, primary and secondary voltage values are used to estimate the primary current at each time step of the transformer. The estimated primary currents are compared with the actual primary currents to distinguish whether the transformer is internally or externally faulty.
The above methods rely on the manual extraction of features and then achieve fault identification based on rules or thresholds set by manual experience. In distribution networks with different parameters, the threshold values set by various methods vary greatly. Moreover, these methods lack the analysis of the non-stationarity of faults, which limits the application and effectiveness of traditional fault detection methods.
### _AI based fault detection methods_
With the wide application of AI, methods based on machine learning and deep learning are applied to fault detection. From the computer vision point of view, there are schemes using Convolutional Neural Networks (CNN) [38, 39]. Recently, hybrid approaches for fault recognition have also been proposed [40, 41]. For example, in [42], zero-sequence currents are transformed into spectrograms by short time Fourier Transform, and then a two-channel CNN is constructed to achieve fault classification. In [43], a Multi-layer Long Short-Term Memory Network (MLSTM) is applied to voltage waveform analysis in order to detect whether a fault occurs in the grid. In [40], a hybrid statistical learning and machine learning approach is proposed to identify fault-inducing regions in photovoltaic (PV) farm based on micro Phasor Measurement Unit (PMU) measurement data. However, the above AI-based methods lack the analysis of fault signal features, and do not fully take into account the non-stationary nature of incipient faults in the power distribution network.
### _Solution with insufficient training data_
Fault data scarcity is also an important issue faced by fault detection using deep learning methods, and several methods have been proposed to cope with this problem [44, 45]. In particular, in [44], various pre-trained models are fine-tuned on different substations through transfer learning and federated learning. However, [44] requires a large number of deployable substation resources and edge-cloud communications, which incurs a large cost overhead. In [45], the fault current data are decomposed into multilayer wavelet coefficients which are fused into a matrix. The matrix is then mapped into a phase space image with three channels (RGB) by colormap indexes as the input of the classification model. Two data enhancement methods are proposed therein. The first one is to change the colormap indexes randomly to obtain phase space images with different color domains. The second one is to convert the phase space images from RGB color mode to HSV (Hue, Saturation, Value) mode, change the Hue channel value to generate different images, and then convert the images back to RGB mode to achieve data enhancement. In [45], only the color mapping index and Hue of the phase space images of the coefficient matrix are changed to obtain different color graphs of the same fault data, and no new fault information is actually generated. In summary, the existing methods mainly address the problem of insufficient incipient fault data in distribution networks through transfer learning and fault image data enhancement. These methods perform data enhancement in terms of increasing the fault data acquisition surface or changing the fault data mapping, and they neither actively generate new fault information nor consider fault characteristics.
## III Incipient faults and their features
The faults in power distribution systems can be divided into sub-cycle faults, multi-cycle faults and permanent faults according to their durations. Among them, sub-cycle and multi-cycle faults are called incipient faults [25]. Sub-cycle incipient faults are characterized by an abnormal fault-phase voltage that recovers within one cycle, while multi-cycle incipient faults mainly include interphase short-circuit faults; permanent faults include high-resistance grounding faults, single-phase grounding faults and main transformer faults. Several typical fault waveforms are shown in Fig. 1.
Taking the single-phase grounding fault occurring in overhead lines of the power distribution network as an example, the simplified circuit model contains two inductors and one capacitor, as illustrated in Fig. 2. Due to the presence of the inductors, the current in the line cannot change suddenly, which causes a short-circuit transient process, and there are a large number of integer and non-integer harmonics in the voltage and current signals. As the characteristic frequency components in the transient process are not fixed, the current signal flowing through the capacitor contains fault information and is non-stationary.
### _Transient capacitive current_
Based on Fig. 2b, the differential equation of transient capacitance current can be expressed as:
\[R_{0}i_{C}+L_{0}\frac{di_{C}}{dt}+\frac{1}{C}\int_{0}^{t}i_{C}=U_{m}\sin(\omega t +\varphi), \tag{1}\]
where \(U_{m}\) is the amplitude of the zero sequence voltage. The transient capacitive current \(i_{C}\) is composed of the transient free oscillation component \(i_{C,os}\) and the steady-state power frequency component \(i_{C,st}\). When a single-phase grounding fault occurs, \(i_{C,os}+i_{C,st}=0\) and \(I_{Cm}=U_{m}\omega C\). The transient capacitive current can then be calculated as:
\[i_{C}=i_{C,os}+i_{C,st}=I_{Cm}\left[\left(\frac{\omega_{f}}{\omega}\sin\varphi\sin\omega_{f}t-\cos\varphi\cos\omega_{f}t\right)e^{-\delta t}+\cos(\omega t+\varphi)\right], \tag{2}\]
Fig. 1: Typical incipient fault signals. (a) Sub-cycle incipient fault. (b) Multi-cycle incipient fault. (c) High resistance grounding fault. (d) Single-phase grounding fault. (e) Two-phase grounding fault. (f) Interphase short circuit fault.
where \(I_{Cm}\) is the amplitude of the transient capacitive current, \(\omega_{f}\) is the angular frequency of the transient free oscillation component, \(\delta=1/\tau_{C}=R_{0}/(2L_{0})\) is the attenuation coefficient of the free oscillation component, and \(\varphi\) is the phase angle of the phase voltage when the fault occurs.
If \(R_{0}\) is less than \(2\sqrt{L_{0}/C}\), the transient process of the loop current has periodic oscillation and attenuation characteristics. Otherwise, the loop current has aperiodic oscillation attenuation characteristics, and gradually tends to be in a stable state.
### _Transient inductive current_
The inductive current of arc suppression coil is composed of transient DC component and steady-state AC component, which is expressed as:
\[i_{L}=I_{Lm}\left[\cos\varphi\,e^{-t/\tau_{L}}-\cos(\omega t+\varphi)\right], \tag{3}\]
where \(\tau_{L}\) is the time constant of the inductance circuit, \(I_{Lm}=\frac{U_{m}}{\omega L}\), and \(\varphi\) is the phase angle of the phase voltage at the fault.
The fault signal contains a large number of non-fundamental transient signals, consisting of high-frequency components, non-periodic components and a large amount of fault or disturbance information. The transient component of the fault signal is a non-stationary random process, which changes with time, the location of the fault point, the transition resistance of the fault point and the different operating conditions of the system. Through the above analysis of the transient process, it can be seen that when a single-phase grounding fault occurs in the power grid, the fault signal is non-stationary.
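To make the non-stationarity described above concrete, the short sketch below evaluates the transient capacitive and inductive currents of Eqs. (2)-(3) over a few power-frequency cycles; every numeric parameter value is an assumption chosen only for illustration, not a value taken from the paper.

```python
import numpy as np

# Illustrative parameter values (assumptions, not values from the paper).
f = 50.0                        # power frequency, Hz
omega = 2 * np.pi * f           # fundamental angular frequency
omega_f = 2 * np.pi * 800.0     # assumed free-oscillation angular frequency
delta = 200.0                   # assumed attenuation coefficient R_0 / (2 L_0)
tau_L = 0.05                    # assumed time constant of the inductance circuit, s
I_Cm, I_Lm = 10.0, 5.0          # assumed current amplitudes, A
phi = np.pi / 3                 # phase angle of the voltage at fault inception

t = np.linspace(0.0, 0.1, 5000)  # 0.1 s, i.e. five power-frequency cycles

# Eq. (2): decaying free oscillation plus steady-state power-frequency component.
i_C = I_Cm * ((omega_f / omega * np.sin(phi) * np.sin(omega_f * t)
               - np.cos(phi) * np.cos(omega_f * t)) * np.exp(-delta * t)
              + np.cos(omega * t + phi))

# Eq. (3): decaying DC component plus steady-state AC component.
i_L = I_Lm * (np.cos(phi) * np.exp(-t / tau_L) - np.cos(omega * t + phi))

# The exponentially decaying terms make the spectral content of i_C and i_L
# change over time, i.e. the fault transient is non-stationary.
```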
## IV Time-Frequency Memory Cell Based on Adaptive Wavelet
To extract the dynamic characteristics of fault parameters in power distribution networks, we introduce the wavelet transform, which can accurately analyze non-stationary signals, into the LSTM cell, and change the forget gate of the LSTM into a joint forget gate, which decomposes fault information in both the time and frequency domains. Besides, we establish an adaptive learning mechanism for the scale and translation parameters of the wavelet transform, and propose the AD-TFM cell that can accurately model the non-stationary incipient fault signal.
### _Basic Idea of AD-TFM_
The traditional method that combines the wavelet transform and a neural network for fault detection usually uses the wavelet transform to extract fault features, which are then fed into the neural network for classification [20]. In this method, the wavelet transform is separated from the neural network, and the error generated during feature extraction has a large impact on the accuracy of fault classification in the later stage. To solve this problem, we propose AD-TFM by embedding the wavelet transform into the traditional LSTM cell, changing the originally fixed scale and translation parameters into dynamic parameters that change with the input fault information.
### _Structure of AD-TFM Cell_
The structure of AD-TFM cell is shown in Fig. 3, which consists of joint forget gate, input gate, output gate and cell state updating. It dynamically models the input, i.e., three-phase current and voltage time series \(\{x_{t}\mid t=1,2,...,T\}\) by continuous time steps. In each time step of AD-TFM, the hidden state of the previous time step and the input information of the current time step are decided by the joint forget gate, and the input gate selects the information to be updated. In the cell state updating part, the input information after adaptive wavelet transform and the information retained by the joint forget gate are added to update the cell state, and then the updated cell state is input to the output gate to obtain the hidden state at the current time.
In this process, the non-stationary analysis is achieved by converting the input three-phase voltage and current data into time-frequency features using efficient time modeling (via LSTM) and non-stationary signal processing (i.e., wavelet transform).
### _The Joint Forget Gate_
The joint forget gate contains three parts: the state forget gate \(f_{t}^{ste}\), the time forget gate \(f_{t}^{tim}\) and the frequency forget gate \(f_{t}^{fre}\), which decompose the input and the hidden state of the previous time step into the \(K\) dimension in the time domain, the \(J\) dimension in the frequency domain and the \(D\) dimension in the state domain, respectively.
Fig. 2: Overhead line in power distribution network and its simplified circuit model. (a) Overhead line. (b) Simplified circuit model.
\[f_{t}^{ste}=sigmoid\left(W_{ste}x_{t}+U_{ste}h_{t-1}+b_{ste}\right)\in\mathbb{R }^{D}, \tag{4}\]
\[f_{t}^{tim}=sigmoid\left(W_{tim}x_{t}+U_{tim}h_{t-1}+b_{tim}\right)\in\mathbb{ R}^{K}, \tag{5}\]
\[f_{t}^{fre}=sigmoid\left(W_{fre}x_{t}+U_{fre}h_{t-1}+b_{fre}\right)\in\mathbb{ R}^{J}, \tag{6}\]
where \(W_{*}\) and \(U_{*}\) are weight matrices. \(b_{*}\) is a bias vector and \(h_{t-1}\) is the output hidden state at the (\(t-1\))th time step. Among them \(*\) refers to \(ste\), \(tim\), \(fre\).
The outputs of the three forget gates are used to obtain \(F_{t}\), by jointly regulating the state, time and frequency information.
\[F_{t}=f_{t}^{ste}\otimes f_{t}^{tim}\otimes f_{t}^{fre}\in\mathbb{R}^{D\times J \times K}, \tag{7}\]
\[FC_{t}=F_{t}\circ C_{t-1}\in\mathbb{R}^{D\times J\times K}, \tag{8}\]
where \(\otimes\) is the outer product operation and \(\circ\) is the element-wise multiplication operation.
The joint forget gate determines the amount of information retained from the previous time step to the current step. It can be considered as a combination gate, which controls the information of different frequencies, times and states flowing into the memory cell.
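As a minimal sketch of the joint forget gate in Eqs. (4)-(8), the snippet below computes the three gates for one time step and combines them into the joint gating tensor via outer products; all dimensions and weights are placeholder assumptions made only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dimensions: input size M, state dim D, time dim K, frequency dim J.
M, D, K, J = 6, 8, 4, 5
rng = np.random.default_rng(0)

W_ste, U_ste, b_ste = rng.normal(size=(D, M)), rng.normal(size=(D, D)), np.zeros(D)
W_tim, U_tim, b_tim = rng.normal(size=(K, M)), rng.normal(size=(K, D)), np.zeros(K)
W_fre, U_fre, b_fre = rng.normal(size=(J, M)), rng.normal(size=(J, D)), np.zeros(J)

x_t = rng.normal(size=M)          # current three-phase voltage/current sample
h_prev = np.zeros(D)              # hidden state from the previous time step
C_prev = np.zeros((D, J, K))      # previous cell state tensor

f_ste = sigmoid(W_ste @ x_t + U_ste @ h_prev + b_ste)   # Eq. (4), state gate in R^D
f_tim = sigmoid(W_tim @ x_t + U_tim @ h_prev + b_tim)   # Eq. (5), time gate in R^K
f_fre = sigmoid(W_fre @ x_t + U_fre @ h_prev + b_fre)   # Eq. (6), frequency gate in R^J

# Eq. (7): joint gate F_t in R^{D x J x K} as the outer product of the three gates.
F_t = np.einsum('d,j,k->djk', f_ste, f_fre, f_tim)

# Eq. (8): element-wise gating of the previous cell state.
FC_t = F_t * C_prev
```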
### _Input Gate_
The formulations of the input gate \(i_{t}\) and the input modulation \(g_{t}\) are similar as these of LSTM:
\[i_{t}=sigmoid\left(W_{i}x_{t}+U_{i}h_{t-1}+b_{i}\right), \tag{9}\]
\[g_{t}=tanh\left(W_{g}x_{t}+U_{g}h_{t-1}+b_{g}\right), \tag{10}\]
\[ig_{t}=i_{t}\circ g_{t}, \tag{11}\]
where the \(ig_{t}\) is defined to generate a compatible result for the input gate.
The input gate decides how much new information should be allowed to enter the current memory cell to update AD-TFM.
### _Cell state updating based on adaptive wavelet transform_
The state updating procedure of AD-TFM is similar to LSTM. By integrating the adaptive wavelet transform, the output of the input gate needs to be multiplied by the coefficients of the adaptive wavelet transform when the AD-TFM cell is updated. The output of the input gate is decomposed by the wavelet transform into \(K\) and \(J\) dimensions in the time domain and frequency domain, respectively.
Fig. 3: Structure of AD-TFM cell.
Taking the Morlet wavelet transform used in this paper as an example, the implementation function of the adaptive learning of the scale parameters \(a\) and translation parameters \(b\) are:
\[a=tanh\left(W_{a}ig_{t}+b_{a}\right), \tag{12}\]
\[b=tanh\left(W_{b}ig_{t}+b_{b}\right). \tag{13}\]
The output of the input gate, which is decomposed by the wavelet transform, is expressed as follows.
\[\psi_{k,j}=exp\left(i\cdot\frac{\omega_{0}}{a}\cdot\left(\frac{t+b}{2^{j}}-k\right)\right)\cdot exp\left(-\frac{1}{a}\cdot\left(\frac{t+b}{2^{j}}-k\right)^{2}\right). \tag{14}\]
Then the cell state after decomposition can be obtained as:
\[C_{t}=FC_{t}+ig_{t}\otimes\psi_{k,j}\in\mathbb{R}^{D\times J\times K}, \tag{15}\]
where \(C_{t-1}\) is the cell state at the previous time step, and \(FC_{t}\in\mathbb{R}^{D\times J\times K}\) and \(i_{t}\in\mathbb{R}^{D}\) are the forget and input gates, respectively, controlling the past and current information on states, time and frequencies that are allowed to update the AD-TFM at the \(t\)th time step.
As a complex number can be uniquely represented by its amplitude and phase, we decompose the update matrix \(C_{t}\) of AD-TFM into two parts, amplitude and phase, which are expressed as:
\[A_{t}=\mid C_{t}\mid=\sqrt{\left(ReC_{t}\right)^{2}+\left(ImC_{t}\right)^{2}} \in\mathbb{R}^{D\times J\times K}, \tag{16}\]
\[\angle C_{t}=arctan\left(\frac{ImC_{t}}{ReC_{t}}\right)\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]. \tag{17}\]
where \(Re\) and \(Im\) are the functions of taking the real part and taking the imaginary part, respectively. \(arctan\left(\cdot\right)\) is an element-wise inverse tangent function.
The amplitude will be fed into the memory cell for the next time step, while the phase is discarded, since keeping it brings no performance benefit and only incurs higher computation and memory overhead.
At each time step, we calculate the component \(A_{t}^{k,j}\) of the amplitude \(A_{t}\) in the \(k\)th dimensional time domain and the \(j\)th dimensional frequency domain. \(A_{t}^{k,j}\) will be sent to next time step state cell unit of AD-TFM, and the forget gate and input gate determine the information that needs to be updated. After the update, \(A_{t}^{k,j}\) is combined into \(\widetilde{c_{t}}\), and enters the output gate, which is expressed as:
\[\widetilde{c_{t}}=\sum\nolimits_{k=1}^{K}\sum\nolimits_{j=1}^{J}\left(W_{e}^{k,j}A_{t}^{k,j}+b_{e}^{k,j}\right). \tag{18}\]
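Continuing in the same spirit, the sketch below walks through Eqs. (9)-(18) for a single time step: input gating, the learnable scale and translation parameters of Eqs. (12)-(13), the Morlet-type coefficients \(\psi_{k,j}\) of Eq. (14), the complex cell-state update of Eq. (15), and the amplitude read-out of Eqs. (16)-(18). All dimensions and weights are placeholders, treating \(W_{e}^{k,j}\) as a scalar per grid cell is an assumption, and the scale is shifted to stay positive purely for numerical stability of this illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dimensions: input size M, state dim D, time dim K, frequency dim J.
M, D, K, J = 6, 8, 4, 5
rng = np.random.default_rng(1)
x_t = rng.normal(size=M)                 # current three-phase voltage/current sample
h_prev = np.zeros(D)                     # previous hidden state
FC_t = np.zeros((D, J, K))               # gated previous cell state from Eq. (8)
omega0, t_step = 6.0, 1.0                # assumed Morlet centre frequency and time index

W_i, U_i, b_i = rng.normal(size=(D, M)), rng.normal(size=(D, D)), np.zeros(D)
W_g, U_g, b_g = rng.normal(size=(D, M)), rng.normal(size=(D, D)), np.zeros(D)
W_a, b_a = rng.normal(size=(1, D)), np.zeros(1)
W_b, b_b = rng.normal(size=(1, D)), np.zeros(1)

i_t = sigmoid(W_i @ x_t + U_i @ h_prev + b_i)        # Eq. (9): input gate
g_t = np.tanh(W_g @ x_t + U_g @ h_prev + b_g)        # Eq. (10): input modulation
ig_t = i_t * g_t                                     # Eq. (11)

scale_a = np.tanh(W_a @ ig_t + b_a)                  # Eq. (12): learnable scale
shift_b = np.tanh(W_b @ ig_t + b_b)                  # Eq. (13): learnable translation
scale_a = np.abs(scale_a) + 0.5                      # keep the scale positive (assumption)

# Eq. (14): complex wavelet coefficients on a J x K frequency-time grid.
j_idx = np.arange(1, J + 1)[:, None]                 # frequency (dyadic) index
k_idx = np.arange(1, K + 1)[None, :]                 # time index
arg = (t_step + shift_b) / (2.0 ** j_idx) - k_idx
psi = np.exp(1j * (omega0 / scale_a) * arg) * np.exp(-(1.0 / scale_a) * arg ** 2)

# Eq. (15): cell state update, outer product of the gated input with psi.
C_t = FC_t + np.einsum('d,jk->djk', ig_t, psi)       # complex tensor of shape (D, J, K)

# Eqs. (16)-(17): keep the amplitude, discard the phase.
A_t = np.abs(C_t)

# Eq. (18): fold the time-frequency amplitudes back into a D-dimensional vector.
W_e, b_e = rng.normal(size=(J, K)), 0.0
c_tilde = np.einsum('jk,djk->d', W_e, A_t) + b_e     # input to the output gate, Eq. (20)
```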
### _Output gate_
The output gate determines the information that will be fed into next time step. The input of the output gate can be expressed as:
\[o_{t}=sigmoid\left(W_{o}x_{t}+U_{o}h_{t-1}+b_{o}\right), \tag{19}\]
The output hidden state \(h_{t}\) is computed as:
\[h_{t}=o_{t}\circ tanh\left(\widetilde{c_{t}}\right). \tag{20}\]
## V AD-TFM Based RNN with Attention for Incipient Fault Detection
In this section, we construct a RNN model for incipient fault detection based on the proposed AD-TFM cell. To focus the neural network on the global hidden information, we strengthen the stacked AD-TFM network by adding an attention layer. The hierarchical structure of AD-TFM-AT model is shown in Fig. 4.
The fault signal, consisting of three-phase voltage and current, input to the AD-TFM cells is encoded into a fixed-length hidden representation. At each time step, the hidden state output contains a different amount of fault information. Directly using the hidden state of the last time step of AD-TFM leads to insufficient attention to global hidden information. Therefore, the amount of fault information contained in the hidden state of each time step is quantified in the form of a similarity through the attention mechanism, and the final output is calculated by using the matching degree of the hidden state of each time step as a weight. In other words, we use the attention mechanism to extract the important information from the hidden states of all time steps and give it larger weights, so as to obtain more accurate fault feature vectors and improve the fault detection accuracy. Its specific implementation is as follows:
Let \(h_{i}\) represent the hidden layer vector containing the time series produced by AD-TFM. We convert \(h_{i}\) to \(u_{i}\) through a fully connected layer illustrated as:
\[u_{i}=tanh\left(Wh_{i}+b_{o}\right). \tag{21}\]
Then we calculate the similarity between \(u_{i}\) and the context vector \(u_{w}\), and convert it to a probability distribution \(\alpha_{i}\) through softmax function.
\[\alpha_{i}=\frac{exp(u_{i}^{T}u_{w})}{\Sigma_{i}exp(u_{i}^{T}u_{w})}. \tag{22}\]
The context information \(u_{w}\) can be regarded as the contribution of one time step data to the overall data, and the contribution of each \(u_{i}\) to \(u_{w}\) can be obtained by calculating the similarity between \(u_{i}\) and \(u_{w}\), where \(u_{w}\) is randomly initialized and obtained through training.
As \(\alpha_{i}\) represents the importance of the fault hidden state at each time step to the overall fault hidden state, we use \(\alpha_{i}\) as weights to sum the global \(h_{i}\) and obtain the tensor that expresses the fault type.
\[s=\Sigma\alpha_{i}h_{i}. \tag{23}\]
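A minimal sketch of the attention layer of Eqs. (21)-(23), assuming \(T\) time steps of \(D\)-dimensional hidden states; the weights, bias and context vector below are randomly initialised placeholders (in the model they are trained).

```python
import numpy as np

T, D = 12, 8                           # assumed number of time steps and hidden size
rng = np.random.default_rng(0)

H = rng.normal(size=(T, D))            # hidden states h_i from the stacked AD-TFM cells
W = rng.normal(size=(D, D))            # projection weights of Eq. (21)
b_att = np.zeros(D)                    # bias of Eq. (21)
u_w = rng.normal(size=D)               # context vector, randomly initialised then trained

U = np.tanh(H @ W.T + b_att)           # Eq. (21): u_i = tanh(W h_i + b)
scores = U @ u_w                       # similarity between each u_i and the context u_w
alpha = np.exp(scores) / np.exp(scores).sum()   # Eq. (22): softmax over time steps

s = alpha @ H                          # Eq. (23): attention-weighted fault feature vector
```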
## VI Data augmentation
### _Overview_
The incipient faults of power distribution systems are manifested as waveform distortion of the three-phase voltage and current sinusoidal signals at the moment of fault occurrence. We intercept the three-phase voltage and current data before and after the moment of fault occurrence as fault data. In Section III, we introduced the types of incipient fault signals in the power distribution system and showed the waveforms of the faults.
Meanwhile, training a neural network usually requires a large amount of data. However, the low incidence of incipient faults makes this a typical small-sample learning problem [14]. According to the characteristics of the voltage and current sinusoidal signals, we use two methods for data augmentation, i.e., phase switching and temporal sliding, to obtain a larger training dataset while keeping the characteristics of the fault data unchanged.
### _Phase Switching_
The single-phase grounding fault is one major incipient fault in power distribution systems, where the fault happens in one phase of the three-phase voltage and current data. The first data augmentation method we use is phase switching, which swaps the voltage and current data of the faulted phase with one of the remaining phases. This changes the phase in which the fault occurs but keeps the fault type unchanged, i.e., multiple samples can be obtained from one fault record.
Fig. 4: AD-TFM-AT model for incipient fault detection.
Taking a single-phase grounding fault as an example, we assume that the fault occurs in phase A, i.e., phase A voltage and current data contains fault information, and phase B and C voltage and current data are normal. Then, we swap the voltage and current data of the fault occurring in phase A with the normal data of phase B. In this way, the fault occurring in phase A becomes the fault occurring in phase B, and new data containing fault can then be obtained. Meanwhile, this operation does not change the characteristics of the single-phase grounding fault, e.g., the fault does not happen in two or more phases at the same time. We also exchange the voltage and current data of the fault occurring in phase A with the normal data of phase C.
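As a sketch of phase switching, the function below swaps the faulted phase of a three-phase voltage/current record with each healthy phase, yielding two additional samples per original record; the channel layout of the array is an assumption.

```python
import numpy as np
from typing import List

def phase_switching(sample: np.ndarray, fault_phase: int) -> List[np.ndarray]:
    """Swap the faulted phase with each healthy phase.

    `sample` is assumed to have shape (6, T): rows 0-2 hold the A/B/C phase
    voltages and rows 3-5 the A/B/C phase currents; `fault_phase` is 0, 1 or 2.
    """
    augmented = []
    for other in range(3):
        if other == fault_phase:
            continue
        new_sample = sample.copy()
        # Swap the voltage rows and the current rows of the two phases together,
        # so the fault "moves" to another phase but keeps its single-phase character.
        for offset in (0, 3):
            a, b = fault_phase + offset, other + offset
            new_sample[[a, b]] = new_sample[[b, a]]
        augmented.append(new_sample)
    return augmented
```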
### _Temporal Sliding_
Temporal sliding is also used to enlarge the amount of fault data, by sampling the original data containing the fault multiple times with different starting times. The starting times are selected with equal sliding intervals. Compared with the original data (i.e., the case of only one starting time), the amount of fault data is increased while the characteristics of the fault data are unchanged.
In particular, we select a window of a certain length \(H\) to intercept the fault data, and the window can pick different starting points when sampling the fault data. In order to enlarge the amount of fault data, we specify that the sampling window intercepts the data from one starting time point and then slides by \(T\) time points to intercept the data again. In this way, a fault can be intercepted multiple times and the amount of fault data is increased, thus achieving data augmentation. Fig. 5 illustrates the temporal sliding. With sliding windows at different starting times, one fault record is sampled multiple times within different windows. As a result, the amount of fault data increases, but the type of fault is not changed.
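Temporal sliding can be sketched as an overlapping sliding window over the recorded waveform; the window length \(H\) and sliding step \(T\) below are placeholder values.

```python
import numpy as np
from typing import List

def temporal_sliding(record: np.ndarray, window: int = 100, step: int = 10) -> List[np.ndarray]:
    """Cut overlapping windows of length `window` (H) every `step` (T) samples.

    `record` is assumed to have shape (channels, length) and to contain the fault
    transient; every window that fully fits inside the record is kept as one sample.
    """
    _, length = record.shape
    return [record[:, s:s + window] for s in range(0, length - window + 1, step)]
```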
## VII Experiments
To verify the performance of our proposed AD-TFM-AT neural network model, extensive experiments are conducted on two datasets. We use several evaluation metrics to assess the performance with and without data augmentation. We also perform ablation experiments to show the contribution of the adaptive wavelet transform and the attention mechanism.
### _Experimental Setup_
**Dataset and Analysis:** To train and test the proposed model, we use two datasets, a small Incipient Fault dataset in Power Distribution (IFPD) system from [14], and a relatively large dataset logged by State Grid Corporation of China in AnHui Province (SGAH). The IFPD dataset contains the Sub-cycle Incipient Fault (SIF), Multi-cycle Incipient Fault (MIF), Single-phase Grounding Fault (SGF) and High Resistance Grounding Fault (HRGF), each containing three-phase voltage and three-phase current data, with 82 sampling points per cycle. The waveforms are shown in Fig. 1a, Fig. 1b, Fig. 1d and Fig. 1c, respectively. The SGAH dataset contains the Inter Phase Short-circuit Fault (IPSF), Two-phase Ground Fault (TGF), Single-phase Grounding Fault (SGF), and Main Transformer Fault (MTF), each also containing three-phase voltage and three-phase current data, with 100 sampling points per cycle. Both datasets contain ground truth consisting of three-phase voltage and current data with fault labels. The waveforms of these faults are shown in Fig. 1e, Fig. 1f and Fig. 1d. We also make the SGAH dataset available to the public at GitHub.
**Evaluation Metrics:** To verify the performance of our proposed method, the following five metrics are adopted: accuracy, precision, recall, F1-score and the Receiver Operating Characteristic (ROC) curve. To calculate the accuracy, the fault detection results are compared with the ground truth. Accuracy is the ratio of the number of correct predictions to the total number of samples. Precision is the ratio of the number of samples correctly classified as a given fault to the total number of samples classified as that fault; the higher the precision, the better the performance of the model. Recall is the ratio of the number of samples correctly classified as a certain type of fault to the actual number of samples of that fault; the higher the recall, the fewer faults are incorrectly classified into other types. To balance precision and recall, we also calculate the F1-score. The performance of the proposed model is also evaluated by the Area Under the ROC Curve (AUC); the larger the AUC, the better the performance of the model.
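For reference, the listed metrics can be computed with scikit-learn as sketched below; the macro averaging over fault classes and the one-vs-rest AUC are assumptions about how the multi-class scores are aggregated.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """y_true/y_pred are integer fault labels; y_score holds per-class probabilities."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
        # One-vs-rest ROC AUC, averaged over the fault classes.
        "auc":       roc_auc_score(y_true, y_score, multi_class="ovr", average="macro"),
    }
```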
**Data Augmentation Evaluation:** To verify the validity of the proposed data augmentation methods, both datasets are divided into an original dataset and an augmented dataset. We then train the proposed AD-TFM-AT network on each and test the performance.
**Ablation experiments:** We conduct ablation experiments to show the performance of TFM, AD-TFM and TFM-AT. The TFM model is based on the LSTM, changing the forget gate to a joint forget gate and adding a wavelet transform with fixed scale and translation parameters. AD-TFM is based on TFM with a wavelet transform with learnable parameters but without the attention mechanism. TFM-AT is based on TFM with the attention mechanism.
Fig. 5: Illustration of temporal sliding.
**Comparison schemes:** To validate the classification performance of the proposed AD-TFM-AT, we use the following five comparison schemes. We train these models using both the IFPD and SGAH datasets with augmentation, and compare the evaluation metrics on the ground truth.
1. Support Vector Machines (SVM): The three-phase voltage and current data are input into a set of Gaussian kernel functions based SVMs, where each SVM detects one kind of faults. And the classification results of all SVMs are combined to achieve fault classification.
2. LSTM: The pre-processed three-phase voltage and current data are fed into a three-layer LSTM for learning. Then feature classification of the LSTM output is implemented by a fully connected layer.
3. Minirocket [46]: Multiple features of the three-phase voltage and current data are extracted using multiple convolution kernels whose weights are restricted to two deterministic values \(\{-1,2\}\). The multiple features are then used to train a linear classifier for fault detection.
4. SLI-CNN [19]: The three-phase voltage and current data are converted to synchronous Lissajous images as the input to a CNN. The CNN contains three convolutional layers and one fully-connected layer. Each convolutional layer consists of batch normalization, max-pooling, and dropout. The last convolutional layer connects to a fully-connected layer for classification.
5. HLCL [14]: The three-phase voltage and current waveform are decomposed into approximate shapes and residuals by Meyer wavelet, and then further decomposed into primitive and temporal relationships by Fast Fourier Transform (FFT). Finally, fault classification is realized by variable probability statistics and Bayesian hierarchical model.
**Implementation details:** The proposed model is implemented in Python 3.7, and the experimental code is available at GitHub2. The training parameter settings are shown in Table I. All the experiments are performed on four Nvidia Tesla V100 GPUs. We use one cycle of three-phase voltage and current data as a data packet.
Footnote 2: [https://github.com/smartlab-hfut/AD-TFM-AT-Model](https://github.com/smartlab-hfut/AD-TFM-AT-Model)
poor ability of the TFM model to detect this kind of fault. With the combination of the adaptive wavelet transform and the attention mechanism, the AUC of the TPF exceeds 0.90. Both TGF and SGF belong to ground faults, in which the three-phase voltage and current at the time of fault occurrence have similar characteristics: both show a voltage drop in the fault phase and distortion in the three-phase current. The AD-TFM and AD-TFM-AT models, which add the adaptive wavelet transform, extract three-phase voltage and current features with higher resolution than the fixed-parameter wavelet basis in TFM. On the other hand, the TGF and SGF fault durations are different, and the TFM-AT and AD-TFM-AT models with the added attention mechanism focus on the different time periods containing fault information, which TFM lacks.
From Fig. 10, we can see that the AD-TFM-AT model with the introduction of adaptive wavelet transform and attention mechanism has significantly improved in terms of precision, accuracy, recall and F1 score when tested on IFPD data.
Among them, the four evaluation metrics of the AD-TFM-AT model are 0.1 higher than those of the TFM model. From the ROC curves in Fig. 11, we can see that the AUC for HRGF of the AD-TFM, TFM-AT and TFM models is relatively low compared to other faults, while that of AD-TFM-AT is as high as 0.97. Therefore, the proposed method, by combining the adaptive wavelet transform and the attention mechanism, increases the depth of the network and improves the accuracy and generalization ability of the model for incipient fault detection.
The above results show that adding adaptive wavelet transform to extract fault features at different times and frequencies can well deal with the non-stationary characteristics of incipient faults such as TGF, and the characterization of the fault
Fig. 8: Ablation experiments results on SGAH dataset.
Fig. 6: ROC of AD-TFM-AT model on SGAH dataset. (a) SGAN original dataset. (b) SGAH augmented dataset.
Fig. 7: ROC of AD-TFM-AT model on IFPD dataset. (a) IFPD original dataset. (b) IFPD augmented dataset.
feature vector for key fault information is enhanced by the attention mechanism, thus enabling the proposed AD-TFM-AT model to achieve high accuracy fault identification.
**Comparison with existing methods:** The results of the comparison with existing methods are shown in Table IV.
From Table IV, we can see that our proposed AD-TFM-AT model has the highest metrics on both the SGAH and IFPD datasets; in particular, its accuracy reaches 0.99 and 0.97, respectively. In addition, on the SGAH dataset, Minirocket, SLI-CNN, and HLCL all achieve an accuracy of 0.96, while SVM has the lowest accuracy of 0.82. Besides, Minirocket also has a high recall of up to 0.97, and HLCL also performs well. Note that SLI-CNN only reaches 0.77 precision and 0.78 F1-score. On the IFPD dataset, the accuracy of the proposed AD-TFM-AT model reaches 0.97, which is the highest among all models. In addition, Minirocket's and SLI-CNN's accuracies reach 0.91 and 0.93, respectively, HLCL's and LSTM's accuracies reach 0.96 and 0.93, respectively, and SVM's accuracy is 0.85, which is the lowest. This is because TGF and SGF are both ground faults, so their three-phase voltage and current waveforms have similar characteristics: the voltage waveform shows a drop in two phases while the remaining phase voltage maintains a normal state. Therefore, the waveform features of these two faults obtained by using convolution on images are similar, which leads to low final classification accuracy and F1-score.
The above results show that, among the existing fault detection methods, AD-TFM-AT has the best performance. In contrast, the Minirocket method and the SLI-CNN method use convolution to extract features from the time series, which lacks an analysis of non-stationarity. Besides, the fault information they extract accounts for only a small component of the overall information, which may lead to wrong decisions. Therefore, it is not advisable to directly apply existing classification methods to fault classification. Compared to other methods, AD-TFM-AT performs an adaptive wavelet transform
Fig. 10: Ablation experiments results on IFPD dataset.
Fig. 9: ROC of ablation models on SGAH dataset. (a) ROC of AD-TFM-AT. (b) ROC of AD-TFM. (c) ROC of TFM-AT. (d) ROC of TFM.
on the fault waveform data to analyze its non-stationarity. In addition, AD-TFM-AT uses the attention mechanism to focus the global information on the fault. These make AD-TFM-AT the best performer on both datasets.
## VIII Conclusion
In this paper, we focus on incipient fault detection in power distribution systems and analyze the non-stationary characteristics of incipient faults. We propose an AD-TFM cell by embedding the wavelet transform into the LSTM, to extract features in the time and frequency domains from the non-stationary incipient fault signals. We make the scale and translation parameters of the wavelet transform learnable to adapt to the dynamic input signals and analyze incipient faults with multi-resolution and multi-dimension analysis. Based on the stacked AD-TFM cells, we design an AD-TFM-AT model to obtain more efficient fault features. In addition, we propose two data augmentation methods, namely phase switching and temporal sliding, to effectively enlarge the training datasets. Experimental results on two open datasets show that our proposed AD-TFM-AT model and data augmentation methods achieve better performance of incipient fault detection in power distribution systems.
## Acknowledgment
This work is supported in part by grants from the National Natural Science Foundation of China (52077049, 51877060, 62173120), the Anhui Provincial Natural Science Foundation (2008085UD04, 2108085UD07, 2108085UD11), the 111 Project (BP0719039).
Fig. 11: ROC of ablation models on IFPD dataset. (a) ROC of AD-TFM-AT. (b) ROC of AD-TFM. (c) ROC of TFM-AT. (d) ROC of TFM. |
2310.06210 | CAT-RRT: Motion Planning that Admits Contact One Link at a Time | Current motion planning approaches rely on binary collision checking to
evaluate the validity of a state and thereby dictate where the robot is allowed
to move. This approach leaves little room for robots to engage in contact with
an object, as is often necessary when operating in densely cluttered spaces. In
this work, we propose an alternative method that considers contact states as
high-cost states that the robot should avoid but can traverse if necessary to
complete a task. More specifically, we introduce Contact Admissible
Transition-based Rapidly exploring Random Trees (CAT-RRT), a planner that uses
a novel per-link cost heuristic to find a path by traversing high-cost obstacle
regions. Through extensive testing, we find that state-of-the-art optimization
planners tend to over-explore low-cost states, which leads to slow and
inefficient convergence to contact regions. Conversely, CAT-RRT searches both
low and high-cost regions simultaneously with an adaptive thresholding
mechanism carried out at each robot link. This leads to paths with a balance
between efficiency, path length, and contact cost. | Nataliya Nechyporenko, Caleb Escobedo, Shreyas Kadekodi, Alessandro Roncone | 2023-10-09T23:42:33Z | http://arxiv.org/abs/2310.06210v1 | # CAT-RRT: Motion Planning that Admits Contact
###### Abstract
Current motion planning approaches rely on binary collision checking to evaluate the validity of a state and thereby dictate where the robot is allowed to move. This approach leaves little room for robots to engage in contact with an object, as is often necessary when operating in densely cluttered spaces. In this work, we propose an alternative method that considers contact states as high-cost states that the robot should avoid but can traverse if necessary to complete a task. More specifically, we introduce _Contact Admissible Transition-based Rapidly exploring Random Trees_ (CAT-RRT)1, a planner that uses a novel per-link cost heuristic to find a path by traversing high-cost obstacle regions. Through extensive testing, we find that state-of-the-art optimization planners tend to over-explore low-cost states, which leads to slow and inefficient convergence to contact regions. Conversely, CAT-RRT searches both low and high-cost regions simultaneously with an adaptive thresholding mechanism carried out at each robot link. This leads to paths with a balance between efficiency, path length, and contact cost.
Footnote 1: Supplementary video and open source code [1].
## I Introduction
Robot behaviors are designed around the fundamental safety constraint of collision-free paths, as it ensures minimal physical interaction with the environment that could lead to robot error states or damage. However, this principle is oftentimes too limiting, as environmental constraints (e.g. tight spaces, areas with occlusion), perceptual constraints (e.g. narrow field of view, sensor inaccuracies), and operational constraints (e.g. maintaining a vertical cup orientation to avoid spilling) must also be accounted for while guaranteeing a collision-free path. As a result, a robot manipulator will likely fail to reach into a cluttered space due to the minimal clearance between the arm and the objects required to meet collision-free guarantees (see Fig. 1). Because motion planning is a fundamental component of a robot operating in the real world, having it restricted means significantly hindering robot capabilities; this limits the potential for robots to complete real-world tasks in unstructured or semi-structured environments such as harvesting fruit on a farm or picking items in a cluttered warehouse.
In this work, we are motivated by the idea that a binary collision test with a measure of whether the robot is in collision with the environment is insufficient to delineate the boundary between a valid or an invalid motion plan. Collision checkers provide the motion planner with a query function to test whether two geometric models overlap [2, 3]. Rather than invalidating any interactions between the robot and the environment, it is possible to evaluate them based on a continuous scale of object contact. This allows a robot to consider paths that would be discarded by traditional motion planning techniques while increasing success rate and enabling the robot to explore the environment through contact.
More specifically, in this paper we introduce a motion planner, Contact Admissible Transition-based Rapidly exploring Random Trees (_CAT-RRT_), that can generate paths in cluttered and unstructured environments by guiding the robot through states of admissible contact, which we define as contact necessary to reach the goal configuration. We are inspired by the literature in optimization-based motion planning [4, 5], which differs from traditional search-based motion planning in that it seeks to find a path that optimizes over a cost function. In particular, Transition-based Rapidly Exploring Random Tree (T-RRT, [6, 7]) uses the output of a cost function to increment or decrement a single global variable, called temperature, which is proportional to the likelihood of accepting high-cost states. CAT-RRT differs from T-RRT by not only using a set of temperatures, but also having each branch within the search tree adjust their own temperatures. This allows the planner to simultaneously propagate paths into low and high-cost regions based on multiple variables. We define cost with respect to proximity or contact with an obstacle, as shown in Fig. 1.
Additionally, in prior work cost functions are often defined to minimize travel distance [4], as short paths are a desired property [5]. In our work, we define a novel per-link cost heuristic which computes artificial repulsive and attractive
Fig. 1: CAT-RRT is an optimization planner which uses a per-link cost heuristic to generate a path in clutter by allowing contact to occur if it is necessary to succeed at the task. Rather than invalidating contact states or restricting motion for the entire arm (left), we propose a method that generates a path by prioritizing the least impacted links (right).
vectors, together forming an artificial potential field (APF) [8], for every arm link. This allows the planner to assign a different temperature to every link and prioritize motion of the links that are least impacted by vector repulsion, as shown in Fig. 2.
In summary, our contributions are: 1) a novel optimization planner which successfully generates feasible trajectories even when the robot may need to come in contact with an obstacle; 2) an APF-based per-link cost heuristic which prioritizes motion with links that are unrestricted by contact. We performed an extensive quantitative evaluation in simulation and a qualitative demonstration in the real world. Collectively, our results demonstrate that, while relevant literature struggles to generate any path into high-cost regions, CAT-RRT can consistently find feasible trajectories by gradually admitting contact one link at a time.
## II Related Work
In this section, we analyze three major approaches branching from optimization-based Rapidly Exploring Random Tree (RRT): informed approaches, stochastic approaches, and node-changing approaches. We give a brief overview of the representative planners we choose from each category to use as baselines for our work. We also summarize several works that explore contact admissible motion planning without a focus on optimization.
In the wake of success of sampling-based planners, RRT has been widely adopted due to its simplicity and efficiency [9, 10, 11]. However, because any feasible path is accepted without regard for path quality, it generally produces sub-optimal solutions [12]. To improve upon RRT, other works propose a method of prioritizing nodes that converge toward an optimal solution [12, 13]. Often, this is achieved with an informed heuristic during node creation or a modified acceptance test that uses path quality to bias nodes toward low cost regions. The most prominent of these is RRT*, which is used as one of the baselines in our evaluation. RRT* is an incremental sampling-based planning algorithm that maintains a tree without any "redundant" edges--edges that are not within the lowest cost path from the start to current node in the tree [13]. RRT*, like other tree refinement methods, has optimality guarantees. There are several existing modifications of RRT* as well, including using potential field-guided RRT* sampling, but these have not been tested on high-dimensional robot systems [14, 15, 16].
A new wave of batch-informed trees have been proposed, which focus on both efficiency and path quality [17, 18, 19]. One such planner is Batch-Informed Tree* (BIT*), used in our evaluation, which leverages a local optimization module to improve an initial path toward a local optimum. BIT* is probabilistically complete and has been shown to find solutions more often than other almost-surely asymptotically optimal planners. Other optimization methods include stochastic planning algorithms. One such example is Transition-based RRT (T-RRT), which propagates a tree search based on a stochastic optimization method with transition tests to accept or reject new states, but offers no optimality guarantees [6, 7]. Other works build on T-RRT to enable anytime behavior, bi-directional tree growth, and applicability to multi-agent systems [20, 21, 22, 23]. CAT-RRT shares a T-RRT-like optimization approach but with a unique transition test (detailed in Section III-B). To demonstrate this distinction, we use T-RRT as a baseline in our evaluation.
Several planners attempt to improve optimization efficiency by biasing sampled nodes based on a chosen direction
Fig. 2: Example scenario for per-link (top row) and whole-arm cost (bottom row) with a common start configuration (leftmost vignette). Objectβrobot contact is shown in orange and sampled states are grouped in gray scale depending on when the state was sampled, darker states are sampled later in time. The rightmost column depicts the planning space of both cost heuristics with green being a low cost area and orange being high cost. The per-link cost planner is able to find the goal location due to a reduced high cost area surrounding the goal location even though some links are in contact with the object.
[24, 25]. This strategy is desirable because the search can be moved in the direction of low-cost regions especially when guided by potential fields. Vector Field RRT (VF-RRT) does this through the Upstream Criterion, as defined in Eq. (6), which is used to bias sampling toward nodes that minimize the extent to which a path goes against a given vector field [26]. We chose VF-RRT to evaluate this strategy since it is highly applicable to potential field-based cost functions which our work relies on.
The following two papers are the closest to our work. [25] develops a potential field guided RRT* algorithm for the problem of fruit harvesting. It defines leaves as permeable obstacles, which the robot is allowed to come into contact with after incurring a cost. The authors use a combination of an RRT*-like approach with tree refinement and a VF-RRT-like approach with node biasing--both of which are evaluated in our experimental framework. Finally, [27] compares a potential field cost function as applied to T-RRT and other sampling approaches. The authors do not consider contact behaviors and rely on a simplified cost calculation between a single point on the robotic arm and an arbitrary obstacle point.
Finally, several works address contact admissibility and motion planning in the context of perception. Instead of using optimization, these works replace a binary collision-check function with a binary cost-based function [28, 29, 30]. They rely on a threshold that reflects how much contact a robot can make with an object. Such a threshold is challenging to define ahead of time for all environments. In contrast, our work uses an adaptive threshold mechanism.
## III Methods
In this section, we first outline the problem of path finding. Next, we describe how CAT-RRT plans a path while optimizing over a cost function using a set of temperatures and a transition test. Then, we describe how the temperatures are generated based on a separate cost for each link of the arm. Finally, we define additional cost heuristics from existing literature, which are used to evaluate CAT-RRT.
### _Problem description_
We use a similar definition of the planning problem as [18]. Let \(Q\subseteq\mathbb{R}^{n}\) be the state space of the planning problem. Let \(\mathbf{q}_{\text{start}}\in Q_{\text{free}}\) be the initial state of joint angles and \(Q_{\text{goal}}\subset Q_{\text{free}}\) be the set of desired goal states. Let \(\sigma:[0,1]\to Q_{\text{free}}\) be a continuous map to a sequence of states through a space of bounded variation that can be executed by the robot (i.e. self-collision free, feasible path) and \(\Sigma\) be the set of all such nontrivial paths. The optimal planning problem is then formally defined as the search for a path, \(\sigma^{*}\in\Sigma\), that minimizes a given cost function, \(c:\Sigma\rightarrow\mathbb{R}_{\geq 0}^{n}\), while connecting \(\mathbf{q}_{\text{start}}\) to \(\mathbf{q}_{\text{goal}}\in Q_{\text{goal}}\), where \(\mathbb{R}_{\geq 0}^{n}\) is the set of non-negative real numbers.
### _Motion planning with CAT-RRT_
CAT-RRT benefits from the exploratory strength of RRT-like algorithms that quickly expand toward large regions of unexplored space. Additionally, it integrates features of stochastic optimization methods from T-RRT-like planners, which use transition tests to accept or reject potential states. The main algorithm runs as follows: a random state, \(\mathbf{q}_{rand}\), is selected from the configuration space, which is a minimum distance away from \(\mathbf{q}_{near}\). A transition test function is used to evaluate \(\mathbf{q}_{rand}\). If it passes the transition test, then it is added to the tree, and the process repeats until a path to \(\mathbf{q}_{goal}\) is found. The main tree construction algorithm of CAT-RRT is defined in [6] and will not be reintroduced here for brevity. However, the transition test is unique to our approach and defined in Algorithm 1. First, we evaluate a vector of costs, \(\mathbf{C}\), for \(\mathbf{q}_{rand}\), with each cost corresponding to a link on the arm. Next, we obtain a vector of temperatures, \(\mathbf{T}\), stored in the nearest node of the tree. One link at a time, we evaluate and update the tree node based on a transition test. A transition test is passed if the link's cost, \(\mathbf{C}[i]\), is lower than its allowed temperature, \(\mathbf{T}[i]\), and the temperature is reduced unless it reaches a user-defined minimum value, \(t_{min}\). If all the links pass the test, then the temperature vector is stored in the child node of the added state. A failed transition test increases the temperature for the given link, thereby increasing the chance of a state sampled in that region to be accepted in the next iteration. Although previous works use an intermediate exponential function based on the Metropolis criterion to relate cost and temperature [6], we did not find this beneficial and opted for a direct relationship. In our algorithm, the temperature is synonymous to a dynamic cost threshold. Both \(\omega\) and \(\gamma\) are user-defined values that control the rate of temperature decrement and increment. Our source code provides more specifics on parameter tuning [1].
```
\(\mathbf{C}\leftarrow\) GetPerLinkCost(\(\mathbf{q}_{near},\mathbf{q}_{rand},\mathbf{q}_{goal}\))  \(\triangleright\) Eq. 4
\(\mathbf{T}\leftarrow\) GetTemperature(\(Node_{parent}\))
for \(i=0...L\) do
    if \(\mathbf{C}[i]<\mathbf{T}[i]\) and \(\mathbf{T}[i]>t_{min}\) then
        \(\mathbf{T}[i]-=\omega\)
    else if \(\mathbf{C}[i]>\mathbf{T}[i]\) then
        \(\mathbf{T}[i]+=\gamma\)
        return \(False\)
    end if
end for
StoreTemperature(\(\mathbf{T},Node_{child}\))
return \(True\)
```
**Algorithm 1** CAT-RRT Transition Test
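For concreteness, a minimal Python sketch of the per-link transition test is given below. It mirrors Algorithm 1 but is illustrative rather than the actual C++ implementation; the argument names (`costs`, `temps`, `omega`, `gamma`, `t_min`) are chosen here for readability and the default values are assumptions, not tuned parameters.

```python
import numpy as np

def transition_test(costs, temps, t_min=0.1, omega=0.05, gamma=0.1):
    """Per-link transition test sketched after Algorithm 1.

    costs and temps are length-L arrays (one entry per arm link).
    Returns (accepted, updated_temps).
    """
    temps = temps.copy()
    for i in range(len(costs)):
        if costs[i] < temps[i] and temps[i] > t_min:
            temps[i] -= omega          # cool the link that passed
        elif costs[i] > temps[i]:
            temps[i] += gamma          # heat the link that failed
            return False, temps
    return True, temps
```

When the test succeeds, the updated vector is what would be stored in the child node; on failure, the heated vector would be written back to the nearest node, so later samples in that region face a more permissive threshold.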
CAT-RRT differs from T-RRT in that, rather than having a global temperature parameter for all nodes, the temperature is stored at the parent node and inherited by the child node. Furthermore, rather than storing a temperature as a scalar for the entire robot body, we create a temperature vector where each link is independently represented. Consequently, the tree accepts or rejects nodes based on the temperature at every link. This results in CAT-RRT's two distinct properties: 1) each branch of the tree regulates its own temperature, and 2) each link on the robotic arm enters high cost regions
independently of the rest of the kinematic chain. When one link is in a high-cost region, it will stay in this position while the other links maintain low-cost positioning. In the real world, this equates to one link of the robotic arm maintaining contact with an object while the other links continue to traverse contact-free space. This is in contrast to planners that attempt to always maintain low costs throughout the arm, which may lead to scattered contact along a path. Fig. 2 summarizes how CAT-RRT converges to a goal state using discrete costs along the robotic arm. In the absence of obstacles, CAT-RRT's transition test is not invoked and the planner operates as RRT. Next, we describe how the robot's perception of the environment is converted to a cost for each link.
In this work, we try to step away from the dependence on high-resolution collision models for motion planning, as these are often unavailable for a robot operating in unstructured settings. However, each planner does require a basic understanding of the environment and the robot's position in space. To acquire this understanding, as detailed in Fig. 3, we assume the robot is equipped with a camera that relays depth perception information, as a point cloud, to the motion planning algorithm. The point cloud is converted to point obstacles, which are used as the basis for the planner's obstacle representation. Fig. 3 shows the original point cloud and point obstacles, \(p_{k}\in\Lambda\). Similarly, to ease reliance on 3D mesh models, the planner uses a set of control points to represent the robot and its position. The control points, \(p_{q}\in\Gamma\), are represented by green spheres on Fig. 3.
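As an illustration of this conversion, the following NumPy-only sketch reduces a point cloud to one point obstacle per voxel. It is an assumption-level stand-in for the PCL Voxel Grid filter used in the actual pipeline (Section IV), not the code used in the paper; the 0.05 m leaf size matches the value reported there.

```python
import numpy as np

def voxel_downsample(points, leaf_size=0.05):
    """Reduce an (N, 3) point cloud, already in the robot frame, to one
    representative point per voxel; each surviving point is treated as
    a point obstacle p_k."""
    voxel_idx = np.floor(points / leaf_size).astype(np.int64)
    _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    obstacles = np.zeros((counts.size, 3))
    for d in range(3):  # centroid of the points that fall into each voxel
        obstacles[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return obstacles
```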
### _Defining the cost heuristics_
#### III-C1 Controlling cost magnitude
Repulsive vector costs and unit magnitude costs serve as the basis of the cost heuristics defined in this paper. These costs are generated from the distance between point obstacles and robot control points. The vector magnitudes, \(\vec{\mathbf{v}}\), are scaled to be inversely proportional to the distance. Rather than opting for the traditional potential field equation introduced by Khatib et al. [8], which increases the repulsive force to infinity as the distance to obstacles becomes zero, we use a scaling function, \(\mathbf{S}\), shown in Eq. (1). Both \(a\) and \(b\) are scalar parameters, which allow us to control the magnitude of cost associated with contact. Here, \(a\) controls the maximum scaling value of \(\vec{\mathbf{v}}\) and \(b\) controls how fast the function converges to the maximum as \(||\vec{\mathbf{v}}||\) goes to zero.
\[\mathbf{S}(\vec{\mathbf{v}})=\frac{a*\vec{\mathbf{v}}}{b*||\vec{\mathbf{v}}||+1} \tag{1}\]
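A direct transcription of Eq. (1) is shown below. The default values of `a` and `b` are placeholders rather than the tuned parameters, which are only given in the released source code.

```python
import numpy as np

def scaled_repulsion(v, a=1.0, b=1.0):
    """Scaling function S(v) from Eq. (1).

    v is the obstacle-to-control-point vector.  The scalar factor
    a / (b * ||v|| + 1) is largest when the control point is close to
    the obstacle and decays inversely with distance, without the
    singularity of a classical potential field.
    """
    v = np.asarray(v, dtype=float)
    return (a / (b * np.linalg.norm(v) + 1.0)) * v
```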
#### III-C2 Per-link cost heuristic used with CAT-RRT
The per-link cost heuristic is an essential component of CAT-RRT as it guides the search tree. The desired vector at link \(l\), \(\vec{\mathbf{v}}\), defines the desired direction of motion in Cartesian space for every link of the arm. In Eq. (2), \(K\) is the number of point obstacles, \(N\) is the number of control points, \(L\) is the number of links, \(l\in[1,...,L]\) is the link index, \(p_{k}\) is the \(k\)th obstacle point, \(p_{q_{near},i}\) is the \(i\)th control point of the robot's \(\mathbf{q}_{near}\) state, \(p_{q_{goal},i}\) is the \(i\)th control point of the robot's goal state, and \(\alpha\) and \(\beta\) are scalar parameters.
\[\vec{\mathbf{v}}=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N}\sum_{i=1}^{ N}\alpha*\mathbf{S}(p_{q_{near},i}-p_{k})\\ +\beta*(p_{q_{goal},i}-p_{q_{near},i}) \tag{2}\]
The random directional vector \(\vec{\mathbf{d}}\) from \(\mathbf{q}_{near}\) to the uniformly sampled state \(\mathbf{q}_{rand}\) at every link is obtained as follows:
\[\vec{\mathbf{d}}=\frac{1}{N}\sum_{i=1}^{N}(p_{q_{rand},i}-p_{q_{near},i}) \tag{3}\]
The cost at every link \(\mathbf{c}\) is defined by Eq. (4), with lower costs indicating an alignment between the directional vector and the desired vector.
\[\mathbf{c}=(-\vec{\mathbf{v}})\cdot\vec{\mathbf{d}} \tag{4}\]
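Putting Eqs. (2)-(4) together, the cost for a single link can be sketched as follows. The control-point arrays and the weights `alpha` and `beta` are illustrative, and `scaled_repulsion` is the Eq. (1) sketch given above; this is not the exact implementation.

```python
import numpy as np

def per_link_cost(ctrl_near, ctrl_goal, ctrl_rand, obstacles,
                  alpha=1.0, beta=1.0):
    """Per-link cost of Eqs. (2)-(4) for one link.

    ctrl_* are (N, 3) arrays of the link's control points at q_near,
    q_goal and q_rand; obstacles is a (K, 3) array of point obstacles.
    """
    K, N = len(obstacles), len(ctrl_near)
    # desired vector (Eq. 2): repulsion from obstacles + attraction to goal
    v = np.zeros(3)
    for p_k in obstacles:
        for i in range(N):
            v += alpha * scaled_repulsion(ctrl_near[i] - p_k) \
                 + beta * (ctrl_goal[i] - ctrl_near[i])
    v /= (K * N)
    # random directional vector (Eq. 3)
    d = (ctrl_rand - ctrl_near).mean(axis=0)
    # cost (Eq. 4): low when d is aligned with the desired vector
    return np.dot(-v, d)
```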
Fig. 4 shows the components of the per-link vector field alignment cost heuristic, which guides the robot away from obstacles and towards goal locations using randomly sampled states. Next, we define the cost heuristics from previous work and how we implement them. These methods are used by state-of-the-art planners for comparison against CAT-RRT and the per-link cost heuristic.
Fig. 4: Robotβs initial configuration (\(\mathbf{q}_{near}\)) in white, goal configuration (\(\mathbf{q}_{goal}\)) in green, random sampled state (\(\mathbf{q}_{rand}\)) in purple, and a set of point obstacles (\(\Lambda\)) in red. The directional vector for link number six, \(\vec{\mathbf{d}}_{l_{6}}\), points from \(\mathbf{q}_{near}\) to \(\mathbf{q}_{rand}\). The desired directional vector for the link, \(\vec{\mathbf{v}}_{l_{6}}\), is a weighted sum between the vector from \(\Lambda\) to \(\mathbf{q}_{near}\) and the vector from \(\mathbf{q}_{near}\) to \(\mathbf{q}_{goal}\).
Fig. 3: The image on the left shows the robot in front of a point cloud of an object sitting on top of a table. The image on the right shows the same scene with an overlay of point obstacles in red and robot control points in green.
### _Comparison with state of the art_
#### III-D1 Obstacle overlap heuristic used with T-RRT, RRT*, and BIT*
The "permissible contact" planners [28, 29, 30] detailed in Section II evaluate the cost of a path based on the amount of overlap between the robot and the potential obstacles in the environment. In this work, the amount of obstacle-robot overlap is equivalent to adding up the vector magnitudes given from Eq. (1), which is implemented as the cost function \(\mathbf{C}\):
\[\mathbf{C}=\sum_{l=1}^{L}\left\|\frac{1}{K}\sum_{k=1}^{K}\frac{1}{N}\sum_{i=1 }^{N}\mathbf{S}(p_{k}-p_{q_{near},i})\right\| \tag{5}\]
This baseline cost heuristic is used to generate low-cost paths by T-RRT, RRT*, and BIT*.
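A sketch of this overlap cost, again relying on the `scaled_repulsion` helper from the Eq. (1) sketch, is given below; it illustrates the scalar cost handed to the baseline planners, though the exact code in our implementation differs.

```python
import numpy as np

def overlap_cost(ctrl_per_link, obstacles):
    """Obstacle-robot overlap heuristic of Eq. (5), summed over links.

    ctrl_per_link: list of (N, 3) control-point arrays, one per link,
    at q_near; obstacles: (K, 3) array of point obstacles.
    """
    total = 0.0
    for ctrl in ctrl_per_link:
        K, N = len(obstacles), len(ctrl)
        v = np.zeros(3)
        for p_k in obstacles:
            for p_q in ctrl:
                v += scaled_repulsion(p_k - p_q)
        total += np.linalg.norm(v / (K * N))
    return total
```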
#### III-D2 Upstream Criterion used with VF-RRT
As discussed in Section II, one approach to improve the convergence rate of sampling-based planners is to adjust the newly sampled nodes in the direction of a vector field [24, 25, 26]. To test this approach, we use VF-RRT with the Upstream Criterion [26]. The Upstream Criterion is defined as:
\[\int_{0}^{L}(||f(q(s))||-\langle f(q(s)),q^{{}^{\prime}}(s)\rangle)ds \tag{6}\]
where \(f(q(s))\) is a piecewise continuous vector field, \(||f(q(s))||\) is its norm, and \(\langle f(q(s)),q^{\prime}(s)\rangle\) is its inner product with the path tangent \(q^{\prime}(s)\). The function \(f(q(s))\) is not explicitly defined in the original paper and is left for the user to define based on a specific application. Since we are planning in robot configuration space, the output of \(f(q(s))\) must be a vector of joint angles. However, our robot and the environment are defined in Cartesian space by point obstacles and control points. To obtain a set of joint angles from a set of points in Cartesian space, we apply the pseudo-inverse of the Jacobian \(\mathbf{J}\) to Eq. (1), where \(\mathbf{J}_{l}^{\dagger}\) is the Moore-Penrose pseudo-inverse of the Jacobian \(\mathbf{J}_{l}\) at link \(l\).
\[f(q(s))=\sum_{l=1}^{L}\left(\mathbf{J}_{l}^{\dagger}\times\left(\frac{1}{K} \sum_{k=1}^{K}\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{S}(p_{k}-p_{q_{near},i}) \right)\right)\right) \tag{7}\]
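The following sketch shows one way to evaluate \(f(q(s))\) numerically with NumPy's Moore-Penrose pseudo-inverse. The per-link positional Jacobians are assumed to be supplied by the robot model, and `scaled_repulsion` is the Eq. (1) sketch from above; this is illustrative, not the VF-RRT code itself.

```python
import numpy as np

def upstream_vector_field(jacobians, ctrl_per_link, obstacles):
    """Sketch of f(q(s)) from Eq. (7) for the Upstream Criterion.

    jacobians: list of (3, n_joints) positional Jacobians, one per link;
    ctrl_per_link: list of (N, 3) control-point arrays at q_near;
    obstacles: (K, 3) point obstacles.  Each link's averaged Cartesian
    repulsion is mapped to joint space with the pseudo-inverse.
    """
    n_joints = jacobians[0].shape[1]
    f = np.zeros(n_joints)
    for J, ctrl in zip(jacobians, ctrl_per_link):
        K, N = len(obstacles), len(ctrl)
        v = np.zeros(3)
        for p_k in obstacles:
            for p_q in ctrl:
                v += scaled_repulsion(p_k - p_q)
        f += np.linalg.pinv(J) @ (v / (K * N))
    return f
```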
## IV Experiments
The experimental evaluation is performed in both simulation (Section IV-B) and the real world (Section V-B). The former allows for repeatable and reproducible experiments, while the latter shows the applicability of planning with contact in the real world.
### _Specifications of the experimental testbed_
Based on the discussion in Section II and the implementation in Section III, we evaluate: T-RRT, RRT*, and BIT*, which use the obstacle-robot overlap cost heuristic, VF-RRT, which uses the upstream criterion, and CAT-RRT, which uses the per-link cost heuristic. Each planner was allotted a maximum of 60 seconds to compute and refine a path. We believe this to be a reasonable amount of evaluation time and comparable to prior work--e.g. [18] used a 20 second limit for a similar 7 degree-of-freedom (DOF) problem to evaluate BIT* with limited compute power.
Each planner relies on a set of control points and point obstacles referred to in Section III-B. To obtain the control points, we extract 115 vertices from the robot's 3D mesh, openly available on the Franka Emika repository. To obtain point obstacles, we downsample a point cloud using the Point Cloud Library Voxel Grid [31] filter with a leaf size of 0.05m. The point cloud is then converted to the robot's frame of reference. Each of the resulting voxels, or values on a regular grid in 3D space, is considered as a point obstacle. In simulation, the point obstacles are added artificially to create example objects, represented by red orbs on Scenarios 1-4 in Fig. 5.
The planning algorithms are implemented in C++ with the ROS (Noetic) framework [32]. We use T-RRT, VF-RRT, RRT*, and BIT* within the Open Motion Planning Library [33] and integrate CAT-RRT within the library as well. We use MoveIt for simulation [34] and Rviz for visualization. A Franka Emika Panda is used as the robot platform and an OAK-D Pro [35] camera to capture the point cloud. All experiments were performed on a computer with an Intel i9 processor and 16GB RAM.
### _Simulated experimental scenarios_
For the simulated experiments, four scenarios of increasing complexity are designed--see Fig. 5. Scenario 1 evaluates if each planner can succeed in finding a path from a free low-cost start state to a free low-cost goal state in the presence of a single obstacle. This is a baseline scenario used to check fundamental path finding capabilities. In Scenario 2, the planner is asked to compute a path in which the robot's goal state is in contact with an obstacle. This scenario tests the planner's ability to plan into a high-cost region.
Fig. 5: In our evaluation, the robot is tasked with finding a path from the start state (white) to the goal state (green) while moving through obstacle regions (red) in four experimental scenarios of increasing complexity. The scenarios from left to right are increasingly more complex with obstacles overlapping with the start and goal states.
Scenario 3 includes two obstacles at the start state and one in the goal state, which tests the planner's ability to traverse between two high-cost regions. Finally, Scenario 4 is set up similarly to Scenario 3, but with an additional obstacle blocking the path away from the other obstacles, meaning the robot cannot break contact with the high-cost regions as in Scenario 3. This tests how the planner is able to modulate high-cost regions across the robotic arm.
### _Metrics for evaluation_
We evaluate each planner based on its ability to successfully generate a path within the allotted time. For the resulting trajectories, we measure the distribution of contact along the arm and the overall path length. These metrics represent the planner's ability to minimize contact cost while moving toward the goal. We run fifty trials for each planner in each scenario and average the metrics across the successful trials. Path length is calculated as the sum of the \(L^{2}\)-norm between the end-effector Cartesian points of the trajectory. For the simulated experiments, we measure the amount of contact along the trajectory by calculating the overlap between the 3D mesh of the arm and the obstacles. This is done through a collision post-processing step. First, we place spherical collision objects of the same size as the red point obstacle orbs into the robot scenario. Then, we run collision detection based on the Bullet Physics Engine on every state along the trajectory. The collision checker returns the number of states in collision and the contact depth, or penetration depth, between each overlapping robot-obstacle pair.
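For reference, the path-length metric reduces to a few lines of NumPy; `ee_points` is assumed to be the sequence of end-effector Cartesian positions along the trajectory.

```python
import numpy as np

def path_length(ee_points):
    """End-effector path length: sum of L2 norms between consecutive
    Cartesian points along the trajectory (the PL metric in Table II)."""
    ee_points = np.asarray(ee_points)
    return float(np.linalg.norm(np.diff(ee_points, axis=0), axis=1).sum())
```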
## V Results and Discussion
In this section, we summarize the results obtained after running the experiments outlined in Section IV. We demonstrate that T-RRT and VF-RRT struggle to navigate into high-cost regions with contact. While RRT* tends to prioritize shorter paths by incurring more contact, BIT* generates longer paths with less contact. In contrast, CAT-RRT tends to find a better balance between path length and contact depth.
### _Simulation experiments_
#### V-A1 Scenarios 1 & 2
All planners are able to find a path with no obstacle overlap for Scenario 1. However, the results from Scenario 2 demonstrate a significant rift in the capabilities of the planners, in that T-RRT and VF-RRT were able to compute a successful path 0/50 times while the other planners were able to find such a path 50/50 times. Table I summarizes the binary results of the planners in their ability to plan into high-cost regions.
The reason T-RRT struggles to find a path in Scenario 2 is because the global temperature variable is prohibitive in allowing the tree to explore high-cost regions. The temperature parameter is proportional to the probability of having a state accepted as a node in the tree. Fig. 6, right shows that the temperature drops early in the iteration phase because of the large number of samples generated in the low-cost space of the robotic arm. This prevents the states in the high-cost regions near obstacles from being accepted.
CAT-RRT does not perform any rewiring steps as the other planners do. A post-processing trajectory optimization step can smooth out the trajectory and reduce the higher average contact depth values for CAT-RRT in Scenario 3.
#### V-C3 Scenario 4
The results from Scenario 4 are also detailed in Table II. In this scenario, CAT-RRT outperforms the other planners in its ability to generate the shortest path, with the least contact, and in the least computation time. In Fig. 8, the contact penetration depth at every link is plotted across a sample trajectory generated by each planner. With only one peak, as opposed to two and three for BIT* and RRT* respectively, CAT-RRT demonstrates its ability to keep one link in contact while moving other links through free space. This concept is highlighted in Fig. 1. This is another reason for which the path length of CAT-RRT is shorter, as it can maintain contact with the obstacle at the base while moving perpendicular to the obstacle at the end-effector. In contrast, the other planners tend to produce contact more randomly along the links while searching for a minimum-cost path to the goal. This results in longer, higher-cost trajectories for the arm and each link.
### _Real-world demonstration_
To validate our simulation findings, we set up a real-world experiment that demonstrates a situation in which contact is harmless. More specifically, we show how the robot can reach for an object while making contact with a soft obstacle which overlaps with the goal state. Our supplementary video showcases the results [1]. Although the planning time of CAT-RRT remains a challenge for real-world operation, we believe this can be addressed with parallel computing and algorithm optimization.
## VI Conclusion and Future Work
This work is guided by the idea that planners can enhance their operational capabilities by reducing reliance on collision checking and increasing tolerance to contact with objects. We present a method that allows robots to intelligently plan for contact given a limited understanding of the environment. We show that our planner can successfully generate a path into high-cost regions with obstacles. Compared to other planners, which use a single cost for the entire arm, CAT-RRT can create shorter paths in less time using a per-link cost heuristic. In our future research, we aim to demonstrate how robots can help leverage more "action" in the "sense-perceive-act" paradigm [36]. To do so, we aim to tightly couple CAT-RRT with control ([37, 38]) to track contact during trajectory execution and perception to adjust planning costs based on object properties (e.g. hard or soft material). We believe that a robot that can plan and adjust for contact is better equipped to handle manipulation tasks in unstructured environments in the agricultural, industrial, and retail sectors.
\begin{table}
\begin{tabular}{|c|c c c c|c c c c c c c|c c c c c c c c|} \hline \multirow{3}{*}{**Method**} & \multicolumn{10}{c|}{**Scenario 3**} & \multicolumn{10}{c|}{**Scenario 4**} \\ \cline{2-13} & \multicolumn{2}{c}{**Path Metric**} & \multicolumn{2}{c}{**Total Contact Depth for Link (mm)**} & \multicolumn{2}{c}{**Path Metric**} & \multicolumn{2}{c}{**Total Contact Depth for Link (mm)**} \\ \cline{2-13} & S & T & PL & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & S & T & PL & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ & (50) & (s) & (m) & & & & & & & & & (50) & (s) & (m) & & & & & & & & & & & & \\ \hline
**RRT*** & 36 & 62.9 & 1.7 & 0. & 0. & 0. & 0.1 & 15.4 & 16.1 & 12.8 & 23.9 & 28.4 & 31 & 63.2 & 1.1 & 0. & 18.1 & 41.5 & 6.3 & 28.9 & 22. & 32.6 & 35.8 \\ \hline
**BIT*** & 50 & 60.0 & 3.9 & 0. & 0. & 0.2 & 20.6 & 3.2 & 8.1 & 18.3 & 21.5 & 50 & 60.0 & 1.8 & 0. & 23.2 & 43.8 & 10.7 & 30.1 & 29.7 & 28.7 & 24.5 \\ \hline
**CAT-RRT** & 50 & 18.7 & 1.2 & 0. & 0. & 0. & 18.3 & 0.2 & 5.5 & 13.9 & 25.4 & 50 & 15.9 & 1.2 & 0. & 24.6 & 51.2 & 5.6 & 10.2 & 13. & 20.8 & 22.9 \\ \hline \end{tabular}
\end{table} TABLE II: Experimental results of each planning algorithm for Scenarios 3 and 4. The path metrics include the number of successes out of 50 trials (S), the average time to compute the path within the allotted time budget of 60s (T), and the total path length of the end-effector (PL). All the metrics, including contact depth for each link, are averaged across the successful trials. The highlighted colors in Scenario 4 correspond to the maximum contact depth peaks in Figure 8.
Fig. 8: Contact depth at each link along one generated sample trajectory in Scenario 4. The peaks correspond to a high level of overlap between the link and the obstacle. Whereas RRT* and BIT* have three and two peaks each, CAT-RRT maintains one prolonged contact at a single link, achieving a faster convergence to goal with a shorter path length.
Fig. 7: Example trajectories for Scenario 3 generated by each planner. RRT* chooses to traverse the obstacle in front, BIT* first moves away from all obstacles before returning in the direction of the goal state, and CAT-RRT finds a low-cost path in between the two obstacles. |
2303.00981 | Differentiable Trajectory Generation for Car-like Robots with
Interpolating Radial Basis Function Networks | The design of Autonomous Vehicle software has largely followed the
Sense-Plan-Act model. Traditional modular AV stacks develop perception,
planning, and control software separately with little integration when
optimizing for different objectives. On the other hand, end-to-end methods
usually lack the principle provided by model-based white-box planning and
control strategies. We propose a computationally efficient method for
approximating closed-form trajectory generation with interpolating Radial Basis
Function Networks to create a middle ground between the two approaches. The
approach creates smooth approximations of local Lipschitz continuous maps of
feasible solutions to parametric optimization problems. We show that this
differentiable approximation is efficient to compute and allows for tighter
integration with perception and control algorithms when used as the planning
strategy. | Hongrui Zheng, Rahul Mangharam | 2023-03-02T05:22:18Z | http://arxiv.org/abs/2303.00981v1 | Differentiable Trajectory Generation for Car-like Robots with Interpolating Radial Basis Function Networks
###### Abstract
The design of Autonomous Vehicle software has largely followed the _Sense-Plan-Act_ model. Traditional modular AV stacks develop perception, planning, and control software separately with little integration when optimizing for different objectives. On the other hand, end-to-end methods usually lack the principle provided by model-based white-box planning and control strategies. We propose a computationally efficient method for approximating closed-form trajectory generation with interpolating Radial Basis Function Networks to create a middle ground between the two approaches. The approach creates smooth approximations of local Lipschitz continuous maps of feasible solutions to parametric optimization problems. We show that this differentiable approximation is efficient to compute and allows for tighter integration with perception and control algorithms when used as the planning strategy.
## I Introduction
Traditionally, the motion planning task for a car-like robot requires synthesizing trajectories online. The local planning task ultimately searches for a sequence of feasible control inputs given a desired local goal. This is naturally expressed as a parametric optimization problem. One can easily enforce constraints from vehicle dynamics or limits of operation range. However, even with efficient optimization solvers, solving potentially hundreds of optimization with different specifications online up to thirty times a second is still challenging. Massive look-up tables storing discretized solutions found by running the optimizations offline have been used to speed up the process online. However, the look-up table can only provide discrete approximations of the actual optimal control. Moreover, the trajectory generation remains single-threaded and requires high memory usage without specialized software implementation.
More recently, different efforts have tried to bypass solving optimal control online by creating end-to-end planners attempting to directly produce control inputs using sensor information [1, 2, 3]. End-to-end approaches provide the benefit of exposing gradient information for upstream and downstream processes (e.g., a neural network-based perception pipeline). Although these approaches show potential in efficiently generating local motion plans, they cannot enforce dynamic constraints without external help or guarantee the solutions' validity. We propose _Interpolating Radial Basis Function Networks_ (IRBFN), a differentiable trajectory generation method that produces dynamically feasible trajectories and can be efficiently parallelized. Unlike existing differentiable planners, the gradient information is available throughout the process from local goal to final states on the trajectory. Our approach also leverages the GPU for highly parallelizable computations for efficiency. One key contribution of our work is that it preserves dynamic constraints and approximates solutions from optimal control problems arbitrarily well if enough training samples are provided. Another key contribution compared to existing work is that the planner is differentiable with respect to _all_ possible parameters, e.g. gradients can flow from control or planning loss to the local goal selection. The scope of this paper only includes providing a theoretical contribution that enables the use of differentiable planners and provides an important error bound. As shown in Figure 1, differentiable planners blend the explainability of traditional modular planners and the scalability of end-to-end planners.
In the following sections, we'll present preliminary information on trajectory generation and Radial Basis Function Networks in Section III-A, define the interpolating RBFNs in Section III-C, discuss the error bounds of interpolation in Section III-D, and show benchmarks in Section IV.
## II Related Work
### _State Lattice Planners_
State lattice based motion planners based on clothoids [4] have been used on Autonomous Vehicles since the DARPA Urban Challenge [5] and continue to see success in highly unstructured and competitive environments as well [6]. It has also been shown that they can be easily integrated as differentiable planners [7, 8]. The sampling-based scheme that state lattice planners use is flexible for planning cost evaluation and quick trajectory optimization.
Fig. 1: Comparison between standard modular planner stack, differentiable planner stack, and end-to-end planner stack. Gradient information is not available between modules of a traditional modular planner, and is available in differentiable and end-to-end planners.
### _Data-driven Motion Planning_
One line of work advocates the use of a fully end-to-end trainable Autonomous driving stack and learning a network-based policy directly from a large amount of data [1, 2, 3]. These approaches boast the scalability that comes with using black-box algorithms. However, fully end-to-end approaches lack the explainability and interpretability of more principled and traditional motion planning approaches. In addition, verifiability and safety guarantees become extremely hard to inject into these algorithms due to the use of black-box approaches. In contrast, another line of work aims to find a middle ground between end-to-end pipelines and traditional motion planning [8, 9, 10, 11]. These approaches try to preserve interpretability in planning by using objectives provided by the prediction and detection modules. And use a joint backbone network that takes sensor observations directly to decision-making to utilize an end-to-end model.
### _Differentiable Modeling and Motion Planning_
More recently, approaches have focused more on applying differentiable algorithms to Autonomous driving. The idea is to make existing model-based algorithms differentiable. Differentiable modeling and simulation [12, 13, 14, 15, 16, 17] has been shown to help achieve a better synthesis of controllers and improve sample efficiency using Reinforcement Learning and other learning-based methods. Differentiable planning has shown promise in general planning task [18, 19, 20, 21] as well as vision-based planning tasks and planning under uncertainty [22, 23, 24]. Lastly, differentiable control [25, 26, 27, 28] has also shown performance on par with traditional control strategies and provides additional gradient information that could help improve upstream modules in the software stack. Our approach aligns with this theme closest by making model-based algorithms differentiable. IRBFNs preserve the interpretability of planning with state lattice planners and cubic spirals. It uses a differentiable function approximator that can get arbitrarily close to the ground truth to make the planning pipeline differentiable. Compared to approaches similar to [7], the gradient information is available for all planner parameters, e.g. gradients can flow from control or planning loss to the local goal selection policy. This enables true end-to-end training of the planner without sacrificing the properties of model-based algorithms.
### _Function Approximator in Dynamics_
The method used in this paper is also related to methods proposed by [29], where ANNs are used to create differentiable function approximations, and the approximation properties are studied. Similarly, [30] describes methods for characterizing non-linear plant models and controllers for these models in terms of RBFNs. However, neither of these provide an approximation error bound. Additionally, the training dataset in these works does not preserve the exact fitting of training data.
## III Methodology
### _Preliminaries_
#### III-A1 Trajectory Generation
We use clothoids as the parameterization of dynamically feasible trajectory for Ackermann steering vehicles because clothoids are posture continuous and can be represented as a polynomial. We represent the curvature of the trajectory as a cubic polynomial of the arc length \(s\):
\[\kappa(s)=a+bs+cs^{2}+ds^{3} \tag{1}\]
Following [31], we re-formulate the above cubic polynomial such that the parameters are the curvatures of four equidistant points along the trajectory:
\[\begin{split} a&=\kappa_{0}\\ b&=-\frac{1}{2}\frac{-2\kappa_{3}+11\kappa_{0}-18 \kappa_{1}+9\kappa_{2}}{s_{f}-s_{0}}\\ c&=\frac{9}{2}\frac{-\kappa_{3}+2\kappa_{0}-5\kappa _{1}+4\kappa_{2}}{(s_{f}-s_{0})^{2}}\\ d&=-\frac{9}{2}\frac{-\kappa_{3}+\kappa_{0}-3 \kappa_{1}+3\kappa_{2}}{(s_{f}-s_{0})^{3}}\end{split} \tag{2}\]
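A direct transcription of Eq. (2) into Python reads as follows; the argument names simply mirror the curvatures \(\kappa_{0},\ldots,\kappa_{3}\) at the four equidistant stations and the arc lengths \(s_{0}\), \(s_{f}\).

```python
def spiral_coefficients(k0, k1, k2, k3, s_f, s_0=0.0):
    """Cubic-spiral coefficients (a, b, c, d) of Eq. (2) from the
    curvatures at four equidistant arc-length stations."""
    ds = s_f - s_0
    a = k0
    b = -0.5 * (-2*k3 + 11*k0 - 18*k1 + 9*k2) / ds
    c = 4.5 * (-k3 + 2*k0 - 5*k1 + 4*k2) / ds**2
    d = -4.5 * (-k3 + k0 - 3*k1 + 3*k2) / ds**3
    return a, b, c, d
```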
To incorporate dynamic constraints into the generation process, we use the following kinematic dynamic model of the vehicle:
\[\begin{split}\mathbf{x}=[x,y,\theta,\kappa]&\quad \mathbf{u}=\left[v,\dot{\delta}\right]\\ \dot{x}(t)&=v(t)\cos\theta(t)\\ \dot{y}(t)&=v(t)\sin\theta(t)\\ \dot{\theta}(t)&=\kappa(t)v(t)\\ \dot{\kappa}(t)&=\frac{\dot{\delta}(t)}{L}\end{split} \tag{3}\]
where \(\mathbf{x}\) is the state vector consisting of the pose of the vehicle and the current curvature, \(\mathbf{u}\) is the input vector consisting of the vehicle's velocity and steering velocity input, and \(L\) is the wheelbase of the vehicle. At low steering angles, we can approximate the curvature of the vehicle with the steering angle: \(\kappa=\frac{\tan\delta}{L}\approx\frac{\delta}{L}\). Additionally, we introduce constraints on the initial and final states of the trajectory:
\[\begin{split}\mathbf{x}(s_{0})&=[x_{0},y_{0},\theta _{0},\kappa_{0}]\\ \mathbf{x}(s_{f})&=[x_{g},y_{g},\theta_{g},\kappa_{g}] \end{split} \tag{4}\]
The initial state constraint is trivial since it's determined by the current state of the vehicle. The final state is determined by the goal pose and goal curvature of the vehicle. Next, we rewrite the formula using the substitution \(v=\frac{ds}{dt}\) to find the ODEs with respect to the arc length. Dividing both sides by \(v\), the first three equations of the dynamics become the following.
\[\begin{split}\frac{dx}{ds}&=\cos\theta(s),\qquad x(s)=\int\cos\theta(s)\,ds\\ \frac{dy}{ds}&=\sin\theta(s),\qquad y(s)=\int\sin\theta(s)\,ds\\ \frac{d\theta}{ds}&=\kappa(s),\qquad\theta(s)=\int\kappa(s)\,ds\end{split} \tag{5}\]
Then by using Equation 1 as the curvature, we perform the following integrations to find the states on the trajectory.
\[\begin{split}\kappa(s)&=a+bs+cs^{2}+ds^{3}\\ \theta(s)&=as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\\ x(s)&=\int_{0}^{s_{f}}\cos\left(as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\right)ds\\ y(s)&=\int_{0}^{s_{f}}\sin\left(as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\right)ds\end{split} \tag{6}\]
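The position integrals in Eq. (6) have no closed form, so they are evaluated numerically. The sketch below uses a simple trapezoidal rule instead of the Simpson-rule quadrature mentioned in the text, which is sufficient to illustrate the state roll-out along the spiral.

```python
import numpy as np

def integrate_clothoid(a, b, c, d, s_f, n=200):
    """Numerically evaluate x(s), y(s), theta(s) of Eq. (6) on a grid
    of n arc-length samples, starting from the origin."""
    s = np.linspace(0.0, s_f, n)
    theta = a*s + b*s**2/2 + c*s**3/3 + d*s**4/4
    ds = np.diff(s)
    x = np.concatenate([[0.0],
        np.cumsum((np.cos(theta)[1:] + np.cos(theta)[:-1]) / 2 * ds)])
    y = np.concatenate([[0.0],
        np.cumsum((np.sin(theta)[1:] + np.sin(theta)[:-1]) / 2 * ds)])
    return x, y, theta
```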
We can then set up an optimization where the objective is to minimize the Euclidean distance between the goal pose \((x_{g},y_{g},\theta_{g})\) of the trajectory and the integrated pose at the final arc length:
\[\text{minimize}\quad\left|x(s_{f})-x_{g}\right|^{2}+\left|y(s_{f})-y_{g} \right|^{2}+\left|\theta(s_{f})-\theta_{g}\right|^{2} \tag{7}\]
Along with Equation 6 as the constraints for optimization, we can find the optimization variable \(q=[\kappa_{0},\kappa_{1},\kappa_{2},\kappa_{3},s_{f}]\). Following [4], the position quadrature gradients and Hessians can be efficiently calculated with Simpson's rule. Thus Newton's method can be used for optimization. Alternatively, Powell's method [32] could also be used to find the solution without the use of gradients. To enforce dynamic constraints, we clip the curvature allowed in Equation 6 to the actual physical limits.
#### III-C2 Look-up Tables
Using the optimization outlined in the previous section, a grid of local goals in the car's frame can create a look-up table (LUT) that stores the optimized parameters. However, efficient online planning using the LUT requires a high resolution of the look-up grid. In addition, points between the grid points can't be interpolated accurately using the stored values.
#### III-C3 Radial Basis Function Networks
Radial Basis Function Networks (RBFNs) [33] use a smooth function of the distance of an input to an origin in place of a sigmoidal function as a neuron in sigmoidal neural networks. We define RBFNs following the standard definition as follows.
**Definition III.1** (Radial Basis Function Networks).: An RBFN consists of two layers, a hidden layer with multiple RBF neurons and a linear layer. Each of the RBF neurons is centered around a predefined or trainable center where the distances are calculated.
\[\Phi(\mathbf{x})=\sum_{i=1}^{M}k_{i}\rho(||\mathbf{x}-c_{i}||) \tag{8}\]
\(M\) is the number of centers, \(c_{i}\)s are the centers for the hidden RBF layers, \(k_{i}\) are the weights for the linear layer. \(\rho\) is the smooth activation function chosen as the Radial Basis Function. In our use case, we use an inverse quadratic function as the kernel function: \(\rho(\mathbf{z})=\frac{1}{1+z^{2}}\). Usually, during training, the centers of the RBFN are chosen as the available data points in the training dataset for better approximations. We left the centers as trainable parameters during our training process.
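A plain RBFN forward pass with this kernel can be sketched as follows; the `weights` matrix plays the role of the linear layer and, in our use case, would be sized so that the output is the five spiral parameters. This NumPy sketch is illustrative of Eq. (8), not the trained FLAX model.

```python
import numpy as np

def rbfn_forward(x, centers, weights):
    """RBFN of Eq. (8) with the inverse quadratic kernel rho(z)=1/(1+z^2).

    x: (M,) input, centers: (num_centers, M),
    weights: (num_centers, out_dim) linear-layer weights.
    """
    z = np.linalg.norm(x - centers, axis=1)   # distances to the centers
    activations = 1.0 / (1.0 + z**2)          # inverse quadratic RBF
    return activations @ weights
```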
### _Problem Definition_
Given a local goal \([x_{g},y_{g},\theta_{g},\kappa_{g}]\) for a car-like robot, we define the trajectory generation task as creating a sequence of feasible control inputs \(\{\mathbf{a},\delta\}\), and corresponding poses and velocity profiles \(\{\mathbf{x},\mathbf{y},\theta,\kappa\}\) in the workspace, that takes the robot from its current pose to the local goal. We further formulate a parametric optimization problem to describe the generation process. From Section III-A1, we can fully describe a single trajectory with the polynomial parameters \([\kappa_{0},\kappa_{1},\kappa_{2},\kappa_{3},s_{f}]\). Thus the parametric optimization problem is:
\[\begin{split}\text{minimize}\quad&|x(s_{f})-x_{g}|^{2}+|y(s_{f})-y_{g}|^{2}+|\theta(s_{f})-\theta_{g}|^{2}\\ \text{subject to}\quad& x(s)=\int_{0}^{s_{f}}\cos\left(as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\right)ds\\ & y(s)=\int_{0}^{s_{f}}\sin\left(as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\right)ds\\ &\theta(s)=as+\frac{bs^{2}}{2}+\frac{cs^{3}}{3}+\frac{ds^{4}}{4}\\ & a=\kappa_{0}\\ & b=-\frac{1}{2}\frac{-2\kappa_{3}+11\kappa_{0}-18\kappa_{1}+9\kappa_{2}}{s_{f}-s_{0}}\\ & c=\frac{9}{2}\frac{-\kappa_{3}+2\kappa_{0}-5\kappa_{1}+4\kappa_{2}}{(s_{f}-s_{0})^{2}}\\ & d=-\frac{9}{2}\frac{-\kappa_{3}+\kappa_{0}-3\kappa_{1}+3\kappa_{2}}{(s_{f}-s_{0})^{3}}\end{split} \tag{9}\]
Then, ultimately, we frame it as a function approximation problem. Where we approximate a function \(f_{\mathrm{opt}}\) such that:
\[f_{\mathrm{opt}}\left([x_{g},y_{g},\theta_{g},\kappa_{g}]\right)=\left[\kappa _{0}^{*},\kappa_{1}^{*},\kappa_{2}^{*},\kappa_{3}^{*},s_{f}^{*}\right] \tag{10}\]
Where the function being approximated takes in the local goal and outputs the optimized parameters that describe the clothoid. In the next section, we'll introduce our proposed function estimator.
### _Interpolating Radial Basis Function Networks_
We introduce a modified RBFN that performs interpolation over a uniform finite grid of the domain. Together with a smooth indicator function, interpolating approximations from multiple RBFNs produces bounded approximation error. The interpolating RBFN consists of multiple RBFNs where each RBFN performs approximation over an orthotope partitioned by a uniform grid over the domain of \(f_{\mathrm{opt}}\) (shown in Figure 2). We define a smooth and differentiable indicator function that returns a scalar between 0 and 1 on each region in each dimension of the input vector.
**Definition III.2** (Smooth Indicator Function).: A smooth indicator function is a function \(\gamma_{n}:\mathbb{R}^{M}\rightarrow[0,1]\) defined on the orthotope \(r_{n}=[l_{1},u_{1}]\times\cdots\times[l_{M},u_{M}]\subset\mathcal{X}\).
\[\gamma_{n}(\mathbf{x})=\prod_{m=1}^{M}\left(\frac{\sigma\left( \zeta\left(\mathbf{u}_{m,n}-x_{m}\right)\right)+1}{2}\right) \tag{11}\] \[\times\left(\frac{\sigma\left(\zeta\left(x_{m}-\mathbf{l}_{m,n} \right)\right)+1}{2}\right)\]
where \(M\) is the dimension of the input vector, \(N\) is the number of orthotopes defined, \(\sigma\) is the tanh activation function, \(\zeta\) is a scalar parameter, and \(\mathbf{u}\) and \(\mathbf{l}\) are the bounds of the intervals that define the orthotopes.
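The indicator of Eq. (11) is a product of shifted tanh sigmoids, one pair per input dimension. A NumPy sketch for a single orthotope is given below; `zeta` is allowed to differ per dimension, as in the trained model described in Section IV, and the sketch is illustrative rather than the FLAX implementation.

```python
import numpy as np

def smooth_indicator(x, lower, upper, zeta):
    """Smooth indicator gamma_n(x) of Eq. (11) for one orthotope.

    x, lower, upper, zeta: (M,) arrays.  Each factor is close to 1
    inside the interval and decays smoothly to 0 outside, so the
    product is close to 1 only inside the orthotope.
    """
    upper_side = (np.tanh(zeta * (upper - x)) + 1.0) / 2.0
    lower_side = (np.tanh(zeta * (x - lower)) + 1.0) / 2.0
    return float(np.prod(upper_side * lower_side))
```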
The bounds of the intervals are defined to be multiples of \(\mathbf{s}\), which is the spacing of the partitioning grid over \(\mathcal{X}\). In addition, we define a controllable parameter \(\delta\in[0,1]\) as a function of \(N\) and \(\zeta\). An important property of the indicator function emerges by carefully choosing the values of \(\delta\) and \(\zeta\). At any point \(\mathbf{x}\in\mathcal{X}\), the value of \(\sum_{n}\gamma_{n}(\mathbf{x})\) can be made arbitrarily close to 1, thus creating smooth interpolation over all the orthotopes. Intuitively, the indicator function is designed such that the output of the interpolating RBFNs at the boundaries of the orthotopes in each dimension is from exactly one-half of the outputs of each neighboring RBF. The interpolation is performed before the final linear layer in the network. Thus, we define the Interpolating Radial Basis Function Network (IRBFN) as:
**Definition III.3** (Interpolating Radial Basis Function Network (IRBFN)).: \[\Phi_{\mathrm{interp}}(\mathbf{x})=\sum_{i=1}^{N}k_{i}\rho(||\mathbf{x}-c_{i} ||)\gamma_{i}(\mathbf{x})\] (12)
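Reading Eq. (12) together with the construction above, each region's RBFN output is gated by its smooth indicator and the gated contributions are summed before the final linear layer. The sketch below follows that reading and reuses the two previous helpers; it is illustrative only (the trained model in the paper uses 880 regions with 100 centers each).

```python
def irbfn_forward(x, region_params):
    """IRBFN of Eq. (12).

    region_params: list of dicts with keys 'centers', 'weights',
    'lower', 'upper', 'zeta', one per orthotope.  Relies on
    rbfn_forward and smooth_indicator from the sketches above.
    """
    out = 0.0
    for p in region_params:
        gate = smooth_indicator(x, p['lower'], p['upper'], p['zeta'])
        out = out + gate * rbfn_forward(x, p['centers'], p['weights'])
    return out
```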
### _Bounded Interpolation Error_
An important property of the interpolating RBFNs is bounded interpolation error. In the following section, we provide a proof with the following sketch: we first show that \(f_{\mathrm{opt}}\) is a computable function, hence continuous. Then we show that approximating a continuous function using interpolating Radial Basis Functions on a uniform grid yields a bounded error. First, we provide some important definitions following [34].
**Definition III.4** (Oracle Turing Machine).: An Oracle Turing Machine (TM) is an ordinary Turing Machine \(M\) equipped with an additional query tape and two additional states: the query state and the answer state. When the machine enters the query state, the oracle, a function \(\phi\), replaces the current string \(s\) in the query tape by the string \(\phi(s)\), moves the tape head back to the first cell of the query tape, and puts the machine \(M\) in the answer state.
**Definition III.5** (\(k\)-oracle Turing Machine).: A \(k\)-oracle TM \(M\) is an Oracle Turing Machine which uses \(k\) oracle functions \(\phi_{1},\ldots,\phi_{k}\). M can make queries to the \(i\)-th function \(\phi_{i},1\leq i\leq k\) by writing down \(\langle i,n\rangle\) on its query tape and the \(i\)-th oracle will answer by writing \(\phi_{i}(n)\) on the tape.
**Definition III.6** (Computability of functions).: A real function \(f:[0,1]^{k}\to R\) is _computable_ if there exists a \(k\)-oracle TM \(M\) such that for all \(x_{1},\ldots,x_{k}\in[0,1]\) and all \(\phi_{i}\in CF_{x_{i}},1\leq i\leq k\), \(M^{\phi_{1}},\ldots,M^{\phi_{k}}\) halts and outputs a dyadic rational \(d\) such that \(|d-f(x_{1},\ldots,x_{k})|\leq 2^{-n}\). Where \(CF_{x_{i}}\) denotes the Cauchy function or the set of all functions binary converging to \(x_{i}\).
**Definition III.7** (Local Lipshitz Continuity).: A function \(f\) is _locally Lipschitz continuous_ over a bounded domain \(\mathcal{X}\) if for \(\mathbf{u},\mathbf{v}\in\mathcal{X}\) there exists a finite \(K\) such that
\[||f(\mathbf{u})-f(\mathbf{v})||\leq K||\mathbf{u}-\mathbf{v}|| \tag{13}\]
Since \(f_{\mathrm{opt}}\) generates unique solutions with gradient descent given the same initial condition in polynomial time, and the solution exists universally, \(f_{\mathrm{opt}}\) is computable. Note that computable real functions were first formally defined by Grzegorczyk [35]. The Oracle TMs definitions are equivalent to the original definition. From [34, 35, 36], computable functions preserve Lipschitz continuity. Hence we can derive the following approximation bound for the interpolating RBFNs following [37, 38].
**Theorem III.1** (Bounded Interpolation Error on Finite Uniform Grid).: _The maximum interpolation error is uniformly bounded by the following equation:_
\[\begin{split}||\Phi_{\mathrm{interp}}(\mathbf{x})-& f_{\mathrm{opt}}(\mathbf{x})||_{\infty}<\frac{1}{N^{\alpha}}\left[L2^{ \frac{\alpha}{2}+1}s^{\alpha}\right.\\ &\left.+2^{\alpha/2}s^{\alpha}||\Phi_{\mathrm{interp}}||_{\infty}+ ||f_{\mathrm{opt}}||_{\infty}\right]\end{split} \tag{14}\]
\(N\) is the number of training samples, \(\alpha\) is the Hölder order, \(L\) is the Hölder constant, and \(s\) is the spacing between training samples, equivalent to the grid spacing we've defined. Note that since the function is locally Lipschitz continuous in our case, \(\alpha=1\). It is clear that as \(s\) goes to zero, implying a finer grid for the look-up table, the interpolation error goes to zero. By increasing the number of samples, the error can be made arbitrarily small.
### _Differentiability_
Using a differentiable region indicator function (Equation 11), the interpolating RBFN is fully differentiable. In the integration step, we use Autograd [39] to make the gradient available from the sampled local goals to the states on the trajectories. The availability of gradients through the trajectory generation pipeline in a white-box model provides benefits over black-box models in Model-based Reinforcement Learning [17]. Moreover, incorporating neural components significantly improves computational efficiency when synthesizing controllers for dynamic systems [16].
Fig. 2: Network architecture of IRBFNs in the trajectory generation use case. The network takes a set of local goals as input and outputs parameters of polynomials describing the desired clothoids.
## IV Experiments
### _Software Implementation and Training Dataset_
The trajectory generation pipeline is written in JAX [40] and FLAX [41] and can be found online at [https://github.com/hzheng40/irbfn](https://github.com/hzheng40/irbfn). The implementation makes use of just-in-time (JIT) compilation to speed up mathematical calculations. The model utilizes automatic vectorization (vmap) to create the multi-headed structure of the network. Finally, during the integration step, the implementation makes use of the Haskell-like type signature scan to eliminate for-loops with carryovers. The training dataset is generated using Newton's method using the Jacobians and Hessians found in [4]. The resolution of the look-up table is specified in Table I. All points are used as the training set since overfitting the available data is desired. The interpolating RBFNs use 100 trainable centers for each RBFN and 880 regions (RBFNs). The length of the intervals that defines the orthotopes (regions) is 1.0 meter in \(x\), 1.6 meters in \(y\), and \(0.39\) radians in \(\theta\). \(\zeta\) used in the indicator function are 15 for \(x\), 15 for \(y\), and 100 for \(\theta\). Finally, the network is trained using the Adam [42] optimizer with a learning rate of 0.001, MSE loss, and batch size of 2000. At 400 epochs, the average training loss over the entire dataset is 0.03107.
### _Benchmarks and Trajectory Generation Errors_
We run all benchmarks on a system with an NVIDIA RTX 2070 Super GPU and an AMD Ryzen 9 3900X CPU. We profile the generation of 500 trajectories with different goal points. The peak VRAM usage using our approach was 273.58 MiB. In 1000 evaluations with different random noises at each evaluation added to the goal points, our approach was able to achieve an update frequency of 230.08 Hz. Compared to optimizing for the polynomial solutions online at 3.25 Hz, our approach is a 70x+ speed up only using a small amount of VRAM. Figure 3 shows example outputs with goals set at \(x=5\) meters with various \(y\) and \(\theta\) values.
We also measure the average error between the generated trajectories' endpoints and the given goal points across 500 trajectories spanning a region of 2 to 6 meters in \(x\), -4 to 4 meters in \(y\), and -0.3 to 0.3 radians in \(\theta\). We compare the experimental error and the theoretical error bounds at the endpoints of the trajectories in Table II. The theoretical error bounds are obtained using Equation 14 and then propagated through the dynamics integration in Equation 6. We note that while the \(x\) and \(y\) experimental average errors are within the theoretical bound, the \(\theta\) error is above it. This could be due to intrinsic sensitivity in the \(\theta\) dimension, since the absolute values for \(\theta\) are much smaller than those of \(x\) and \(y\) while having the same resolution in the look-up table used as training data. Additionally, since our training error is not zero, the prediction error adds to the interpolation error, which gets compounded through integration when generating the states on the trajectories. The accuracy of trajectory generation also decays noticeably as the desired goal moves towards the edge of the available training data.
## V Limitations and Conclusions
**Limitations:** One of the limitations is the difficulty of filtering out invalid training data during the offline generation of the look-up table. In our experiments, we reject invalid trajectories by comparing the arc length and the corresponding goal's \(x\) and \(y\) coordinates. The percentage of valid trajectories can be improved by increasing the iteration limits in the optimization. However, this step has a tradeoff between quality and computation time. Another limitation is that this work doesn't show the possible improvement in planning using the gradient information provided by the differentiable pipeline. Future work can utilize a trainable goal selection policy and the IRBFNs in an end-to-end pipeline, i.e., model-based RL. Lastly, calculating the gradient information is noticeably more computationally intensive than inferencing on the trained network. Future work could benchmark and improve the efficiency of gradient calculations.
**Conclusions:** In this paper, we proposed a differentiable trajectory generation pipeline for car-like robots with interpolating Radial Basis Function Networks. Though using IRBFNs to implement a complete planner stack is out of the scope of this paper, our approach provides an important theoretical contribution that shows success in trajectory generation and a uniformly bounded interpolation error. In terms of computation efficiency, our implementation achieves
Fig. 3: Example trajectory output from IRBFN at \(x=5\)m, and at various \(y\) and \(\theta\) values.
a 70x+ speed up at 230+ Hz when generating 500 trajectories simultaneously compared to existing methods while only using a small amount of VRAM. In addition, the gradient information of every parameter is available throughout the pipeline.
## Acknowledgments
We thank Matthew O'Kelly for the initial ideation of the project and Joshua P. Reddy for their contribution to the initial experiments of the project.
|
2305.04869 | A review on Glueball hunting | One of the most direct predictions of QCD is the existence of color-singlet
states called Glueballs, which emerge as a consequence of the gluon field
self-interactions. Despite the outstanding success of QCD as a theory of the
strong interaction and decades of experimental and theoretical efforts, all but
the most basic properties of Glueballs are still being debated. In this talk, I
will review efforts aimed to understanding Glueballs and the current status of
Glueball searches, including recent experimental results and lattice
calculations. | Davide Vadacchino | 2023-05-08T17:17:09Z | http://arxiv.org/abs/2305.04869v1 | # A review on Glueball hunting
###### Abstract:
One of the most direct predictions of QCD is the existence of color-singlet states called Glueballs, which emerge as a consequence of the gluon field self-interactions.
Despite the outstanding success of QCD as a theory of the strong interaction and decades of experimental and theoretical efforts, all but the most basic properties of Glueballs are still being debated.
In this talk, I will review efforts aimed to understanding Glueballs and the current status of Glueball searches, including recent experimental results and lattice calculations.
## 1 Introduction
Quantum Chromodynamics (QCD) is believed to be the microscopic theory of the strong interaction. It has been very successful at explaining a wide range of experimental results, especially in the high-energy regime, where perturbation theory is applicable. As the target energy scale is lowered and the coupling grows, perturbation theory is no longer viable. The relationship between the degrees of freedom present in the Lagrangian density and the phenomenology becomes opaque. Despite the lack of a proof from first principles, confinement allows one to restrict the realized states to the color singlets. Hence, not only are meson and baryon states predicted, but also a plethora of states for which solid evidence has begun to surface only in the last few years. Ironically, it is one of the earliest predicted states, the _glueball_, that still awaits undisputed experimental confirmation.
Glueballs are quarkless color-singlet states of QCD. Their hypothetical spectrum and decay patterns have been the object of studies for more than 50 years, leaving an important footprint in the literature. Glueballs have proven to be very elusive objects: despite eclectic approaches, the community only agrees on their basic properties and an understanding of the link between their macroscopic properties and the underlying Yang-Mills dynamics is still missing. Yet, they are one of the most distinctive predictions of QCD and essential to the confirmation of every aspect of the theory.
In this talk, past and present efforts to determine the spectrum and decay patterns of Glueballs, an activity called "glueball hunting", will be reviewed. Phenomenological approaches, based on the intuition gained from the quark model, were historically the first to be developed and are reviewed in Section 2. More recent analytical approaches, deeply rooted in QCD and based on the wealth of results gained from modern computational techniques, are the focus of Section 3. Lattice calculations, the only first-principles fully non-perturbative approach capable of providing ready-for-comparison numbers, are reviewed in Section 4. Finally, a sample of the approaches to the experimental identification of a glueball is described in Section 5.
## 2 Phenomenological approaches
The first reference to a glueball is found in Ref. [1], where it is described as a state generated by a local color-singlet product of gluon fields \(G^{a}_{\mu\nu}\) with isospin and \(G\)-parity \(I^{G}=0^{+}\). The success of the minimal quark model suggests that we proceed by analogy, and build glueballs by progressively adding gluons to the system while ensuring symmetry under the interchange of its constituent gluons. The full classification can be found in Ref. [2]. The simplest states are obtained for 2 and 3 gluons. For 2 gluons, the only color-singlet operator is \(\mathrm{Tr}\ G_{\mu\nu}G_{\rho\sigma}\), where the trace is over the color indices, which have been omitted. The decomposition into irreducible representations of the Lorentz group results in,
\[\mathrm{Tr}\ G_{\mu\nu}G^{\mu\nu}\,\quad\mathrm{Tr}\ \tilde{G}_{\mu\nu}G^{\mu \nu}\,\quad\mathrm{Tr}\ G_{\alpha\nu}G^{\nu}_{\beta}-\frac{1}{2}g_{\alpha\beta} \mathrm{Tr}\ G_{\mu\nu}G^{\mu\nu}\, \tag{1}\]
where \(\tilde{G}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}G^{\rho\sigma}\) and \(g\) is the metric tensor. For 3 gluons, there are two color-singlet combinations,
\[f_{abc}G^{\mu\nu}_{a}G^{\alpha\beta}_{b}G^{\delta\sigma}_{c},\quad d_{abc}G^{ \mu\nu}_{a}G^{\alpha\beta}_{b}G^{\delta\sigma}_{c}\, \tag{2}\]
where \(f_{abc}\) are the structure constant of \(SU(3)\) and \(d_{abc}\) the related totally symmetric tensor.
Since a single gluon has \(j^{\pi}=1^{-}\), the enumeration is the following,
\[J^{PC}=\begin{cases}(even\geq 0)^{++}\\ (odd\geq 3)^{++}\end{cases}\quad,\quad J^{PC}=\begin{cases}(odd\geq 1)^{ \pm+}\\ (odd\geq 3)^{--}\end{cases}\quad, \tag{3}\]
for 2 and for 3 gluon states, respectively. Note that if the gluons are thought of as non-interacting and on-shell, the classification of possible states is analogous to the classification of two-photon states, see Ref. [3, 4], and no \(1^{-+}\) appears among the 2 gluon states.
Following Ref. [2], we can obtain a heuristic picture of the spectrum by assuming that the mass of each state is proportional to the dimension of the operator that creates it. Hence, the lightest states are \(0^{++}\), \(0^{-+}\) and \(2^{++}\), known as the scalar, pseudo-scalar and tensor glueballs, while glueballs with exotic \(J^{--}\) quantum numbers will be found at higher energies. There is no a priori reason to expect these states to be stable. In the limit in which flavor-breaking effects can be neglected, the decay widths \(\Gamma(J^{PC})\) are flavor agnostic and reproduce the branching ratios expected from \(SU(3)\) symmetry. Moreover, they obviously are not the only states with \(I^{G}=0^{+}\); as a consequence, nothing prevents them from mixing with \(q\bar{q}\) states of a similar mass, if there are any. Below, we will focus on the \(0^{++}\), \(0^{-+}\) and \(2^{++}\) states only, as they are plausibly the most accessible experimentally.
In the simplest approach, a glueball is a bound state of gluons of constituent mass \(\mu\), interacting through a potential. In Ref. [5], a (massless) one-gluon-exchange potential is obtained from the \(O(g^{2})\) two-gluon scattering diagrams prescribed by QCD, supplemented by a linear confining potential with slope \(\sigma_{a}\), the (adjoint) string tension. Clearly, the former is believed to describe short-range interactions, while the latter describes the long-range confinement property. The relative positions and splittings of 2-gluon states are then calculated in units of \(\mu\). The \((2n)^{\pm+}\) states are found to be degenerate at \(O(g^{2})\) level, and ordered according to the value of \(n\). Hence, the lightest states are \(0^{\pm+}\), followed by \(2^{++}\). In particular, \(m(0^{\pm+})=2.180\cdot 2\mu\) for the ground state, followed by \(m(0^{++,\star})/m(0^{\pm+})\simeq 1.4\) and \(m(2^{++})/m(0^{\pm+})\simeq 1.16\), where the \(\star\) indicates an excited state. The value of \(\mu\) is discussed, and it is recognized that when \(\mu\to 0\), the only remaining scale is \(\sqrt{\sigma_{a}}\), which is, unfortunately, inaccessible. Setting instead \(\sigma_{a}\simeq\sigma=0.4\,\mathrm{GeV}\), where \(\sigma\) is the fundamental string tension, one obtains \(m(0^{\pm+})\simeq 1.5\,\mathrm{GeV}\). As a result, \(m(2^{++})=1.74\,\mathrm{GeV}\).
In Ref. [6], a more sophisticated attempt is made at defining a constituent gluon mass. Naively, one would like to define a constituent mass as the pole mass of the gluon propagator. However, the latter is not physical, being gauge variant. Nevertheless, a rearrangement of the Feynman diagrams contributing to it can be defined that satisfies the Slavnov-Taylor identities of gauge invariance. See Ref. [7] for a recent discussion. The constituent gluon propagator \(d(q^{2})\) is defined by
\[d^{-1}(q^{2})=\beta_{0}g^{2}(q^{2}-\mu^{2})\ln\left[(4\mu^{2}-q^{2})\Lambda^{ -2}\right],\quad\mu^{2}(q^{2})=\mu^{2}\left(\frac{\ln{(q^{2}+4\mu^{2})}/ \Lambda^{2}}{\ln{(4\mu^{2}/\Lambda^{2})}}\right)^{-12/11}\,, \tag{4}\]
where \(\Lambda\) is a scale and \(\beta_{0}\) the leading coefficient of the QCD beta function. The mass \(\mu(q^{2})\) is a _dynamical_ constituent gluon mass that vanishes as \(q^{2}\to\infty\). The computation of a physical observable of known value then allows, in principle, solving for the coefficient \(\mu\). In Ref. [8], a string-inspired potential \(V(r)=2\mu(1-e^{-r/r_{0}})\), reminiscent of the one obtained in the Schwinger model, is considered, where \(\sigma_{a}=2\mu/r_{0}\). Its effects are supplemented with a potential obtained from
the QCD \(O(g^{2})\) (now massive) one-gluon-exchange scattering amplitude. Fixing \(\mu\simeq 0.5\,\mathrm{GeV}\), one obtains \(m(0^{++})\simeq 1.2\,\mathrm{GeV}\), \(m(0^{-+})\simeq 1.4\,\mathrm{GeV}\) and \(m(2^{++})\simeq 1.6\,\mathrm{GeV}\). Note that the value of \(\mu\) can be fixed in many different ways, also owing to the fact that gluons are never observed. In Ref. [9], for example, it is defined as half the energy stored in a flux-tube between two static sources transforming in the adjoint representation of the gauge group. One obtains, from Monte Carlo simulations of the lattice regularized theory at finite lattice spacing and in the strong coupling regime, \(\mu\simeq 0.52\,\mathrm{GeV}\).
A phenomenological and yet fully relativistic model is the MIT Bag Model, see Ref. [11], which was successful in providing a qualitative understanding of many different properties of hadrons in terms of just a few parameters. In this model, a hadron is a finite region of space of energy density \(B\), and its internal structure is described by quark and gluon fields. Confinement is introduced by imposing vanishing boundary conditions on the fields on the boundary of the bag. In Ref. [12], the bag is approximated as a static sphere of radius \(R\) and two families of eigenmodes of the free gluon field are found: the transverse electric (\(TE\)) and transverse magnetic (\(TM\)) modes of energy \(E=x_{i}/R\), where \(i=TE\) or \(TM\). Their quantum numbers are easily determined as \(x_{TE}=2.744\) and \(J^{J+1,C}\) for \(TE\) modes, \(x_{TM}=4.493\) and \(J^{J,-}\) for \(TM\) modes. In Ref. [13], the spectrum of glueballs was obtained by populating the bag with \(TE\) and \(TM\) modes. The low-lying glueball states are found for 2 or 3 gluons, in agreement with the qualitative picture introduced at the beginning of this section. For 2-gluon states we have \((TE)^{2}\) and \((TM)^{2}\) states, with \(J^{PC}=0^{++}\), \(2^{++},\ldots\) and \((TE)(TM)\) states with \(J^{PC}=0^{-+}\), \(2^{-+}\), \(\ldots\). For 3-gluon states we have \((TE)^{3}\) states with \(J^{PC}=0^{+-}\), \(1^{+-}\), \(1^{--}\), \(3^{+-},\ldots\). Note the absence of any \(1^{-+}\) state, which was argued, in Ref. [14], to describe the translation of the bag. As a consequence, this mode and its contributions to other combinations should be discarded. This makes it possible to exclude several states in the 2-gluon family that, in the constituent models discussed previously, would be excluded on the basis of the Landau-Yang argument. The three lightest states are then found in the \(0^{++}\), \(2^{++}\) and \(0^{-+}\) channels, with \(m(0^{++})=m(2^{++})=0.96\,GeV\) and \(m(0^{-+})=1.29\,GeV\). In Ref. [15], the effect of a (running) coupling between the modes is introduced. At leading order in the coupling in a static cavity, the masses are found to be \(m(0^{++})=0.67\,GeV\), \(m(2^{++})=1.75\,GeV\) and \(m(0^{-+})=1.44\,GeV\). For a non-static bag, the eigenvalues of its Hamiltonian must be related to the masses of the hadrons. In Ref. [16] the effects of this center-of-mass motion are taken into account and the bag constant \(B\) is computed from a model of the QCD vacuum. This leads to \(m(0^{++})=1.58\,GeV\), \(m(2^{++})=1.88\,GeV\) and \(m(0^{-+})=0.81\,GeV\).
In Refs. [17, 18], a model of hadrons is defined from the strong coupling limit Hamiltonian of lattice QCD. An analysis of the latter reveals that, in addition to the mesons and baryons of the ordinary quark model, its Hilbert space also contains glueballs, hybrids and other exotic states. In the sector with no quarks, excitations are generated from the vacuum by products of link operators on closed lattice paths. While the states in the strong coupling limit are not realized in continuum
QCD, it is argued that they form a complete basis of its Hilbert space. Hence, glueball states are superpositions of states generated by Wilson loops, and can be described in the continuum by a non-relativistic model of a vibrating (circular) ring of glue. The low-lying spectrum of excitations yields \(m(0^{++})=1.52\,GeV\), \(m(0^{-+})=2.79\,GeV\), and \(m(2^{++})=2.84\,GeV\).
A summary of the above predictions for the spectrum of glueballs with quantum numbers \(0^{++}\), \(0^{-+}\) and \(2^{++}\) channels is displayed in Figure 1.
## 3 Analytical approaches
In this section, two approaches based directly on the QCD Lagrangian are reviewed. The first is based on Shifman-Vainshtein-Zakharov (SVZ) sum rules and the second on Bethe-Salpeter equations (BSE) for multi-gluon bound states.
SVZ sum rules, see Refs. [19, 20] and Ref. [21] for a pedagogical review, allow us to improve our understanding of the non-perturbative regime of QCD. Measurable quantities like masses and decay constants of hadrons can be quantitatively related to the expectation values of local combinations of quark and gluon operators, known as _condensates_. The condensates encode the long-range properties of the QCD vacuum that are beyond the reach of perturbation theory. They cannot be calculated but, after their values are fixed from phenomenology, predictions can be formulated. The method of SVZ sum rules was very successful in evaluating the spectrum and decay rates of ordinary mesons and baryons. The subject of the sum rules are the time-ordered products at momentum \(q\),
\[\Pi(q^{2})=\imath\int\,\mathrm{d}^{4}x\ e^{\imath q\cdot x}\langle 0\,|T\left\{ J(x)J(0)\right\}|\,0\rangle\, \tag{5}\]
where the interpolating currents \(J\) generate glueball states with the desired quantum numbers from
Figure 1: The predictions on the spectrum of the \(0^{++}\)(scalar), \(0^{-+}\)(pseudo-scalar) and \(2^{++}\)(tensor) glueballs, from the phenomenological models reviewed in Section 2.
the vacuum. For the scalar, pseudo-scalar and tensor glueballs, the currents are,
\[J_{0^{++}} =\alpha_{s}{\rm Tr}\ G_{\mu\nu}G^{\mu\nu}\] \[J_{0^{-+}} =\alpha_{s}\epsilon^{\mu\nu\rho\sigma}{\rm Tr}\ \tilde{G}_{\mu\nu}G_{\rho\sigma}\] \[J_{2^{++}}^{\mu\nu} =-{\rm Tr}\ G^{\mu}_{\ \rho}G^{\nu\rho}+\frac{g^{\mu\nu}}{2}{\rm Tr}\ G_{\beta\alpha}G^{\beta\alpha}\,\]
note the presence of \(\alpha_{s}\), the strong coupling constant. The correlator \(\Pi(q^{2})\) can be calculated in perturbation theory at \(Q^{2}=-q^{2}\gg\Lambda_{\rm QCD}^{2}\) and can be related, through the optical theorem, to the spectral density \(\rho(s)=\operatorname{Im}\Pi(s)/\pi\), for \(s=q^{2}>0\), of states generated by the current \(J\). These two different regimes are related by the dispersion relation
\[\Pi(q^{2})=\frac{1}{\pi}\int_{s_{X}}^{\infty}{\rm d}s\frac{\rho(s)}{s-(q^{2}+i0)}+P(q^{2})\, \tag{6}\]
where \(s_{X}\) is the location of the first singularity of \(\Pi(s)\) on the real axis, and \(P(q^{2})\) is a polynomial that contains the subtractions that are necessary when \(\Pi(s)\) is divergent for \(s\to\infty\).
The sum rules are used as follows. An ansatz is made for the spectral density that encodes a simple physical picture, capturing our expectations for the sector probed by the current \(J\), and that contains the target observable quantities. The usual ansatz is
\[\rho(s)=\frac{1}{\pi}f_{X}^{2}\ \delta(s-m_{X}^{2})+\theta(s-S)\,\operatorname{Im}\Pi^{\rm QCD}(s)\, \tag{7}\]
where \(m_{X}\) is the mass of state \(X\), \(f_{X}=\langle 0|J(0)|X\rangle\) is its decay constant, and \(S>m_{X}^{2}\) is the threshold of energies above which the spectral density can be approximated by the perturbative one. In this regime, \(\Pi(q^{2})\) can be calculated in terms of quark and gluon fields at leading order, while the longer-range subleading contributions are obtained through the Operator Product Expansion (OPE). The condensates of appropriate local operators that appear in the OPE may be classified according to their mass dimension \(d\). For example, for the scalar channel, the leading condensate, \(\langle{\rm Tr}\ G_{\mu\nu}G^{\mu\nu}\rangle\), appears at \(d=4\). The contribution of higher dimensional operators can be included, and becomes quantitatively relevant at smaller values of \(Q^{2}\). In order to magnify the relative importance of the low-lying states in the spectral density, ideally in an energy range \(Q^{2}\sim 1\ GeV^{2}\), and to suppress the effects of higher powers of \(1/Q^{2}\) in the OPE, a Borel transformation is usually performed on both sides of the sum rule, Eq. (6). Matching this computation with the ansatz for the spectral density allows one to relate \(f_{X}\) and \(m_{X}\) to phenomenology and to determine their values, which can then be used to predict other quantities. Clearly, the final estimates of \(m_{X}\) and \(f_{X}\) depend on which contributions are used to compute \(\Pi(Q^{2})\) at large \(Q^{2}\), for example whether instantons are included, and on the value of the condensates.
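As a schematic illustration of how this matching is carried out in practice, the sketch below applies Borel-weighted moments to a one-resonance-plus-continuum ansatz of the type of Eq. (7) and recovers \(m_{X}\) once the modelled continuum is subtracted; the residue, threshold and continuum density used here are invented placeholders, not the OPE result of any of the cited works.

```python
import numpy as np

# Toy one-resonance-plus-continuum ansatz, cf. Eq. (7); all numbers are
# illustrative placeholders, not the OPE of any cited work.
f_X2   = 0.5          # residue f_X^2          [GeV^4]
m_X2   = 1.5**2       # resonance mass squared [GeV^2]
s0     = 2.3**2       # continuum threshold S  [GeV^2]
rho_pt = lambda s: 0.1 * np.ones_like(s)   # flat "perturbative" density

def borel_moment(M2, k):
    """k-th Borel moment  int ds s^k rho(s) exp(-s/M2)  of the full ansatz."""
    s = np.linspace(s0, 60.0, 40000)
    continuum = np.trapz(s**k * rho_pt(s) * np.exp(-s / M2), s)
    resonance = f_X2 * m_X2**k * np.exp(-m_X2 / M2)
    return resonance + continuum, continuum

# In a real analysis the left-hand side would be the Borel-transformed OPE;
# here it is replaced by the exact moment of the ansatz.  Subtracting the
# modelled continuum and taking the ratio of the k=1 to k=0 moments
# isolates m_X^2 inside the Borel window.
for M2 in (1.0, 1.5, 2.0):                      # Borel parameter [GeV^2]
    num, cont1 = borel_moment(M2, 1)
    den, cont0 = borel_moment(M2, 0)
    m_est = np.sqrt((num - cont1) / (den - cont0))
    print(f"M^2 = {M2:.1f} GeV^2  ->  m_X = {m_est:.3f} GeV")
```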
In Refs. [22, 23], the scalar and pseudoscalar currents were analyzed. The related glueballs were put in correspondence with the \(\eta^{\prime}\) state at \(1\ GeV\) and the \(\sigma\)-meson at \(0.7\ GeV\). The contribution of instantons was discussed and either not or only schematically taken into account. Differently from, e.g., the \(\rho\)-meson, the contribution of instantons at energy scales around \(1\ GeV\) seems non-negligible, and it is suggested that neglecting it will affect the prediction of the glueball masses. The matter is carefully analyzed in Refs. [24, 25], in which it is instead argued that the instanton contribution can be neglected. The masses of the scalar, pseudo-scalar and tensor glueballs are predicted to
be \(m(0^{++})=1.5(2)\,GeV\), \(m(0^{-+})=2.05(19)\,GeV\) and \(m(2^{++})=2.0(1)\,GeV\). In contrast, the effect of instantons is considered in Ref. [26] and the direct instanton contribution is evaluated in Ref. [27]. The calculation allows one to predict the mass of the scalar glueball as \(m(0^{++})=1.53(2)\,GeV\). A more recent and systematic calculation of the direct instanton contribution may be found in Ref. [28]. The masses of the scalar and pseudoscalar glueballs, which are the most affected, are estimated as \(m(0^{++})=1.25(2)\,GeV\) and \(m(0^{-+})=2.2(2)\,GeV\). In Refs. [29, 30], the authors analyze very carefully the set of currents to correlate, and include condensates up to dimension 8, but not the contribution from instantons. The masses of scalar, pseudoscalar and tensor glueballs are obtained as \(m(0^{++})=1.78(17)\,GeV\), \(m(0^{-+})=2.17(11)\,GeV\) and \(m(2^{++})=1.86(17)\,GeV\).
A different approach consists in using the Bethe-Salpeter formalism. In principle, this allows one to obtain information on bound states in a fully relativistic and non-perturbative manner. In practice, an infinite hierarchy of equations is involved, and approximations are needed to obtain results. In the Bethe-Salpeter equation (BSE), see Figure 2, one is interested in computing the amplitude \(\Gamma\), given an ansatz for the two-body irreducible scattering kernel \(K\) and the form of the quark, gluon and ghost propagators. In the absence of exact solutions for the latter, some kind of truncation is needed, whose eventual effects are generally hard to control. Clearly, the choice of truncation and of the propagators becomes the crucial aspect in determining the solidity of this method.
The first attempt at the calculation of the \(J=0\) glueball in the BSE framework can be found in Ref. [32]. Available lattice results on the behaviour of the gluon and ghost 2-point functions are used both to model vertices through their Schwinger-Dyson equations and to ensure the correctness of the resulting solution. These vertices are then used in truncated BSEs to predict the properties of glueballs. Assuming that the dressed version of the lowest order scattering kernel dominates the interaction, and taking as input the mass of the scalar glueball computed in lattice simulations, the mass of the pseudoscalar glueball was obtained as \(2.500(250)\,GeV\). In Refs. [31, 33, 34], the BSEs were solved in the Landau gauge, in the pure Yang-Mills case. The truncation scheme at 3-loops was consistent between the DSEs for 2-point functions and vertices, obtained from the 3PI effective action. There is no external parameter dependence apart from an overall scale. The latter can be fixed by comparison with lattice results. The scalar and pseudoscalar glueball masses were estimated
Figure 2: The coupled set of BSEs for two-body bound states of QCD. The blue, wiggly and dashed lines are the propagators for the quarks, gluons and ghosts, respectively. The circles are the Bethe-Salpeter amplitudes \(\Gamma\) and the boxes the scattering kernels \(K\). Taken from Ref. [31].
as \(m(0^{++})=1.850(130)\ GeV\), \(m(0^{-+})=2.580(180)\ GeV\) and \(m(2^{++})=5.610(180)\ GeV\).
The large-\(N_{c}\) approach plays an important role in that it allows one to relate and combine results obtained at different values of \(N_{c}\) with the phenomenologically relevant case \(N_{c}=3\). It is based on the observation that the calculation of amplitudes in Yang-Mills theories drastically simplifies when the number of colors \(N_{c}\) is taken to infinity keeping \(g^{2}N_{c}\) fixed. In particular, if the large-\(N_{c}\) theory is a confining theory, then it describes stable and non-interacting mesons and glueballs, as can be easily understood from the scaling properties of Feynman diagrams with \(N_{c}\). At \(N_{c}\) large but finite, it is possible to show that
\[\frac{m(J^{PC})}{\sqrt{\sigma}}=m(N_{c}=\infty)+\frac{c_{1}}{N_{c}^{2}}\, \tag{8}\]
where the coefficient \(c_{1}\) is independent of \(N_{c}\). This approach rests on the possibility of computing the value of \(m(N_{c}=\infty)\) and of \(c_{1}\), which can be achieved in several different ways. For a lattice oriented review, see Ref. [35]. Related approaches have been adopted in the context of the AdS/CFT correspondence. They differ in the specific duality chosen, in the way the breaking of conformal symmetry is implemented, and in the identification of glueball operators. In the recent Ref. [36], the masses of the scalar and tensor glueballs are obtained in the context of the graviton soft wall model, in which the glueball is associated with a graviton propagating in \(AdS_{5}\) space. The estimates of their masses are \(m(0^{++})=1.920\,\mathrm{GeV}\) and \(m(2^{++})=2.371\,\mathrm{GeV}\). For further results, see Section III.E of Ref. [37].
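Returning to the large-\(N_{c}\) extrapolation of Eq. (8), in practice one fits determinations of \(m(J^{PC})/\sqrt{\sigma}\) obtained at several values of \(N_{c}\) linearly in \(1/N_{c}^{2}\). A minimal sketch, with invented mass ratios used purely for illustration:

```python
import numpy as np

# Hypothetical dimensionless masses m(J^PC)/sqrt(sigma) at several N_c;
# the numbers below are invented for illustration only.
Nc   = np.array([2, 3, 4, 6, 8])
mass = np.array([3.95, 3.65, 3.55, 3.48, 3.45])

# Fit m(Nc) = m_inf + c1 / Nc^2, i.e. a straight line in x = 1/Nc^2, Eq. (8).
x = 1.0 / Nc**2
c1, m_inf = np.polyfit(x, mass, 1)   # slope = c1, intercept = m(Nc = infinity)

print(f"m(Nc=infinity)/sqrt(sigma) = {m_inf:.3f}")
print(f"c1                         = {c1:.3f}")
```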
A summary of the above predictions for the spectrum of glueballs with quantum numbers \(0^{++}\), \(0^{-+}\) and \(2^{++}\) channels is displayed in Figure 3.
Figure 3: The predictions on the spectrum of the \(0^{++}\)(scalar), \(0^{-+}\)(pseudo-scalar) and \(2^{++}\)(tensor) glueballs, from the phenomenological models reviewed in Section 2 (empty triangles) and from the analytical models reviewed in Section 3 (full triangles).
## 4 Lattice calculations
The numerical approach to lattice regularized quantum field theories is the only first-principles approach to the exploration of the non-perturbative regime of QCD. As such, it is the instrument of choice for the study of glueballs, in particular their spectrum and decay widths. In this section, estimates of the glueball spectral observables as they are usually obtained on the lattice are reviewed and the results present in the literature are discussed.
The mass of a glueball state can be calculated from the large euclidean-time behaviour of correlators of operators with appropriate quantum numbers. For zero-momentum projected operators, under very broad assumptions,
\[C(t)=\langle\Omega|O(t)O^{\dagger}(0)|\Omega\rangle=\sum_{n}|c_{n}|^{2}e^{-m_{ n}t}\, \tag{9}\]
where \(n\) labels the eigenstates of the Hamiltonian, the quantities \(|c_{n}|^{2}=|\langle n|O(0)|\Omega\rangle|^{2}\) are known as overlaps, and \(m_{n}\) are the masses in the channel with the same quantum numbers as the operator \(O\). If there exists an isolated ground state of mass \(m_{0}\) then, at sufficiently large \(t\), the sum in Eq. 9 will be dominated by \(|c_{0}|^{2}\exp{(-m_{0}t)}\). In principle, the mass can be obtained as,
\[m_{0}=-\lim_{t\to\infty}\frac{1}{t}\log C(t). \tag{10}\]
In practice, the computation of \(m_{0}\) with the above form for \(C(t)\) at finite values of \(t\) will be affected by the contamination of higher energy states. The average value above can be computed on the lattice as an ensemble average and is defined schematically as,
\[C(t)=\frac{1}{Z}\int\,{\cal D}[U]\det M[U]\,O(t)O^{\dagger}(0)\,e^{-S_{\rm YM} [U]}\, \tag{11}\]
where \(M[U]\) is the fermion matrix, \(S_{\rm YM}[U]\) is the action for the gluon field, and
\[Z=\int\,{\cal D}[U]\det M[U]e^{-S_{\rm YM}[U]}. \tag{12}\]
Many different choices are possible for both \(S_{\rm YM}[U]\) and \(M[U]\). For example, both isotropic and anisotropic discretizations can be defined, and different actions characterized by different discretization errors.
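As a minimal illustration of the mass extraction of Eq. (10), the sketch below builds a synthetic two-state correlator of the form of Eq. (9) and computes the effective mass \(m_{\rm eff}(t)=\ln[C(t)/C(t+1)]\), which approaches \(m_{0}\) from above once the excited-state contamination has died out; the masses and overlaps are arbitrary illustrative numbers, not lattice data.

```python
import numpy as np

# Synthetic two-state correlator C(t) = sum_n |c_n|^2 exp(-m_n t), cf. Eq. (9).
# Masses (in lattice units) and overlaps are arbitrary illustrative numbers.
m  = np.array([0.75, 1.60])       # a*m_0, a*m_1
c2 = np.array([1.00, 0.40])       # |c_0|^2, |c_1|^2
t  = np.arange(0, 20)
C  = (c2[:, None] * np.exp(-np.outer(m, t))).sum(axis=0)

# Effective mass: approaches a*m_0 from above as t grows, cf. Eq. (10).
m_eff = np.log(C[:-1] / C[1:])
for ti, me in zip(t[:-1], m_eff):
    print(f"t = {ti:2d}   a*m_eff = {me:.4f}")
```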
At finite lattice spacing, the states transform in irreducible representations of the symmetries of the system. These are known as _channels_. The channels are labelled by \(R^{PC}\), where \(R\) are irreducible representations of the octahedral group \(O_{h}\), \(P\) is spatial parity and \(C\) is charge conjugation. There are 10 possible channels, denoted by \(A_{1}^{\pm},A_{2}^{\pm},E^{\pm},T_{1}^{\pm},T_{2}^{\pm}\). Their relationship with the continuum channels \(J^{PC}\), which they become part of in the continuum limit, can be found in Table 1. States are generated from the (invariant) vacuum \(|\Omega\rangle\) by gauge-invariant combinations of link variables and quark fields. Two families of such operators are known: traces of path-ordered products along closed lattice paths \(U_{C}={\rm Tr}\ \prod_{l\in C}U_{l}\), and operators involving \(q\) and \(\bar{q}\) fields, \(\bar{q}U_{\cal L}q\), where \({\cal L}\) is a path connecting \(\bar{q}\) and \(q\). The channel to which an operator belongs is dictated by the transformation properties of its support under elements of the octahedral group. As charge conjugation simply amounts to inverting the ordering of the link operators along a path, the
representations with definite values of \(C\) are simply obtained by considering the real and imaginary parts of each \(R^{P}\) representation.
It was soon realized that glueball correlators are affected by a particularly severe signal-to-noise ratio problem. Two strategies have been proposed to overcome it. The first is based on the locality of both the Yang-Mills part of the action and the operator of interest. It is known as multilevel, and allows one to achieve an exponential reduction in the error of \(C(t)\) at large \(t\), see Refs. [38, 39]. Unfortunately, it rests on locality, and its use is limited to quenched theories. The second is known as the _variational method_[40, 41]. A _variational basis_ of operators \(\{O_{i}\}\) is defined in a given channel, and their correlation matrix is obtained,
\[C_{ij}(t)=\langle\Omega|O_{i}(t)O_{j}(0)|\Omega\rangle=\sum_{n=1}^{\infty}c_{n,i}c_{n,j}e^{-m_{n}t}\, \tag{13}\]
where \(c_{n,i}=\langle n|O_{i}|\Omega\rangle\). By a diagonalization of \(C_{ij}(t)\) at large \(t\), the ground-state mass \(m_{0}\) can in principle be obtained. In practice, because of the presence of statistical fluctuations, one instead solves the GEVP, \(C(t)v=\lambda(t,\,t_{0})C(t_{0})v\), at small \(t\), where \(v\) is a column vector, and \(\lambda(t,\,t_{0})=e^{-m_{0}(t-t_{0})}\). This amounts to finding the linear combination \(\Phi(t)=\sum_{i}v_{i}O_{i}(t)\) of the operators that maximizes the overlap onto the ground state in the channel. The mass can then be extracted from the large-time behaviour of their correlator. The great majority of the estimates of the spectrum have been obtained using the variational method. The efficacy of the method depends crucially on the choice of the _variational basis_. A sample of the closed loop operators usually included is displayed in Figure 4. It has proved very effective to add to the variational basis operators calculated on blocked and smeared configurations [42, 43, 44]. This allows a better overlap with ground state configurations, especially in the vicinity of the continuum limit, where they are expected to be smooth on the scale of \(a\). Moreover, as shown in Ref. [45], the construction of the variational basis can be automatized and the effect of scattering and di-torelon states that propagate in the correlator can be identified. Estimates of the glueball masses can thus be obtained at several values of the inverse coupling \(\beta\) and on several lattice geometries \(N_{s}^{3}\times N_{t}\) and, provided other sources of systematic error are addressed1, an infinite-volume continuum limit can then in principle be calculated.
Footnote 1: For example, the loss of ergodicity caused by topological freezing, see below.
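A minimal sketch of the GEVP step described above, on an exactly constructed \(2\times 2\) correlation matrix; in a real analysis \(C_{ij}(t)\) would be estimated on the Monte Carlo ensemble and \(\lambda(t,t_{0})\) fitted over a window in \(t\), so the numbers below are placeholders.

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic two-operator correlation matrix
# C_ij(t) = sum_n c_{n,i} c_{n,j} exp(-m_n t), cf. Eq. (13).
# Masses and overlaps are illustrative placeholders.
m = np.array([0.75, 1.60])                 # a*m_0, a*m_1
c = np.array([[1.0, 0.6],                  # c_{n,i}: rows = states, cols = operators
              [0.3, 1.0]])

def C(t):
    return sum(np.exp(-m[n] * t) * np.outer(c[n], c[n]) for n in range(len(m)))

t0, t = 1, 4
# Solve C(t) v = lambda(t, t0) C(t0) v; the largest eigenvalue behaves as
# exp(-m_0 (t - t0)), from which the ground-state mass follows.
lam = eigh(C(t), C(t0), eigvals_only=True)[::-1]   # descending order
m0_est = -np.log(lam[0]) / (t - t0)
print(f"a*m_0 estimate from the GEVP: {m0_est:.4f}")
```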
In quenched systems, where the fermionic degrees of freedom are infinitely massive and effectively static, the calculation of the glueball spectrum was one of the early successes of lattice QCD. It is nowadays one of the best known results obtained on the lattice, and a solid prediction
\begin{table}
\begin{tabular}{c|c c c c c} \(J\) & \(A_{1}\) & \(A_{2}\) & \(E\) & \(T_{1}\) & \(T_{2}\) \\ \hline
0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
2 & 0 & 0 & 1 & 0 & 1 \\
3 & 0 & 1 & 0 & 1 & 1 \\
4 & 1 & 0 & 1 & 1 & 1 \\ \end{tabular}
\end{table}
Table 1: In the top row, the representations of the octahedral group. In the left column, the \(J\leq 4\) representations of the Poincaré group. The elements 1 of the matrix correspond to representations \(R^{PC}\) that become part of representation \(J\) in the continuum limit.
from pure Yang-Mills theory. The spectrum can be calculated from the Wilson action on isotropic lattices, see Ref. [46], and from an improved action on anisotropic lattices, see Refs. [47, 48]. The high quality of these recent determinations of the spectrum rests on the careful identification of the target states and on the quality of the extrapolation to the continuum limit, and builds upon decades of efforts.
The spectrum as obtained in Ref. [48] is displayed in Fig. 5. The picture confirms the results obtained in the majority of models: the lightest channel is the scalar, followed by the tensor and the pseudo-scalar channels. The scalar glueball in Ref. [48] has a mass of \(1.710(80)\,GeV\), the pseudoscalar a mass of \(2.560(120)\,GeV\) and the tensor \(2.390(120)\,GeV\). In Ref. [46] the scalar glueball has mass \(1.651(23)\,GeV\), the pseudoscalar \(2.599(39)\,GeV\) and the tensor \(2.378(31)\,GeV\). These predictions are compatible with each other within 1-\(\sigma\). Note that the choice of physical observables used to set the scale will have an effect on the final estimate in \(GeV\) units. Estimates present in the literature are often expressed in units of the Sommer scale \(r_{0}\) or in units of \(\sqrt{\sigma}\). In recent investigations, results in units of the Gradient Flow scale \(t_{0}\) have started to appear.
Quenched systems are also an ideal testbed to investigate both the quantitative effects of other
Figure 4: A sample of the lattice paths on which the glueball operators are defined. Taken from Ref. [45].
Figure 5: (left) The quenched spectrum, taken from Ref. [48]. (right) A summary of the results obtained in the scalar, pseudo-scalar and tensor channels in quenched systems. The numerical values are taken from Refs. [47, 48, 49, 50].
sources of systematic error and to obtain the spectrum at larger values of \(N_{c}\) or for different gauge groups. For example, topological freezing affects simulations performed with periodic boundary conditions at small values of \(a\). The resulting loss of ergodicity might affect mass estimates, especially in the pseudoscalar channel. The problem has been analyzed at both \(N_{c}=3\) and larger \(N_{c}\), where the effect of topological freezing should be magnified, and found to be negligible, see Refs. [51, 52, 53, 54, 55]. Moreover, the spectrum was also evaluated at different values of the number of colors, see Refs. [45, 49, 56], and extrapolated to the limit \(N_{c}\to\infty\); the lattice thus allows the exploration of the spectrum at large \(N_{c}\), see Ref. [45]. Finally, the glueball spectrum was obtained for gauge theories based on other families of groups, see Ref. [57] for \(Sp\,(N_{c})\) gauge theories. The comparison of spectral data for different families of gauge groups makes it possible to analyze the degree of universality among Yang-Mills theories. In particular, the Casimir scaling and the universality of the ratio between the tensor and scalar glueball mass were analyzed in Refs. [58, 59].
A summary of the estimates of the spectrum in quenched lattice QCD is displayed in the right-hand panel of Figure 5.
The addition of dynamical fermions complicates the picture considerably. The vacuum is altered in a way that is difficult to predict. Indeed, there is no reason to think that the unquenched and quenched spectra are similar: the presence of sea quarks of sufficiently low mass makes glueballs unstable2, and the mixing with other iso-singlet states makes it impossible to determine the very _nature_ of the state under scrutiny. In other words, a glueball mixed with a \(q\bar{q}\) component is indistinguishable from a meson with a glueball component. However, the possibility of varying the quark mass parameters smoothly in lattice calculations affords us a crucial advantage, in that it allows us to tune the effects of the confounding phenomena above. In the regime of large quark masses, the system should be similar to its quenched limit, and the decay and mixing effects should be inhibited. Well defined glueball and quarkonium states are expected, and their masses should be calculable with the methods above. Reducing the sea quark mass smoothly reintroduces mixings and, for sufficiently small quark masses, decays.
Footnote 2: Torelons, often used to measure the scale, are affected by the same phenomenon.
Early studies on the unquenched glueball spectrum have been performed at \(N_{f}=2\), with different fermion discretizations, in a regime of heavy quarks, see Refs. [60, 61, 62]. It is observed that, surprisingly, the statistical error in the determination of the correlators is smaller than expected. In Ref. [61], the masses of scalar and tensor glueballs at \(N_{f}=2\) are obtained for Wilson fermions at \(m_{\pi}\simeq 0.490\,\text{GeV}\) and are compared with quenched results for similar values of the lattice spacing. They are found to be in agreement within errors. In Ref. [62], a similar calculation is carried out for non-perturbatively improved clover Wilson fermions at \(m_{\pi}\simeq 0.3-0.6\,\text{GeV}\). The tensor glueball mass is found to be in agreement with the quenched predictions, while in the scalar channel it is found to be suppressed by \(\sim 20\%\). As discussed by the authors, since this discrepancy seems to be independent of the value of the quark mass parameters, it might be explained as a lattice artifact introduced by the \(O(a)\) improvement.
The above results were obtained with only pure-glue operators in the variational basis. In Ref. [65], \(q\bar{q}\) operators were included in the variational basis and the mass of the flavor-singlet state was measured in a system with \(N_{f}=2\) flavors of clover improved Wilson fermions. A suppression was observed in the mass of the flavor-singlet scalar state with respect to the results of
Refs. [61, 62]. Note that no difference was found between the mass obtained from a purely gluonic variational basis and from a mixed one. In order to interpret this suppression, the same calculation was performed in Ref. [64] in two different ways. First, using the same action but on a finer lattice, and also using interpolating operators at non-zero momentum. Second, on ensembles generated by a _gauge_ improved Iwasaki action. In both cases, the suppression was still observed, see Figure 6. Two different interpretations thereof are put forward in Ref. [64]. The suppression could be an artifact of the lattice discretization, caused by the so-called "scalar dip", or it could be a genuine effect of _mixing_. The masses of glueballs in the scalar, pseudo-scalar and tensor channels were later obtained for \(2+1\) flavors of improved staggered fermions in Ref. [66], with a purely gluonic variational basis. Only a weak dependence on the lattice spacing was found and the values of the masses were found to be compatible with their values in the quenched continuum limit. The addition of scattering states to the variational basis did not alter this conclusion, see Ref. [67].
More recently, these masses were computed using a purely gluonic variational basis on an anisotropic lattice with \(N_{f}=2\) clover-improved Wilson fermions in Ref. [68]. No unquenching effects were detected on the masses in the pseudo-scalar and tensor channels. The scalar channel appears to have a slightly suppressed mass with respect to its quenched counterpart. However, no continuum limit is considered and more investigations are needed before relating this effect to mixing. The scalar glueball was the focus of Ref. [69], where, for the first time, two-hadron (\(\pi\pi\), \(K\bar{K}\) and \(\eta\eta\)), \(q\bar{q}\) and purely-gluonic operators were included in the variational basis. A single ensemble of \(N_{f}=2+1\) clover improved Wilson fermions was analyzed at \(m_{\pi}\sim 390\,MeV\), using the stochastic LapH method to evaluate all-to-all quark propagation. The aim of the authors was to understand whether, starting from the light hadron spectrum obtained from only linear
Figure 6: (left) In purple, the spectrum of glueballs for \(N_{f}=4\) clover improved twisted mass fermions at \(m_{\rm PS}\sim 250\) MeV; in green, the quenched spectrum from Refs. [46, 49]. Taken from Ref. [63]. (right) Quenched and unquenched spectrum of the scalar glueball at finite \(a\) and various values of \(m_{\pi}\). The green diamonds are from Ref. [62], the blue and purple crosses from Ref. [64], the red squares from Ref. [61]. The quenched results from Ref. [47] are represented as black circles. Taken from Ref. [64].
combinations of fermionic operators, additional states were observed to appear upon inclusion of glueball operators in the variational basis. Curiously, no new state appears within the energy range considered. This is an indication that further study is needed on the systematic effects introduced by the choice of the variational basis.
At this conference, a calculation of the scalar glueball mass with \(N_{f}=4\) clover improved twisted mass fermions was presented, see Ref. [63]. The low quark mass regime was explored, with \(m_{\pi}\sim 250\,MeV\). While in the pseudo-scalar and tensor channels the masses were roughly found to agree with the corresponding quenched values, a new light state was observed in the scalar channel. Notably, the masses of the first and second excited states were found to be similar to those of the ground state and first excited quenched glueballs, respectively. The spectrum is displayed in the left-hand panel of Figure 6. It is suggested that the new low-lying state is a \(\pi\pi\) or a \(q\bar{q}\) state. A similar calculation was performed for \(N_{f}=2+1+1\). The fact that the mass of the additional low-lying state was shown to depend strongly on \(m_{\pi}\) suggests that it might contain a large quark content. The above results illustrate the need to improve our understanding of the unquenched glueball spectrum, especially in the continuum limit. However, the most pressing questions are on the effects of mixing.
A summary of the estimates of the spectrum in unquenched lattice QCD at finite lattice spacing is displayed in Figure 7.
The formalism to study the effects of mixing on the spectrum was described in detail in Ref. [65]
Figure 7: A summary of estimates of the unquenched glueball spectrum. In light blue, the results from Ref. [61], in light orange and green, the results from Ref. [68], in red, the results in Ref. [70], in purple the results from Ref. [67], in brown, the results from Ref. [63], in cyan the quenched results from Ref. [48].
where the mixing is found to be substantial. The same analysis was improved and repeated at smaller lattice spacing in Ref. [64], with non-perturbatively improved clover fermions at \(a\sim 0.1\ fm\). In Ref. [66], an approximate value of the off-diagonal element of the mixing matrix was obtained in a system of \(N_{f}=2+1\) improved staggered fermions, with \(m_{\pi}\sim 280\ MeV\) and \(m_{\pi}\sim 360\ MeV\). While the behaviour of the mixed correlation function was in agreement with expectations, the data were still too noisy to draw any quantitative conclusion. Recently, the problem of computing the disconnected contribution to the mixed correlators has received some attention. In Refs. [71, 72], the distillation method was used in a system with \(N_{f}=2\) heavy quarks. In Ref. [73], \(N_{f}=2\) flavors of heavy quarks are considered on an anisotropic lattice. The mixing energy is computed from the explicit calculation of mixed glueball-quarkonia correlators. A mixing energy of \(49(6)\ MeV\) and a mixing angle of \(|\theta|=4.3(4)^{\circ}\) are obtained. A similar study was performed in Ref. [74] using distillation to compute the values of the disconnected diagrams involved in the mixing dynamics. A system of \(N_{f}=2\) flavors of clover improved fermions on an anisotropic lattice at \(m_{\pi}\sim 350\ MeV\) was considered. The mixing energy was found to be \(107(14)\ MeV\) and the mixing angle was estimated to be \(|\theta|=2.47(46)^{\circ}\); this is small enough to argue that the mixing effects can be largely ignored in the pseudo-scalar channel at \(N_{f}=2\).
The prediction of the decay width of glueballs from QCD is crucial to the comparison with experiments and is tightly related to the problem of determining the mixing between glueballs and other iso-singlet states. A glueball with a very small decay width would already have been identified, while a very broad resonance could remain out of reach forever.
The calculation of decay widths from lattice QCD is notoriously difficult and numerically very expensive. Despite the recent progress in computing decay widths for mesons using the Lüscher-Lellouch method, see Ref. [75], much remains to be done before this method can be fruitfully used for glueballs. The main difficulty in the case of glueballs lies in the fact that, since glueballs are iso-singlet states, studying their mixing with \(q\bar{q}\) states will necessarily involve the computation of correlators corresponding to disconnected diagrams. Decays of glueballs to a pair of pseudo-scalar
Figure 8: The spectrum of light hadrons without (left) and with (right) glueball operators in the variational basis, with \(m_{\rm ref}=2m_{\rm K}\). Taken from Ref. [69].
mesons were first studied in Refs. [76, 77]. The 2-point functions of purely gluonic states with two-body meson operators at zero and non-zero values of the back-to-back momentum were studied in the quenched approximation. The width of the decay to two pseudo-scalars was obtained as \(108(29)\ MeV\) and the total decay width was estimated to be smaller than \(200\ MeV\), which would make the glueball well identifiable in experiments. Related to the decay of glueballs to other states is the decay of the \(J/\psi\) to glueballs. In Refs. [78, 79, 80], estimates of the decay widths to a photon plus a glueball are given for the scalar, pseudo-scalar and tensor channels in the quenched approximation and on anisotropic lattices, using the formalism described in Ref. [81]. A width of \(0.35(8)\ keV\) is found for the scalar glueball, corresponding to a branching ratio of \(3.8(9)\times 10^{-3}\); a width of \(1.01(22)(10)\ keV\) for the tensor glueball, corresponding to a branching ratio of \(1.1(2)\times 10^{-2}\); and a width of \(0.0215(74)\ keV\) for the pseudo-scalar glueball, corresponding to a branching ratio of \(2.31(80)\times 10^{-4}\). The study of the potential between glueballs and their scattering is a related and relevant problem, e.g. for models of glueball dark matter. In this respect, see Refs. [82, 83].
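The branching ratios quoted above follow from dividing the radiative partial widths by the total \(J/\psi\) width; a quick cross-check, assuming a total width of about \(92.6\ keV\) for the \(J/\psi\) (an external input, not taken from the cited works):

```python
# Radiative widths Gamma(J/psi -> gamma + G) quoted above, in keV, and the
# corresponding branching ratios assuming a total J/psi width of ~92.6 keV
# (an external input used here only for the cross-check).
gamma_total = 92.6   # keV
partial = {"0++": 0.35, "2++": 1.01, "0-+": 0.0215}

for channel, width in partial.items():
    print(f"{channel}: BR = {width / gamma_total:.2e}")
```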
## 5 Experimental results
The detection of glueball states is one of the long-standing unsolved problems in hadron spectroscopy. Several strategies have been developed in order to identify a signal that could confirm their existence and a large number of studies have been conducted with that aim. For recent reviews, see Refs. [84, 85, 86].
The typical experimental signature expected from a glueball is the appearance of supernumerary states that do not fit into the \(q\bar{q}\) nonets of the minimal quark model. Their decay should exhibit branching fractions that are incompatible with those expected from \(SU(3)\),
\[\pi\pi:K\bar{K}:\eta\eta:\eta\eta^{\prime}=3:4:1:0\, \tag{14}\]
and their detection should be best visible in gluon-rich channels. The latter are processes in which the production mechanisms not involving gluons are suppressed, for example by the OZI rule. The two confounding effects that have prevented the identification of a glueball signal so far are the mixing mentioned above, on which, unfortunately, little is known, and the possible presence of decay form factors.
Three examples of gluon-rich channels that are currently under scrutiny in search of a glueball are displayed in Figure 9. The \(\bar{p}p\) annihilation is represented in the right-hand panel: a \(q\bar{q}\) pair can annihilate and a glueball may be formed. This reaction was studied at the Crystal Barrel experiment, see Ref. [87], and will be the focus of the future PANDA experiment at FAIR, see Ref. [88] for a discussion involving the scalar glueball. The diffractive scattering of hadrons, also known as double pomeron exchange, is represented in the centre panel. Since no valence quarks are exchanged, a glueball may form. This reaction was studied at the WA102 experiment. The radiative decay of the \(J/\psi\) resonance is represented in the left-hand panel. In this case, the OZI rule suppresses the decay into light quarks, and the interesting process is the decay to a (detectable) photon and a pair of gluons. The pair of gluons might form a glueball, and its decay to a pair of mesons might be identified. The BESIII experiment is currently collecting data on this process.
Below, the focus is on the scalar sector, which is the most controversial. The number of states observed under \(\sim 2.5\ GeV\) is still debated and so is their identification. The states in this channel are listed in Table 3. Among those, 9 are listed in the 2020 issue of the PDG. The \(f_{0}(500)\) and
\(f_{0}(980)\) have been mainly interpreted as molecular states. The states \(f_{0}(1370)\) and \(f_{0}(1500)\) were established by the Crystal Barrel experiment through their decays to \(\eta\eta\) and \(\pi^{0}\pi^{0}\). Neither has a large coupling to \(K\bar{K}\), and this would indicate that they cannot have a large \(s\bar{s}\) component. The results from the WA102 experiment are in agreement with this view. The state \(f_{0}(1710)\) was reported to decay predominantly to \(K\bar{K}\), indicating that it is predominantly \(s\bar{s}\). Clearly, not all of these states can be accommodated in the scalar \(q\bar{q}\) nonet, and one of them must be supernumerary. The fact that experiments at LEP have indicated that the \(f_{0}(1500)\) is practically absent from \(\gamma\gamma\to K\bar{K}\) and \(\gamma\gamma\to\pi^{+}\pi^{-}\) decays would suggest that it is predominantly \(s\bar{s}\), in contradiction with the above picture. The \(f_{0}(1500)\) is thus thought to be supernumerary. Yet the pattern of its decays to \(\pi\pi\), \(\eta\eta\), \(\eta^{\prime}\eta^{\prime}\) and \(K\bar{K}\) indicates that it cannot be a pure glueball either. This points to the idea that the \(f_{0}(1500)\) is a mixed state, partly glueball and partly \(q\bar{q}\).
Many studies revolve around the idea of mixing first explored in Ref. [89] for the scalar sector. Let \(|G\rangle\), \(|n\bar{n}\rangle\) and \(|s\bar{s}\rangle\), be the _bare_ states, where \(|n\bar{n}\rangle=(|u\bar{u}\rangle+|d\bar{d}\rangle)/\sqrt{2}\) and let \(M\) be the mass matrix,
\[M=\begin{bmatrix}M_{G}&f&\sqrt{2}f\\ f&M_{S}&0\\ \sqrt{2}f&0&M_{N}\end{bmatrix},\quad f=\langle s\bar{s}|V|G\rangle=\langle n \bar{n}|V|G\rangle/\sqrt{2}\, \tag{15}\]
and \(V\) is the potential generating the mixing. Then the observed states, i.e. the \(f_{0}(1370)\), \(f_{0}(1500)\) and \(f_{0}(1710)\), are the eigenstates of \(M\), with masses given by its eigenvalues.
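A minimal numerical sketch of this mixing scheme: diagonalizing \(M\) for illustrative values of the bare masses and of the coupling \(f\) (placeholders, not the fitted values of Ref. [89]) yields three physical masses and the glueball content of each state.

```python
import numpy as np

# Bare masses (GeV) and mixing strength f (GeV); placeholder values only.
M_G, M_S, M_N, f = 1.65, 1.55, 1.35, 0.10

# Mass matrix of Eq. (15).
M = np.array([[M_G,              f, np.sqrt(2) * f],
              [f,              M_S,            0.0],
              [np.sqrt(2) * f, 0.0,            M_N]])

masses, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
for mass, v in zip(masses, vecs.T):
    # The first component of each eigenvector is the |G> admixture.
    print(f"m = {mass:.3f} GeV,  glueball fraction = {v[0]**2:.2f}")
```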
Mixing also underlies the idea of a distributed glueball, put forward in Refs. [90, 91]. In this case, the various resonances in Table 3 are ascribed either to a singlet or to an octet of \(SU(3)\). Their assignment can, for example, be decided from their Regge trajectories. While octet mesons should not appear in radiative decays of the \(J/\psi\), they are abundantly produced. Moreover, singlets should be produced, but their yield shows a peak at \(\sim 1.865\,GeV\). This enhancement is interpreted as a scalar glueball mixed into the wave-function of _mainly_-octet and _mainly_-singlet mesons.
In particular, a coupled channel analysis of the decays of \(J/\psi\) into \(\gamma\pi^{0}\pi^{0}\), \(\gamma K_{S}K_{S}\), \(\gamma\eta\eta\) and \(\gamma\phi\omega\) indicates that a tenth state, the \(f_{0}(1770)\), is required to fit the invariant mass distributions of
\begin{table}
\begin{tabular}{l c c c c c c} res. & \(f_{0}(1370)\) & \(f_{0}(1500)\) & \(f_{0}(1710)\) & \(f_{0}(1770)\) & \(f_{0}(2020)\) & \(f_{0}(2100)\) \\ G fraction & \(5(4)\%\) & \(<5\%\) & \(12(6)\%\) & \(25(10)\%\) & \(16(9)\%\) & \(17(8)\%\) \\ \end{tabular}
\end{table}
Table 2: The glueball fraction found from a coupled-channel analysis of the decays of the \(J/\psi\) into \(\gamma\pi^{0}\pi^{0}\), \(\gamma K_{S}K_{S}\), \(\gamma\eta\eta\) and \(\gamma\phi\omega\). Taken from Ref. [90].
Figure 9: A sketch of three examples of Gluon rich processes. In the left hand panel, the radiative decay \(J/\psi\to\gamma mm\), where \(m\) is a meson. In the centre panel, pomeron-pomeron collision, and in the right hand panel, hadron central production. Taken from Ref. [84]
these decay products. The pole masses and widths were obtained with the K-matrix approach, and the glueball contents of the resonances could be determined. They are reported in Table 2. Note that the fractions only amount to \(\sim 78\%\), with a small fraction expected from \(f_{0}(2020)\) and \(f_{0}(2100)\). This indicates the presence of a "fractional glueball" and argues for a distributed scalar glueball with \(m(0^{++})=1.865(50)\,\mathrm{GeV}\) and decay width \(\Gamma\sim 0.370(50)\,\mathrm{GeV}\).
## 6 Conclusion
Glueballs are among the predictions of QCD that still await undisputed confirmation. Many different phenomenological models, more or less QCD inspired, predict masses that are similar to one another and in agreement with Lattice QCD results obtained in the quenched approximation,
\[m(0^{++})=1.6(1)\;GeV,\quad m(2^{++})=2.4(1)\;GeV,\quad m(0^{-+})=2.5(1)\;GeV, \tag{16}\]
Figure 10: In the left-hand panel, the yield of mainly-octet (circles) and mainly-singlet (squares) mesons in the radiative \(J/\psi\) decays, plotted as a function of the mass \(M_{\mathrm{res}}\) of the resonance. In the right-hand panel, the glueball content of the resonances as a function of \(M_{\mathrm{res}}\). The solid line is a Breit-Wigner distribution normalized to unit area, while the black squares are the values of \(\sin^{2}\phi_{n}^{s}\), where \(\phi_{n}^{s}\) is the mixing angle between non-glue states. Note that the distribution of yields and the distribution of the glueball across the resonances is the same. Taken from Ref. [90].
\begin{table}
\begin{tabular}{l|c c} name & \(m(\mathrm{MeV})\) & \(\Gamma(\mathrm{MeV})\) \\ \hline \(f_{0}(500)\bullet\) & \(400\to 500\) & \(480(30)\) \\ \(f_{0}(980)\bullet\) & \(990(20)\) & \(71(10)\) \\ \(f_{0}(1370)\bullet\) & \(1200\to 1500\) & \(200\to 500\) \\ \(f_{0}(1500)\bullet\) & \(1506(6)\) & \(112(9)\) \\ \(f_{0}(1710)\bullet\) & \(1704(12)\) & \(123(18)\) \\ \(f_{0}(1770)\) & \(1765(15)\) & \(180(20)\) \\ \(f_{0}(2020)\bullet\) & \(1992(16)\) & \(442(60)\) \\ \(f_{0}(2100)\bullet\) & \(2086^{+20}_{-24}\) & \(284^{+60}_{-32}\) \\ \(f_{0}(2200)\bullet\) & \(2187(14)\) & \(\sim 200\) \\ \(f_{0}(2330)\bullet\) & \(\sim 2330\) & \(250(20)\) \\ \end{tabular}
\end{table}
Table 3: The masses and decay widths of the ten scalar iso-scalar resonances considered in the text. The bullets indicate those that are confirmed in Ref. [92]. The additional resonance \(f_{0}(1770)\) was first proposed in Ref. [93] and is argued in Ref. [90] to be the tenth isoscalar resonance.
and \(\Gamma\simeq 0.1\;GeV\) for the decay width.
Unquenched results, which are numerically more expensive, are also affected by obscuring effects, mixing and decays, and a first-principles study that includes a continuum extrapolation at the physical pion mass is still lacking. Decay widths have been calculated but are affected by large systematic errors.
The experimental situation is debated. There is agreement on the presence of a supernumerary state in the scalar iso-scalar channel, but several scenarios, based on the idea of mixing, are equally plausible for the identification of the scalar glueball. Further information, coming from the currently running BESIII experiment, might prove useful.
## Acknowledgments
The author would like to thank B. Lucini, A. Athenodorou, V. Drach, C. McNeile, C. Bonanno, M. Peardon, A. Patella, G. Bali, and F. Knechtli for useful discussion.
|
2303.11654 | Mitigating climate and health impact of small-scale kiln industry using
multi-spectral classifier and deep learning | Industrial air pollution has a direct health impact and is a major
contributor to climate change. Small scale industries particularly bull-trench
brick kilns are one of the key sources of air pollution in South Asia often
creating hazardous levels of smog that is injurious to human health. To
mitigate the climate and health impact of the kiln industry, fine-grained kiln
localization at different geographic locations is needed. Kiln localization
using multi-spectral remote sensing data such as vegetation indices can result
in a noisy estimates whereas relying solely on high-resolution imagery is
infeasible due to cost and compute complexities. This paper proposes a fusion
of spatio-temporal multi-spectral data with high-resolution imagery for
detection of brick kilns within the "Brick-Kiln-Belt" of South Asia. We first
perform classification using low-resolution spatio-temporal multi-spectral data
from Sentinel-2 imagery by combining vegetation, burn, build up and moisture
indices. Next, orientation aware object detector YOLOv3 (with theta value) is
implemented for removal of false detections and fine-grained localization. Our
proposed technique, when compared with other benchmarks, results in a 21 times
improvement in speed with comparable or higher accuracy when tested over
multiple countries. | Usman Nazir, Murtaza Taj, Momin Uppal, Sara Khalid | 2023-03-21T07:54:58Z | http://arxiv.org/abs/2303.11654v2 | Mitigating climate and health impact of small-scale kiln industry using multi-spectral classifier and deep learning
###### Abstract
Industrial air pollution has a direct health impact and is a major contributor to climate change. Small-scale industries, particularly bull-trench brick kilns, are one of the key sources of air pollution in South Asia, often creating hazardous levels of smog that is injurious to human health. To mitigate the climate and health impact of the kiln industry, fine-grained kiln localization at different geographic locations is needed. Kiln localization using multi-spectral remote sensing data such as vegetation indices can result in noisy estimates, whereas relying solely on high-resolution imagery is infeasible due to cost and compute complexities. This paper proposes a fusion of spatio-temporal multi-spectral data with high-resolution imagery for detection of brick kilns within the "Brick-Kiln-Belt" of South Asia. We first perform classification using low-resolution spatio-temporal multi-spectral data from Sentinel-2 imagery by combining vegetation, burn, built-up and moisture indices. Next, an orientation-aware object detector, YOLOv3 (with a \(\theta\) value), is implemented for the removal of false detections and fine-grained localization. Our proposed technique, when compared with other benchmarks, results in a \(21\times\) improvement in speed with comparable or higher accuracy when tested over multiple countries.
## 1 Path to Climate Impact
Industrial air pollution has a direct health impact and is a major contributor to climate change. Unregulated small-scale informal industries spread across vast areas are common in resource-limited settings and can be difficult to locate and monitor. Remote identification of kilns and monitoring of their carbon production can assist air pollution surveillance, regulation, and climate mitigation efforts. The exact number and location of kilns are needed to understand and address the kiln sector's climate and health impacts.
## 2 Introduction
Vehicles and industries are considered among the major contributors to pollution, resulting in low air quality and smog Haque et al. (2022). According to one estimate, \(20\%\) of global black carbon emissions come from brick kilns Maithel (2014). These kilns are also a major source of employment; however, according to the Global Slavery Index of \(2019\), the "Brick-Kiln-Belt" of South Asia (stretching between Afghanistan, Pakistan, India, and Nepal) is home to approximately \(60\%\) (\(24.3\) million individuals) of modern-day slavery Landman & Silverman (2019). Keeping in view the UN's Sustainable Development Goals (SDGs) \(3.9\) and \(8.7\), which specifically aim to address air pollution and forced labor respectively, mapping brick kilns in the South Asia region is an essential first step.
Manual surveying of "Brick-Kiln-Belt" is infeasible due to the large geographical spread (\(1,551,997\ km^{2}\)) as well as geographic boundaries. Due to recent advancements in machine learning as well as availability of remote sensing data, automated surveys are now more commonly used for such large-scale analysis Redmon & Farhadi (2018); He et al. (2017); Li et al. (2018); Xie et al. (2016); You et al. (2017); Cotruto et al. (2018). Recently, remote sensing images have also been used to analyze the extent of modern slavery Boyd et al. (2018); Jackson et al. (2018); Misra et al. (2019); Foody et al. (2019). The "Slavey from Space" project Boyd et al. (2018) proposed a crowdsourced procedure to manually detect brick kilns from satellite imagery. They randomly sampled the potential kiln areas into \(320\) cells of \(100\) km\({}^{2}\) each. However, they were only able to manually annotate \(30\) geographic cells (i.e. only \(2\%\) of the entire Brick-Kiln-Belt). As a result, the manual crowd-sourced method lacks generalization and scalability as is evident from the resulting sparsely annotated maps. On the other hand, low-resolution multi-spectral satellite data has also been used to classify brick kilns in the region surrounding the Delhi state in India Misra et al. (2019). The analysis in this work was based on normalized difference vegetation index (NDVI) and transfer learning, which unfortunately is prone to generate a large number of false detections. In contrast, high-resolution satellite imagery has also been used to detect brick kilns to the east of Jaipur, which is the capital city of India's Rajasthan state Foody et al. (2019). This work used Faster R-CNNs to automate the process of brick kiln identification in the given tile of images. However, owing to the large computational complexity, this approach is difficult to apply at a large scale. Moreover, it yields a high false positive rate for which the study proposed to train a two-stageed R-CNN classifier to achieve acceptable performance which further increased the processing time. A more recent approach called KilnNet Nazir et al. (2020) combined inexpensive an classifier with object detector in a two-stage strategy to address the issue of time complexity. This approach too is only based on high resolution satellite imagery and is infeasible due to data acquisition and processing costs.
Here we also propose a two-stage strategy for automated detection of brick kilns. However, our approach is over \(21\times\) faster than state-of-the-art (SOTA) benchmarks and mostly relies on freely available low-resolution remote sensing data. Most existing object detection techniques are significantly less accurate on low-resolution satellite imagery, whereas computation is very costly for high-resolution satellite imagery. To overcome this, our proposed methodology decouples classification and localization. Classification is performed using spectral properties while localization is accomplished using an orientation-adapted detector. This results in a coarse-to-fine search in which fine-grained orientation-aware localization via object detection is performed as a second stage on less than \(10\%\) of the total region. This results in a \(21\times\) improvement in speed in addition to an improvement in accuracy. We tested our approach on three countries (Pakistan, India and Afghanistan) and showed that it is scalable as well as generalizable to varying structural, environmental and terrain conditions.
This paper has the following four technical contributions: **(i) Fusion of spectral indices**: Classification is performed using a mixture of spectral indices, as shown by Equation 1 in the paper. **(ii) Fusion of low resolution and high resolution imagery**: Our proposed approach processes input from low-resolution sensors to generate potential candidates for kilns, which are then filtered using high-resolution input via orientation-aware YOLOv3. **(iii) Processing large datasets**: The proposed strategy reduces the computational burden associated with processing of large datasets by fusion
Figure 1: Illustrative example of working of the proposed approach. In the first step we apply a rule based classifier on spectral indices (NDVI, EVI, NDMI, NDBI, BAI) on regions of interest to classify brick kilns as shown in the heatmap (Satellite images courtesy Google Earth).
of low resolution and high resolution imagery. SOTA benchmarks take on average \(674\) seconds to process the three datasets in Table 1 of the paper. Our proposed approach, on the other hand, reduces this compute time to only \(38\) seconds. **(iv) Detection of other objects**: Our multi-sensor and multi-stage strategy can also be used to detect other objects that have differentiable spectral signatures and well defined shapes, e.g. industrial units, oil tanks, warehouses, tennis courts, parking lots etc. Here we have only demonstrated its application for geo-localization of brick kilns.
## 3 Proposed Methodology
Brick kilns are typically identifiable through a visual inspection of satellite imagery. However, while considering a large geographic area, several inherent complexities in satellite imagery make automated detection of brick kilns a challenging task. These include, but are not limited to, i) variations in imaging sensors, ii) differences in kiln structure across the countries, iii) dynamic kiln surroundings and iv) variations in luminosity, seasonal changes, and pollution levels etc. It is particularly challenging to identify brick kilns in low resolution imagery (\(10\)-\(30\) meters per pixel). The existing approach of pixel classification along with transfer learning Misra et al. (2019, 2020) for detection of brick kilns is neither scalable nor generalizable at large scale due to cost complexities. In our proposed multi-spectral approach, brick kilns are classified using spectral indices (without transfer learning), exploiting the fact that the reflectance spectra of different land covers are different. The indices are designed to exploit these differences to accentuate particular land cover types. The land covers are separable at one or more wavelengths due to the spectral reflectance of different materials (see Fig. 3 and Fig. 4 in Appendix A). Brick kilns, being man-made structures, have a high built-up index. Unlike other man-made structures, and due to the specific nature of the work, the kiln surroundings have a low vegetation index. Furthermore, the baking process and smoke from chimneys also result in a high burn index. Thus, in this work, we classify brick kilns using a mixture of spectral indices, namely NDVI, EVI, NDBI, NDMI and BAI (see Appendix A). The proposed architecture is shown in Fig. 1.
### Klin Candidates via Multi-Spectral Classification
Based on the assumption that kiln locations have low vegetation and moisture indices but high built-up and burn area indices, locations with small or negative values of NDVI, EVI and NDMI together with positive values of NDBI and BAI are classified as brick kilns (see Fig. 2 (i)). Thus our classification rule is defined as:
\[f(x,y)=\begin{cases}1&\text{if }NDVI<0.2~{}\&~{}EVI<0.2~{}\&~{}NDMI<0~{}\&~{}NDBI>0~{}\&~{}BAI>5\times 10^{-8}\\ 0&\text{otherwise}\end{cases} \tag{1}\]
where \((x,y)\) is a location in latitude and longitude and the function \(f(\cdot)\) gives the classification decision. We set the threshold of \(0.2\) for NDVI and EVI as values \(>0.2\) are considered healthy vegetation Huete et al. (2002). In the second stage we apply an orientation-aware detector (YOLOv3 with a \(\theta\) value) to remove false detections and obtain kiln bounding boxes.
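A minimal sketch of the first-stage rule of Eq. (1), applied pixel-wise to index rasters; the index arrays are assumed to have been computed beforehand from the Sentinel-2 bands, and the function and variable names below are illustrative rather than taken from the released code.

```python
import numpy as np

def kiln_candidate_mask(ndvi, evi, ndmi, ndbi, bai):
    """Pixel-wise rule of Eq. (1): 1 for a potential kiln location, 0 otherwise.

    All inputs are 2-D arrays of the same shape, assumed to be pre-computed
    spectral indices derived from Sentinel-2 reflectances.
    """
    return ((ndvi < 0.2) & (evi < 0.2) & (ndmi < 0) &
            (ndbi > 0) & (bai > 5e-8)).astype(np.uint8)

# Toy usage on random index maps (placeholders for real Sentinel-2 rasters).
shape = (64, 64)
rng = np.random.default_rng(0)
mask = kiln_candidate_mask(rng.uniform(-1, 1, shape), rng.uniform(-1, 1, shape),
                           rng.uniform(-1, 1, shape), rng.uniform(-1, 1, shape),
                           rng.uniform(0, 1e-6, shape))
print("candidate pixels:", int(mask.sum()))
```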
### Orientation-aware detector: YOLOv3
Although the unique spectral characteristics of brick kilns distinguish them from other classes, they are still confused with the chimneys of other small industries, which exhibit similar spectral
\begin{table}
\end{table}
Table 1: Quantitative evaluation of the proposed approach compared with state-of-the-art methods (per-dataset detection counts, F1 scores and processing times). Top-3 ranking methods are in bold and, in particular, red (1st), violet (2nd) and black (3rd).
properties, particularly NDVI and NDBI. We eliminate the resulting false detections via an object detector. Unlike urban housing, kilns in South Asia are usually built at sparse locations that are mostly surrounded by agricultural land. Consequently, they are usually built at arbitrary orientations. Thus the detection of axis-aligned kilns Misra et al. (2020); Nazir et al. (2020) is not applicable and results in increased missed detections. To address this problem, bounding boxes with orientation can be used Zhang et al. (2019). We therefore modified the YOLOv3 detector and added a neuron for regressing the orientation of each bounding box. We provide only the filtered images (potential kiln candidates) obtained after the classification stage to the orientation-aware YOLOv3 model, which outputs brick kiln bounding boxes.
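The paper does not give implementation details for the added orientation neuron. The following is a hypothetical sketch of a YOLO-style prediction head extended with one extra output per anchor for the box angle; the layer sizes, the single-class setup, and the tanh angle parameterization are assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class OrientedBoxHead(nn.Module):
    """Per-anchor outputs: x, y, w, h, objectness, theta (box orientation)."""
    def __init__(self, in_channels: int, num_anchors: int = 3):
        super().__init__()
        self.num_anchors = num_anchors
        # 6 values per anchor: the 5 usual YOLO terms plus the extra orientation neuron.
        self.conv = nn.Conv2d(in_channels, num_anchors * 6, kernel_size=1)

    def forward(self, feat):
        b, _, h, w = feat.shape
        out = self.conv(feat).view(b, self.num_anchors, 6, h, w)
        box, obj, theta = out[:, :, :4], out[:, :, 4], out[:, :, 5]
        # Constrain the predicted angle to (-pi/2, pi/2).
        theta = torch.tanh(theta) * (math.pi / 2)
        return box, obj, theta

# Toy usage: one 13x13 feature map with 256 channels.
head = OrientedBoxHead(in_channels=256)
box, obj, theta = head(torch.randn(1, 256, 13, 13))
print(box.shape, obj.shape, theta.shape)
```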
## 4 Quantitative and Qualitative Evaluation
For detailed experimentation with our proposed procedure, we chose an evaluation dataset of three cities, namely Deh Sabz, New Delhi and Kasur, from three different South Asian countries, namely Afghanistan, India and Pakistan. For a fair comparison we use the same geographical regions in these cities as defined in the KilnNet paper Nazir et al. (2020). In addition, in order to generate training data for our object detector, we also manually annotated bounding boxes for each of the \(1300\) brick kiln images from the 'Asia14' dataset Nazir et al. (2020).
We evaluated our proposed multi-spectral two-stage strategy in comparison to a ResNet-152 He et al. (2016) classifier followed by the SOTA detectors: Faster R-CNN Ren et al. (2015), SSD Liu et al. (2016) and YOLOv3 Redmon & Farhadi (2018). We propose a coarse-to-fine strategy that filters the bulk of the data using spectral properties, while the detector is applied only to the small number of positive detections to generate localization information and filter false positives.
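Schematically, the coarse-to-fine flow can be read as below; the tile iteration, the candidate threshold, and the stub rule/detector callables are illustrative placeholders rather than the actual pipeline code.

```python
import numpy as np

def two_stage_kiln_detection(index_rasters, rule, get_highres_tile, detector,
                             min_positive_pixels=10):
    """Stage 1: spectral rule on low-resolution index rasters.
    Stage 2: orientation-aware detector, run only on tiles that pass the rule."""
    detections = []
    for tile_id, indices in index_rasters.items():
        mask = rule(**indices)                   # coarse filter, e.g. Equation 1
        if mask.sum() < min_positive_pixels:     # the bulk of the area is dropped here
            continue
        detections.extend(detector(get_highres_tile(tile_id)))
    return detections

# Toy run: one tile passes the rule, one does not; detector and imagery are stubs.
rasters = {"tile_a": {"ndbi": np.ones((8, 8))}, "tile_b": {"ndbi": -np.ones((8, 8))}}
hits = two_stage_kiln_detection(
    rasters,
    rule=lambda ndbi: ndbi > 0,
    get_highres_tile=lambda tid: f"<high-res image of {tid}>",
    detector=lambda img: [(img, "oriented box placeholder")],
)
print(hits)
```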
It can be seen from Table 1 that the overall F1-score of our proposed strategy is comparable with all the SOTA two-stage architectures. The simple multi-spectral approach is \(54\times\) faster than the other strategies but results in many false positives and a lower F1-score. Our proposed approach, on the other hand, is \(21\times\) faster while retaining a high F1-score. The testing dataset images are \(256\times 256\) pixels for quantitative evaluation; if the image size is larger than \(256\times 256\) pixels, a kiln may be detected in two image patches. To deal with this issue, we report the duplicates in Table 1. Our proposed architecture also outperforms in the region of Afghanistan, where kiln and non-kiln regions exhibit extremely low contrast, as illustrated in our qualitative evaluation in Fig. 2 (ii) (see training parameters, Kiln-Net vs. proposed approach, and compute cost comparison in Appendix B, C & D respectively).
## 5 Conclusion and Future Work
This paper proposes a fusion of spatio-temporal multi-spectral data with high-resolution imagery for the detection of brick kilns within the "Brick-Kiln-Belt" of South Asia. To achieve this, we first perform classification using low-resolution spatio-temporal multi-spectral data from Sentinel-2 imagery utilizing spectral indices. Then an orientation-aware object detector, a modified YOLOv3 (with a \(\theta\) value), is applied for the removal of false detections and fine-grained localization. Our proposed
Figure 2: (i) Qualitative evaluation of our proposed multi-spectral approach on a region of Punjab, Pakistan. In the first stage of our proposed two-stage strategy, more than \(99\%\) of the data is filtered out and only potential positive candidates (red-pixel images) are passed to the second stage for localization. (ii) Qualitative evaluation of the orientation-aware YOLOv3. (Satellite images courtesy Google Earth).
technique results in a \(21\times\) improvement in speed with comparable or higher accuracy when tested over multiple countries. In the future, we aim to evaluate our proposed strategy, and the detection of illegal brick kiln activity during the winter smog period, over the entire "Brick-Kiln-Belt" of South Asia. Remote identification of illegal industrial activity can improve the monitoring of carbon production and of forced labour, in aid of law enforcement, spatial planning, and climate mitigation policy-making.
|
2305.10834 | AIwriting: Relations Between Image Generation and Digital Writing | During 2022, both transformer-based AI text generation systems such as GPT-3
and AI text-to-image generation systems such as DALL-E 2 and Stable Diffusion
made exponential leaps forward and are unquestionably altering the fields of
digital art and electronic literature. In this panel a group of electronic
literature authors and theorists consider new opportunities for human
creativity presented by these systems and present new works they have produced
during the past year that specifically address these systems as environments
for literary expressions that are translated through iterative interlocutive
processes into visual representations. The premise that binds these
presentations is that these systems and the works generated must be considered
from a literary perspective, as they originate in human writing. In works
ranging from a visual memoir of the personal experience of a health crisis, to
interactive web comics, to architectures based on abstract poetic language, to
political satire, four artists explore the capabilities of these writing
environments for new genres of literary artist practice, while a digital
culture theorist considers the origins and effects of the particular training
datasets of human language and images on which these new hybrid forms are
based. | Scott Rettberg, Talan Memmott, Jill Walker Rettberg, Jason Nelson, Patrick Lichty | 2023-05-18T09:23:05Z | http://arxiv.org/abs/2305.10834v1 | # AIwriting: Relations Between Image Generation and Digital Writing
###### Abstract
During 2022, both transformer-based AI text generation systems such as GPT-3 and AI text-to-image generation systems such as DALL-E 2 and Stable Diffusion made exponential leaps forward and are unquestionably altering the fields of digital art and electronic literature. In this panel a group of electronic literature authors and theorists consider new opportunities for human creativity presented by these systems and present new works they have produced during the past year that specifically address these systems as environments for literary expressions that are translated through iterative interlocutive processes into visual representations. The premise that binds these presentations is that these systems and the works generated must be considered from a literary perspective, as they originate in human writing. In works ranging from a visual memoir of the personal experience of a health crisis, to interactive web comics, to architectures based on abstract poetic language, to political satire, four artists explore the capabilities of these writing environments for new genres of literary artist practice, while a digital culture theorist considers the origins and effects of the particular training datasets of human language and images on which these new hybrid forms are based.
AI, GPT-3, DALL-E 2, Stable Diffusion, electronic literature, image generation, digital narrative, language models
## 1 Patrick Lichty: Latent Space
Aesthetics of the Latent Space: Taxonomies and Architectures of the Latent Space will present a durational exploration of the poetics of Machine Learning-based image generation. Text-based machine learning has swept the digital art world, from generative PFPs (Bored Ape Yacht Club) to poetic AI-based art exhibitions (The Grand Exhibition of Prompts). For the past four years, the author has been exploring the aesthetics of the latent space in AI-based imaging systems. Examining these image spaces centers on the methodological exploration of GANs and CLIP-based, prompt-based art. As opposed to the representational figurative and landscape work commonly seen, the two works discussed explore analytical spaces designed to probe the poetics of Machine Learning. The first, "Personal Taxonomies," stems from a GAN-based analysis of over one thousand daily abstract calligraphies, compared in order to find Chomsky-esque "deep structures" of commonality between the glyphs. Conversely, "Architectures" seeks to create ambiguous spaces, subtly adjusted when representation becomes too explicit, to explore the visual locus of the latent space.
As the author wrote in the latest volume of Hegeland's "The Future of Text" anthology [1], the creation of prompt-based machine-learning-generated images is not about art but about the intersection of code, poetics, and visual concretism. In the case of Personal Taxonomies, this arises from a latent space of images based on Foucault's notion of the calligram, the liminally legible glyph. AI/GANs are used to differentiate/decode these images into forms of personal Rorschach. Conversely, Architectures focuses on the prompt as a site of abstract, concretized prose, with the writer struggling against representation. This presentation compares and contrasts these projects and examines the formal and aesthetic relations between AI models and the poetics of machine-learning-based imaging.
## 2 Jason Nelson: Prompt and Rethink
"The Awkward Handshake," "The many occasions of moving," and "robot birds are not" are all works/collections of digital writing that engage with our utilize AI tools/engines/code/systems in some manner. Albeit most iterations of digital art engage with machine intelligence in some form, with intelligence being the ideological hinge. What seems to separate AI systems from those with more iconic/symbolic interfaces (the arrow selects, the glass magnifies) is the conversation between the human creator and our code-made collaborators. These three works engage with this conversation, this prompt and change and rethink
Figure 1: Image from "Architectures of the Latent Space." CC-BY 2022 Patrick Lichty
process, in variant ways, divergent for the creator and for what and how the user/player/reader experiences it.
"The Awkward Handshake" explores how to create the programmable skeleton of digital writing with AI tools. With coaxing and persistence, the exchange between GTP3 and my hands cobbles together a series of interfaces for digital writing. The conversation to create these literary codesets is truncated and clumsy, as the AI cannot (or will not) experience the interaction, cannot move the words and click through the poetic amalgamation of image and word. The result is a series of guesses by both actors, with the digital poet as director and arbitrator and eventual claimant of authorship.
What's spoken between the human and the AI in "The many occasions of moving" is less about the artwork and more about inspiration for the creation. As a digital art-game, TMOM creates a series of poetic worlds the reader/player can inhabit and explore. The visuals are derived from text-to-image AI chatting and cutting and reforming. The detailed imagery living in the work was only possible through this conversation. The detail of shading and style, the accurate imperfections of line and object, are the product of the engine scraping the creativity of giant hordes of humans. That is very helpful for a digital poet with poor drawing skills.
Whereas the first two works are built from mining the expertise inherently stacked in the AI corpus, "robot birds are not" switches the human writer's role to one of translation. After asking the programmed brain to make a series of surreal, broken, but recognisably comic-strip images, I attempt to understand the narrative created in the visuals. I write the bubbling speech, create the narrative connections between frames, give the comic-strip images lives and relational fun-times. This process continues, two electric brains bouncing ideas. The result is a graphic novel, built in the back and forth and imagined during hundreds of loading bars.
Nevertheless, I find it critical to understand that while I imagine having a conversation with these AI tools, and I imagine we are collaborating to create digital writing and interactive and visual narratives, the AI does not know or care. I am hoping to birth new digital creatures, whereas my programmed other has no interest either way. I can type or close, listen or leave, ask or mumble, but the maybe-smart computer only borrows from what the world has made and passes it on to me in variations, uncaring and yet, at times, disarmingly beautiful.
## 3 Talan Memmott: Reinventing the Self
"Introducing Lary" is an ongoing AI project that explores the re-invention of the self following life-changing cancer surgery and treatment. Based on the artist's own experience with a laryngeal cancer diagnosis and the medical interventions that follow, the project uses a variety of AI image generating platforms along with journal entries and medical reports as text prompts to produce images that range from the mythic to the horrific; the historical to speculative.Though at times the resulting images may emphasize the emergent body horror of cancer surgeries and treatments, they serve as a form of therapeutic aesthetics for the artist as patient.
Rather than allowing the images to remain purely digital artifacts, the work extends its output into physical forms such as large-scale canvas prints that include interventions such as tracheotomy tubes and sutures, and photo books that include poetic and narrative texts. "Alocutive Interpolation," a 1.5 by 1.5 meter canvas print and the first physical manifestation of the "Introducing Lary" series, was exhibited at the International Digital Media Arts Association Weird Media Exhibition in June of 2022.
Figure 3: "Stoma" from "Introducing Lary". CC-BY 2022 Talan Memmott
Figure 2: Image from "The Awkward Handshake". CC-BY 2022 Jason Nelson
## 4 Scott Rettberg: Text-to-Image Political Parody
"Republicans in Love" is an AI text-to-image project produced during the month following the November 2022 United States Congressional election that explores the extent to which platforms such as DALL+E 2 can be used for satirical literary purposes. The project brings together elements of art history, politics, and social media discourse while also serving as a case study in the capabilities and limitations of the platform. "Republicans in Love" is a series of images based on one-line prompts that revisit historical incidents, ironies, and dangers of contemporary Trumpian populism. At the same time, the project traverses the history of European and American visual art through the manifestation of the styles of artists specified in the prompts. The project has resulted in an artist's book and a set of playing cards and will be featured in an exhibition of prints in Bergen in 2023.
How should we understand the function and ontological status of text-to-image generation systems such as DALL-E 2, Stable Diffusion, Midjourney AI and so on? While some fear that these systems are either harbingers of doom for human artists and designers or techno-utopian manifestations of the achievement of a foundational model of generalized AI, it is the author's premise that, from the user's perspective, AI-based text-to-image generation systems are best understood as writing environments. The human writer engages with the nonconscious cognitive system of the AI, which accesses immense datasets of human language and existing imagery. The system attempts to draw correlations between the language provided by the user, approximations of where those language elements might meet in the latent space of its language dataset, and corresponding image elements that can be understood as conceptual approximations of the language provided. For example, a prompt such as "Republicans in love, angry about the news, eating greasy cheeseburgers at the President's desk in the Oval Office, in the style of Caravaggio" will yield images that, to varying degrees of success, draw together many of the elements provided.
The resulting images clearly incorporate some elements of the Renaissance painter's style, represent an idea of what a "Republican" might look like, integrate the burger, feature interior design similar to that of the Oval Office, and deal fairly well with the seemingly contradictory emotions of love and anger. There are many different ways of interpreting what this AI system is ontologically. Reducing the interaction between human cognition, writing, machine cognition, and image production to the status of an artist's "tool" would be reductive. Rather than conceptualizing this process as akin to producing an image with Photoshop, we should understand it as an environment for the literary production of visual narrative. The results of Latent Diffusion Models, or Stable Diffusion systems, in this regard bear a strong resemblance to practices in postmodern literature and visual art, as they function as pastiche machines, blenders of immense proportions with access to massive datasets of human expression, bringing together semiotic vertices that the systems can calculate and sketch but not comprehend. The collection of approximately 100 text and image pairs of "Republicans in Love" serves as an experiment in using this environment of human and machine cognition to produce a sustained and recognizably literary work.
## 5 Jill Walker Rettberg: Dataset Culture
How does the data that AI models are trained on affect the art and literature they are able to generate? This paper uses a critical dataset studies approach to identify the data current AI models are built upon. It then analyses selected output to demonstrate how certain genres and styles of art are emphasised above others.
Generative AI models like DALL-E and the GPT series are trained on massive datasets of images and texts. They are trained on our shared cultural heritage, or at least on a small part of it. In fact, the GPT models are mostly trained on websites that have been upvoted on Reddit, self-published fiction and the English-language Wikipedia (Brown et al. 2020). Audits of the datasets have found that more than half the websites are from US domains, and that the countries with the 2nd, 3rd and 4th largest English-speaking populations (India, Pakistan, Nigeria and the Philippines) are represented in less than 4% of the corpus. Any text with words on a blocklist meant to avoid porn, swearwords and slurs is filtered out, but this also
Figure 4: "Republicans in love, angry about the news, eating greasy cheeseburgers at the President's desk in the Oval Office, in the style of Caravaggio" from "Republicans in Love". CC-BY 2022 Scott Rettberg |
2302.05520 | Synchrony/Asynchrony vs. Stationary/Mobile? The Latter is Superior...in
Theory | Like Asynchrony, Mobility of faults precludes consensus. Yet, a model M in
which Consensus is solvable, has an analogue relaxed model in which Consensus
is not solvable and for which we can ask, whether Consensus is solvable if the
system initially behaves like the relaxed analogue model, but eventually morphs
into M. We consider two relaxed analogues of M. The first is the traditional
Asynchronous model, and the second to be defined, the Mobile analogue. While
for some M we show that Consensus is not solvable in the Asynchronous analogue,
it is solvable in all the Mobile analogues. Hence, from this perspective
Mobility is superior to Asynchrony.
The pie in the sky relationship we envision is: Consensus is solvable in M,
if and only if binary Commit-Adopt is solvable in the mobile analogue.
The ``only if'' is easy. Here we show case by case that the ``if'' holds for
all the common fault types. | Eli Gafni, Vasileios Zikas | 2023-02-10T21:49:55Z | http://arxiv.org/abs/2302.05520v1 | # Synchrony/Asynchrony vs. Stationary/Mobile? The Latter is Superior...in Theory.
###### Abstract.
Like Asynchrony, _Mobility_ of faults precludes consensus. Yet, a model \(M\) in which Consensus is solvable, has an analogue relaxed model in which Consensus is not solvable and for which we can ask, whether Consensus is solvable if the system initially behaves like the relaxed analogue model, but eventually morphs into \(M\). We consider two relaxed analogues of \(M\). The first is the traditional Asynchronous model, and the second to be defined, the Mobile analogue. While for some \(M\) we show that Consensus is not solvable in the Asynchronous analogue, it is solvable in all the Mobile analogues. Hence, from this perspective Mobility is superior to Asynchrony.
The pie in the sky relationship we envision is: Consensus is solvable in \(M\), if and only if binary Commit-Adopt is solvable in the mobile analogue.
The "only if" is easy. Here we show case by case that the "if" holds for all the common fault types.
###### Acknowledgements.
This version was submitted to PODC 2020.
## 1. Introduction
### Mobility and Message Adversary
The notion of _indulgence_, a term usually frowned upon, has found merit in distributed computing as an adjective for an algorithm that solves a task in an environment which is initially asynchronous but eventually behaves synchronously [(14)]. An algorithm indulges the asynchronous period in the sense of preserving safety. It becomes live in the times of synchrony. Obviously this notion can be extended to a task \(T\), by saying that \(T\) is indulgent if there exists an indulgent algorithm to solve \(T\). More precisely, a task \(T\) is indulgent if, whenever \(T\) is solvable in a model with a certain type and number of faults, it is also solvable when the system is initially asynchronous but eventually behaves as that model.
Thus, indulgence takes a task and all the models in which the task is solvable. For each model we assume a relaxed model is defined. Then the task is indulgent if it is solvable when the system starts in the respective relaxed model, but eventually behaves as the (unrelaxed) model.
Thus indulgence has three parameters: the task \(T\), the fault type, and the pairs of unrelaxed/relaxed models.
Here we consider the task to be Consensus, and the pairs to be Synchronous/Asynchronous for the common fault types.
This paper's motivation is the displeasing observation that for two fault types consensus is not indulgent:
1. Send-Omission Faults: Consensus in the synchronous case is achievable for \(t<n\), where \(t\) is the number of faults and \(n\) is the number of processors. In contrast, it is folklore that at the time of asynchrony no safety can be maintained for \(t\geq n/2\), as a network partition occurs.
2. Authenticated Byzantine: Consensus in the synchronous case is achievable for \(t<n/2\)[(10; 7)]. In contrast, at the time of asynchrony no safety can be maintained for \(t\geq n/3\)[(8)].
To avoid displeasure, we investigate replacing the Synchrony/Asynchrony pair with the Stationary/Mobile pair. The system is synchronous: each processor gets a signal of the end of a round, after which no message is in transit to it. The misbehavior is when the faults are mobile, and the desired behavior is when the faults become stationary. If the number of faults is \(t\), in the stationary case only a fixed set of \(t\) processors can experience faults. In the mobile case, in each round \(t\) different processors may exhibit faulty behaviour.
Usually, faults are attributed to processors. Here we restrict ourselves to an adversary which attacks a processor by controlling its message-sending interface. Thus a 1-omission resilient mobile system will be a _synchronous_ system where all send to all and in each round an adversary chooses a processor and can drop any number of the messages that processor sends (cf. [15]). In the Byzantine case, the adversary can not only drop messages but also tamper with them. And finally, we attend to the Authenticated Byzantine case, which posed the foremost technical difficulty of defining the notion of mobility, and of proving the desired result for it.
When defining authentication, one often thinks of its cryptographic instantiation by means of digital signatures. But authentication has an English-dictionary definition independent of signatures. It is about verification that a claim holds. The verification of a claim that a painting is by Picasso, or that a fossil is from the Paleolithic Period, or that a diary was written by Hitler, is called authentication. Thus in this paper Authenticated Byzantine is an abstract assumption on which claims can be verified (and consequently not forged) and which cannot. Obviously in our case it will be about whether, if processor \(p_{i}\) claimed it received a message \(m\) at round \(j\) from \(p_{k}\), the receiver of such a claim can verify whether the claim is true or not. Thus, it constrains the adversary to make only claims that cannot be verified as false. Consequently, forgetting the means of verification, we consider authentication as a set of pairs \((p_{i},j)\), where \(p_{i}\) is a processor and \(j\) is a round number: the messages sent by \(p_{i}\) at round \(j\) can be forged if the pair is in the set, while messages generated by pairs not in the set cannot be forged.
Our results in the Authenticated Byzantine setting are for this abstract constraint. To our knowledge, Authenticated Byzantine faults were never defined for the mobile case at this clean level of abstraction. A challenge this paper poses is to find an implementation, e.g., by using cryptography under appropriate assumptions, of the functionality of our abstract definition. Nevertheless, proofs can still be done at the abstract functionality level.
Our first encounter with the idea of mobility was through the beautiful observation of Santoro and Widmayer [19] that the FLP proof of consensus impossibility translates verbatim to the mobile setting, and afterwards through privacy-violating corruptions in the context of _proactive security_ [17].
It isn't clear what Santoro and Widmayer had in mind as the cause of faults when they talked about omission faults. Why did the omission faults occur? There can be two views of this. One is that the processor misbehaved; the other is that an adversary intercepted messages sent and dropped them. Which view one chooses makes, conceptually, a big difference. In the Byzantine failures case, the generalization of the former is that a virus got control of the processor. The generalization of the latter is that a daemon only tampered with messages.
Theoretically speaking, the latter view of a daemon, rather than a virus, which in fact gave rise to the notion of _message-adversary_ (MAd) by Afek and Gafni [], is more pleasing: in this approach a message adversary unified many seemingly unrelated notions. For instance, MAd, when generalized here to the Byzantine case, has processors being always innocent and good. Thus we avoid questions like "are corrupt processors required to output and, if so, what should their output be?" or specifying a problem in terms of "correct" and "incorrect", which are notions associated with an execution, defeating the presentation of a task as solely a mathematical relation, independent of the environment in which it is to be solved; the analogue of a function in centralized computing.
Alas, thinking of the adversary, especially in the Byzantine case, as being a Message Adversary may not sit well with reality. This paper isn't about reality; it is about whether we can get the mathematics to be nicer.
This paper is in the mold of _set consensus_[(5)]. There is no _killer application_ for set consensus. There might never be. Nevertheless, the mathematics say that for distributed computing there is no preference to consensus over set consensus. Similarly here. Yet, in fact, some solutions for proactive security move in our theoretical direction of MAd [(17)].
### Consensus vs. Commit-Adopt
Commit-Adopt (CA) (K
An _interactive processor_ is a processor that in addition to local computation, has the ability to communicate with other processors by sending them messages over some network. A _protocol_ among \(n\) processors is a collection of \(n\) interactive processor specifications.1
Footnote 1: We restrict our attention to non-reactive tasks where processors receive only one input at the beginning and produce a single local output (see below). This type of protocol is sufficient for our results; however, one can extend this definition to reactive tasks.
A _model_ \(M\) consists of two components, the _system model_ and the _failure model_. The system model specifies the types and capabilities of the processors, along with the properties of the communication between them. The _failure (aka adversary) model_ specifies the types, e.g., fail-stop, omission, Byzantine, etc., and combination, e.g., up to \(t\) faults, of different faults that the processors might endure.
A task \(T\) with corresponding relation \(R_{T}\) is _solvable_ in a model \(M\) if there exists a protocol in \(M\) such that if processors start with some input tuple, they all output from an output tuple relating, through \(R_{T}\), to the input tuple.
Definition 1 ().: _An \(n\)-processor task \(T\) described by relation \(\mathcal{R}_{T}\subseteq\mathcal{I}^{n}\times\mathcal{O}^{n}\) is solvable in a model \(M\) if there exists a protocol \(\Pi\) in \(M\) such that \(\forall(x_{1},\ldots,x_{n})\in\mathcal{I}^{n}\), if every processor \(p_{i}\) runs (his code of) \(\Pi\) on input \(x_{i}\), then every processor outputs \(y_{i}\in\mathcal{O}\) such that \(((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n}))\in\mathcal{R}_{T}\)._
_Commit-Adopt._ In addition to consensus we will use Commit-Adopt (CA) task (Cafini and Vasileios Zikas, 2017). CA (also referred to as _graded consensus_(Kal
parties. This makes our corruption model more suitable to capture a mobile adversary who in the course of the protocol might corrupt every processor.
To be able to achieve such stronger guarantees we consider adversaries that operate at "the network interface" (e.g., communication tapes) of the parties, rather than corrupting the parties' internal state. This allows us to define producing an output as writing it on a special write-only and append-only output tape which is out of bounds for the adversary. Here is how our adversary is defined.
We consider a central adversary who might affect messages sent by parties it corrupts; we refer to such an adversary as a _message adversary_, in short _MAd_ (adversary). We note in passing that the notion of _corrupted processor_ at a round is just a descriptive means of delineating the power of the MAd adversary. More concretely, a MAd adversary might intercept the _outgoing messages_, rather than the internal state, of processors. In a nutshell, in every round, each corrupted party prepares its messages for the current round, according to the messages received from previous rounds, by following its protocol instructions; depending on the privacy assumption on the underlying communication model, a MAd adversary can tamper with these messages.
To make the strongest possible statements, here we will consider the full information model [12] whereby the adversary gets to see all messages exchanged in the protocol. We will assume that, subject to its constraints, the adversary is non-deterministic, and can produce either garbage or the set of messages that will be most detrimental to an operation of an algorithm. So rather than describing the adversary as an algorithm operating in its history, we will describe simply which messages _cannot_ be non-deterministically produced given a corruption pattern. As an example, unforgeability of signatures for keys inaccessible to the adversary, e.g., keys of an uncorrupted party \(p_{i}\), would correspond to restricting the messages that the adversary might be able to inject to the protocol on behalf of \(p_{i}\) to those that are actually generated by \(p_{i}\).
The adversary is described by means of when parties are corrupted--_stationary, mobile_, or _eventually stationary (mobile)_--and by the corruption types--_omission, Byzantine_, and _Authenticated Byzantine_, as discussed below.
The stationary MAd adversary. A stationary \(t\)-MAd adversary is an adversary that can corrupt at most \(t\) processors throughout an execution of a protocol.2
Footnote 2: This includes both static and adaptive corruptions as defined in the cryptographic literature.
The mobile MAd adversary. The _mobile_ \(t\)-MAd adversary is restricted to corrupting at most \(t\) processors in a _round_. Thus over time all processors may experience message tampering, albeit in different rounds.
The eventually-stationary (mobile) MAd adversary. This is an adversary that for a finite (but unknown to the protocol) number \(\rho\) of rounds behaves as a mobile adversary, but from round \(\rho+1\) on becomes stationary. More concretely, an _eventually-stationary_ \(t\)-_MAd adversary_ is an adversary that plays a mobile \(t\)-MAd adversary strategy for a finite number of rounds and then, from some point on, confines its corruption to a fixed set of at most \(t\) processors.
#### 2.3.1. MAd-Corruption Types considered in the paper
Send-Omission failure. The adversary is restricted to just removing messages, but it is not constrained to remove all messages of a processor in rounds after it removed some.
Byzantine failure. The adversary can tamper with messages, replacing them with any of its own choosing.
_Authenticated Byzantine failure._ Like Byzantine, except that if at round \(j\) processor \(p_{i}\) was not corrupted, then at subsequent rounds \(k>j\) no processor can claim that the messages sent by \(p_{i}\) at round \(j\) contained anything that was not really sent, aside from pretending that a message was not received. More formally, in the synchronous Authenticated Byzantine setting we will assume wlog that any message sent in a protocol from \(p_{i}\) in round \(\rho\) has the form \((p_{i},\rho,m)\), where \(m\in\{0,1\}^{*}\) is the contents of the message and \(p_{i}\) and \(\rho\) are associated metadata. The Authenticated Byzantine adversary model then mandates that if for any message \((p,\rho,m)\), where \(p\in\mathcal{P}\), and for any party \(p^{\prime}\in\mathcal{P}\), \((p^{\prime},\rho^{\prime},m^{\prime})\) is (an encoding of) a substring of \(m\), where \(\rho^{\prime}\leq\rho\), then either \(p^{\prime}\) was corrupted in round \(\rho^{\prime}\) or it was uncorrupted and sent \((p^{\prime},\rho^{\prime},m^{\prime})\).
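As an illustration of this abstract constraint (not part of the paper), one can model messages as nested \((p,\rho,m)\) tuples and write a predicate saying whether the adversary is allowed to inject a given relayed claim; the data types and the example scenario below are assumptions made for exposition only.

```python
from typing import NamedTuple, Set, Tuple

class Msg(NamedTuple):
    sender: str       # p'
    round: int        # rho'
    content: object   # m', possibly itself containing a nested Msg

def may_inject(claim: Msg, corrupted: Set[Tuple[str, int]], sent: Set[Msg]) -> bool:
    """The adversary may inject `claim` only if every nested (p', rho', m') was either
    really sent or originates from a (processor, round) pair that was corrupted."""
    if claim not in sent and (claim.sender, claim.round) not in corrupted:
        return False
    if isinstance(claim.content, Msg):
        return may_inject(claim.content, corrupted, sent)
    return True

# p1 honestly sent (p1, 1, "x"); the adversary corrupts p2 in round 2.
m1 = Msg("p1", 1, "x")
sent, corrupted = {m1}, {("p2", 2)}
print(may_inject(Msg("p2", 2, m1), corrupted, sent))                      # True: relays a real message
print(may_inject(Msg("p2", 2, Msg("p1", 1, "forged")), corrupted, sent))  # False: forges p1's round-1 message
```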
Remark 1 (Not giving up corrupted parties--even corrupted parties produce outputs).: _One might consider a natural MAd-analogue of standard Byzantine and omission corruption that tampers with both incoming and outgoing communication. However, defining things this way leads to complications with how broadcast and consensus are defined. In particular, the traditional cryptographic definitions give up corrupted parties, i.e., give no guarantees about the output of corrupted parties. This means not only that these tasks cannot be defined as simple functions taking only processors' inputs into account but have to also consider the adversary, but also that the definition might give up a party that performs all its operations correctly, just because it is "stained" as being corrupted. Instead, here we want to consider feasibility for the task-based natural definition of consensus discussed above, where corrupted processors are not discriminated against. Clearly, if the adversary can tamper with or block incoming communication, it is impossible to prevent corrupted parties from producing no output (e.g., \(\bot\)) or even a wrong output, depending on the setup and the adversary's capabilities. For this reason, we restrict a MAd-adversary to only tamper with outgoing communication. Note that this adversary might still inject messages as outgoing messages of corrupted parties._
## 3. Cross-Model Reductions
Let \(M\) be any of the above models in which in the stationary case Consensus is solvable, and in the mobile case CA is solvable.
As a simple consequence of [14] we obtain an indulgent protocol for consensus: take any protocol \(\Pi_{C}\) that solves consensus in \(M\) and any protocol \(\Pi_{CA}\) that solves CA in the mobile analogue of \(M\), and run them alternately. A processor outputs when it commits in \(\Pi_{CA}\). Nevertheless, notice that processors in our models work forever (as usually Consensus is used to implement a ledger). Hence we have no notion of _halt_.
But indulgence is a motivating side show. We want to show that Consensus and CA are twins, always solvable for the same \(M\) with only stationarity and mobility, respectively.
One direction follows from the beginning of Section 4 below (Theorems 3 and Corollary 4) and the fact that if a task is solvable in the mobile model it is also solvable in the stationary.
Thus we get a theorem:
Theorem 3.1.: _If CA is solvable in mobile model \(M\), then Consensus is solvable in stationary \(M\)._
By applying [14] to the above theorem, we get the following corollary.
Corollary 2.: _If CA is solvable in mobile model \(M\), then Consensus is indulgent in \(M\)._
We would like to have the theorem: If Consensus is solvable in \(M\), then binary CA is solvable in mobile \(M\). We conjecture there is a way to prove this generically, but for now we show it case by case.
## 4. Commit-Adopt for a stationary adversary
As a warmup we start with a stationary adversary, i.e., the adversary corrupts up to \(t\) processors throughout the protocol and never changes its corruption. Most of the results in this section can be easily obtained from existing literature. Nonetheless, we include them here for completeness and to be able to refer to them in the following section.
In order to establish the connection between consensus and commit-adopt we use the following simple reduction from (Friedman, 1994; Goyal and Goyal, 2006) (which in turn relies on ideas from (Bartos et al., 2010; Goyal and Goyal, 2006)). Let CA be a protocol for commit-adopt in the stationary setting secure against \(t\) corruptions. Then the following simple phase-king protocol, which we refer to as \(\mathsf{Consensus}_{Stat}\), allows us to construct consensus out of CA.
* Let \(x_{i}\) be the input of processor \(p_{i}\). Every party sets \(temp_{i}:=x_{i}\)
* For \(i=1,\ldots,n\)
1. The processors execute CA on inputs \(temp_{1},\ldots,temp_{n}\); Denote by \(y_{i}\) the output of \(p_{i}\) in CA. By definition of CA, for each \(p_{j}\), for some \(b_{j}\) we have \(y_{j}\in\{commit(b_{j}),adopt(b_{j})\}\)
2. \(p_{i}\) sends \(b_{i}\) to every \(p_{j}\) who denotes the received value as \(b_{i\to j}\).
3. Each \(p_{j}\) sets \(temp_{j}:=\left\{\begin{array}{l}b_{j}\text{ if }y_{j}=commit(b_{j})\\ b_{i\to j}\text{ otherwise}\end{array}\right.\)
* Every processor outputs \(temp_{n}\)
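A single-process simulation of this loop is sketched below, assuming a black-box commit-adopt routine; the `trivial_ca` stand-in (commit when all current values agree, otherwise adopt the first value) is only there to make the control flow executable and is not a real CA protocol.

```python
def trivial_ca(values):
    """Stand-in for a CA protocol: returns a (committed?, value) pair per processor."""
    if len(set(values)) == 1:
        return [(True, values[0])] * len(values)
    return [(False, values[0])] * len(values)

def consensus_stat(inputs, ca=trivial_ca):
    """Phase-king reduction: n iterations, iteration i's king is processor i."""
    n = len(inputs)
    temp = list(inputs)
    for king in range(n):
        outcome = ca(temp)                       # (commit?, value) for each processor
        king_value = outcome[king][1]            # the king sends its CA value to everyone
        temp = [v if committed else king_value   # committed processors keep their value
                for committed, v in outcome]
    return temp

print(consensus_stat([0, 1, 1, 0]))   # every processor ends with the same value
```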
Theorem 3 ((Friedman, 1994; Goyal and Goyal, 2006)).: _If CA solves Commit-Adopt in the stationary Byzantine MAd adversary model then \(\mathsf{Consensus}_{Stat}\) solves consensus in the same model._
It is straightforward to verify that the above theorem applies verbatim to send-omission corruption and the Authenticated Byzantine setting. For completeness we state this in the following corollary.
Corollary 4.: _If CA solves Commit-Adopt in the stationary (send)-omission or Authenticated Byzantine MAd adversary model then \(\mathsf{Consensus}_{Stat}\) solves consensus in the corresponding model._
Furthermore, the following theorem follows trivially from the trivial reduction of commit-adopt to consensus:
1. The processors run consensus; denote by \(y_{i}\) the output of processor \(p_{i}\)
2. Every \(p_{i}\) outputs \(commit(y_{i})\)
Theorem 5.: _If there exists a protocol for solving binary consensus in the stationary (send)-omission, Byzantine, or Authenticated Byzantine MAd adversary model, then there exists a protocol CA solving binary commit-adopt in the corresponding model._
### (Send)-Omission Corruption
A protocol for send-omission corruptions tolerating any number \(t<n\) of corrupted processors follows from (Goyal and Goyal, 2006). This bound is trivially tight. We note that the analogous positive result in our model--where every party needs to output--follows directly from the corresponding bound for the mobile adversary (cf. Theorem 9). It follows from Theorem 5 and Corollary 4 that this bound is also tight for CA.
Theorem 6.: _There exists a protocol for solving CA in the stationary (send)-omission \(t\)-MAd adversary model if and only if \(t<n\)._
### Byzantine
Lamport et al. (Lamport et al., 2016; Lamport et al., 2017) proved that Byzantine consensus is possible if and only if \(t<n/3\) of the parties are corrupted. An efficient protocol for this bound was later given by Berman et al. (Berman et al., 2017). We will call their protocol BGP-Consensus. Although their definition of consensus does not give any guarantees on the output of corrupted processors, one can easily use their protocol in a black-box way to add this guarantee, by adding an extra round in which every party sends his output from BGP-Consensus to everyone, and everyone outputs the value received from at least \(2n/3\) of the processors. Since in BGP-Consensus all uncorrupted processors output the same value \(v\), all processors will receive \(v\) from all of them and \(v^{\prime}\neq v\) from at most \(t<n/3\) parties, so they will all output \(v\). From the above and the equivalence of consensus and commit-adopt in the Byzantine model (Theorem 5 and Corollary 4) we get the following:
Theorem 7 ((Berman et al., 2017)).: _There exists a protocol for solving CA in the stationary Byzantine \(t\)-MAd adversary model if and only if \(t<n/3\)._
### Authenticated Byzantine
In the Authenticated Byzantine setting a lower bound of \(t<n/2\) corruptions was proved by Fitzi (Fitzi, 2000, Proposition 3.1). A protocol for the authenticated setting follows from the observation that, in the stationary model, one can achieve our restrictions on the Authenticated Byzantine MAd adversary by assuming (perfectly) unforgeable signatures, and having every party digitally sign his messages using a standard existentially unforgeable signature scheme.3 Indeed, under this implementation, the adversary will be unable to create any message on behalf of any uncorrupted processor.
Footnote 3: Indeed, our definition of Authenticated Byzantine is equivalent to having an imaginary perfect signature scheme. Note that, in reality, such signatures do not exist; hence in an actual realization, the protocol will achieve consensus except with negligible probability.
The above observation implies that the folklore reduction of consensus to broadcast for \(t<n/2\) yields a consensus protocol in our model: have every party use Dolev-Strong broadcast (Dolev and Strong, 1987) to broadcast his input to everyone, and then take the majority of the broadcasted values.
Theorem 8 ((Dolev and Strong, 1987; Fitzi, 2000)).: _There exists a protocol for solving binary CA in the stationary authenticated Byzantine \(t\)-MAd adversary model if and only if \(t<n/2\)._
## 5. Commit-Adopt for a Mobile Adversary
### (Send)-Omission Corruption
We show that even if in each round a MAd adversary can pick any \(n-1\) processors and drop any messages of these processors it wishes, and then change to another \(n-1\) processors in the next round, binary CA is nevertheless solvable.
The algorithm proceeds in two rounds:
1. Every processor sends its input to everyone
2. Every processor \(p_{i}\): If there is a bit \(b\) such that only \(b\) was received, then send \(\mathit{propose-commit}(b)\) to everyone. Else send \(\mathit{no-commit}\)
3. Every processor \(p_{i}\): If for the bit \(b\) only \(\mathit{propose-commit}(b)\) was received, then output \(commit(b)\); else if for a unique \(b\), \(\mathit{propose-commit}(b)\) was received (from any processor), output \(\mathit{adopt}(b)\); otherwise output \(\mathit{adopt}(0)\).
In each round there is at least one processor heard by all. Thus, after the first round, where processors exchange inputs, a processor proposes to commit \(b\) in the next round only if it heard just \(b\). Since all heard the same bit from some processor, only a single bit can be proposed to be committed in the next round.
In the second round, a processor that receives only proposed commits for \(b\) commits \(b\); else, if it receives a proposed commit for \(b\), it adopts \(b\). If a processor committed, then it received a proposed commit for \(b\) from at least one processor that all hear from. Thus, if one commits, all receive a proposed commit, and thus all will at least adopt.
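The two rounds can be simulated as below, with the message adversary modelled by a per-round map of dropped (sender, receiver) deliveries; the delivery model and the concrete drop pattern are illustrative assumptions.

```python
def mobile_omission_ca(inputs, dropped_per_round):
    """inputs: one bit per processor; dropped_per_round[r][i] = receivers that do NOT
    get processor i's round-r message (messages of at most n-1 senders per round)."""
    n = len(inputs)

    def deliver(r, payloads):
        return [[payloads[i] for i in range(n)
                 if j not in dropped_per_round[r].get(i, set())] for j in range(n)]

    # Round 1: everyone sends its input bit.
    recv1 = deliver(0, inputs)
    proposals = [("propose-commit", r[0]) if len(set(r)) == 1 else ("no-commit", None)
                 for r in recv1]

    # Round 2: everyone sends its proposal, then decides.
    outputs = []
    for r in deliver(1, proposals):
        commit_bits = {b for tag, b in r if tag == "propose-commit"}
        if commit_bits and all(tag == "propose-commit" for tag, _ in r):
            outputs.append(("commit", commit_bits.pop()))
        elif len(commit_bits) == 1:
            outputs.append(("adopt", commit_bits.pop()))
        else:
            outputs.append(("adopt", 0))
    return outputs

drops = [{2: {0, 1, 3}},   # round 1: p2's input is hidden from everyone else
         {2: {0, 3}}]      # round 2: p2's proposal is hidden from p0 and p3
print(mobile_omission_ca([1, 1, 0, 1], drops))
# -> p0 and p3 commit 1, p1 and p2 adopt 1: the commit-adopt properties hold
```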
Theorem 5.1 ().: _There exists a protocol solving commit-adopt against a mobile MAd \(t\)-adversary for any \(t<n\)._
### Byzantine
It follows directly from Theorem 5.1 and the fact that a mobile adversary is at least as strong as a stationary one that at most \(t<n/3\) Byzantine mobile corruptions can be tolerated for commit-adopt. In the following we describe a construction that meets this bound. Our commit-adopt protocol follows the structure of the graded consensus construction from (Han et al., 2016; Han et al., 2017), where we use a weak consensus primitive.
\(Protocol\,\mathrm{WeakConsensus}(\mathcal{P},\vec{x}=(x_{1},\ldots,x_{n}))\).
1. Each \(p_{i}\in\mathcal{P}\) sends \(x_{i}\) to every \(p_{j}\); \(p_{j}\) denotes the set of players who sent him 0 (resp. 1) as \(P_{j}^{(0)}\) (resp. \(P_{j}^{(1)}\)).
2. Each \(p_{j}\) sets \(y_{j}:=\left\{\begin{array}{cl}0&\mbox{if }|P_{j}^{(0)}|>2n/3,\mbox{ else}\\ 1&\mbox{if }|P_{j}^{(1)}|>2n/3,\mbox{ else}\\ \mbox{``n/v''}\end{array}\right.\)
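The decision rule of this single round can be written as a small function; the received multiset is given directly, i.e., after the adversary has possibly tampered with the corrupted senders' messages.

```python
def weak_consensus_decide(received, n):
    """WeakConsensus Step 2 for one processor: received is the list of 0/1 values it got."""
    if received.count(0) > 2 * n / 3:
        return 0
    if received.count(1) > 2 * n / 3:
        return 1
    return "n/v"

n = 4
print(weak_consensus_decide([0, 0, 0, 1], n))   # -> 0 (more than 2n/3 zeros)
print(weak_consensus_decide([0, 0, 1, 1], n))   # -> 'n/v'
```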
Theorem 5.2 ().: _The protocol \(\mathrm{WeakConsensus}\) satisfies the following properties against an eventually-static mobile MAD \(t\)-adversary with \(t<n/3\): (weak consistency) There exists some \(y\in\{0,1\}\) such that every \(p_{j}\in\mathcal{P}\) sets \(y_{j}\in\{y\), "n/v"\(\}\) (persistency) If every \(p_{i}\in\mathcal{P}\) has the same input \(x\) then they all set \(y_{j}:=y=x\). (termination) All parties set their \(y_{i}\) value after a single round_
Proof.: (termination) Termination is trivial since all parties set the value of \(y_{i}\) at the end of their single round.
(weak consistency) Assume that a player \(p_{i}\) sets \(y_{i}=0\). This means that \(|P_{i}^{(0)}|>2n/3\). But since fewer than \(n/3\) of the parties in \(P_{i}^{(0)}\) might be corrupted, this means that more than \(n/3\) of the parties are uncorrupted during the protocol's single round and also send 0 to every other \(p_{j}\). Hence \(|P_{j}^{(1)}|\leq 2n/3\), which means that no \(p_{j}\) will decide on 1.
(persistency) If all non-actively corrupted players have input 0 (the case of pre-agreement on 1 can be handled symmetrically) then every \(p_{i}\in\mathcal{P}\) receives 0 from at least all those parties, i.e., \(|P_{i}^{(0)}|>2n/3\), and therefore sets \(y_{i}=0\).
\(Protocol\,\mathtt{CA}(\mathcal{P},\vec{x}=(x_{1},\ldots,x_{n}))\).
1. The players invoke \(\mathrm{WeakConsensus}(\mathcal{P},\vec{x})\) and let \(y_{i}\) denote the value set by \(p_{i}\) (note that \(y_{i}\in\{0,1\), "n/v"\(\}\)).
2. Each \(p_{i}\in\mathcal{P}\) sends \(y_{i}\) to every \(p_{j}\). \(p_{j}\) denotes the sets of players who sent him 0, 1, and "n/v" as \(P_{j}^{(0)}\), \(P_{j}^{(1)}\), and \(P_{j}^{(\text{n/v})}\), respectively.
3. Each \(p_{j}\) sets \(b_{j}:=\left\{\begin{array}{cl}0&\mbox{if }|P_{j}^{(0)}|\geq|P_{j}^{(1)}|\\ 1&\mbox{otherwise}\end{array}\right.\)
4. Each \(p_{j}\) outputs \(o_{j}:=\left\{\begin{array}{cl}commit(b_{j})&\mbox{if }|P_{j}^{(b_{j})}|>2n/3\\ adopt(b_{j})&\mbox{otherwise}\end{array}\right.\)
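A fault-free, single-process simulation of how the two rounds compose is sketched below; it collapses the message exchange into shared lists and therefore shows only the grading logic, not the adversarial behaviour.

```python
def ca_byzantine(inputs):
    """Fault-free walk-through of Protocol CA: Round 1 = WeakConsensus, Round 2 = grading."""
    n = len(inputs)
    two_thirds = 2 * n / 3

    def weak(received):               # WeakConsensus decision rule
        if received.count(0) > two_thirds:
            return 0
        if received.count(1) > two_thirds:
            return 1
        return "n/v"

    # Round 1: every processor applies the WeakConsensus rule to everyone's input.
    y = [weak(list(inputs)) for _ in range(n)]

    # Round 2: exchange the y values; take the more frequent bit and grade it.
    outputs = []
    for _ in range(n):
        zeros, ones = y.count(0), y.count(1)
        b = 0 if zeros >= ones else 1
        grade = "commit" if (zeros if b == 0 else ones) > two_thirds else "adopt"
        outputs.append((grade, b))
    return outputs

print(ca_byzantine([0, 0, 0, 0]))   # -> everyone commits 0
print(ca_byzantine([0, 1, 1, 1]))   # -> everyone commits 1
print(ca_byzantine([0, 0, 1, 1]))   # -> nobody commits; everyone adopts the same bit
```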
**Theorem 11**.: _The protocol CA solves commit-adopt against an mobile Byzantine \(t\)-MAd adversary with \(t<n/3\)._
Proof.: We need to prove the following properties:
* _(Property 1)_ If for some \(b\in\{0,1\}\) some processor \(p_{i}\in\mathcal{P}\) outputs \(commit(b)\) then every processor \(p_{j}\in\mathcal{P}\) outputs \(o_{j}\in\{commit(b),adopt(b)\}\).
* _(Property 2)_ If every \(p_{i}\in\mathcal{P}\) has the same input \(x\) then they output \(commit(x)\).
The properties are proved in the following:
**Property 1.** Assume that a player \(p_{i}\) outputs \(o_{i}=commit(0)\) (the case of \(o_{i}=commit(1)\) is handled symmetrically). This means that \(p_{i}\) received \(0\) in the second round from more than \(2n/3\) parties, i.e., \(|P_{i}^{(0)}|>2n/3\). Since less than \(n/3\) of these parties are corrupted in Round 2, this means that \(p_{i}\) has received \(0\) from more than \(n/3\) of the parties that were uncorrupted in Round 2, who therefore also sent \(0\) to every other \(p_{j}\). Hence in Round \(2\) every \(p_{j}\) has received \(0\) more than \(n/3\) times, i.e., \(|P_{j}^{(0)}|>n/3\). Additionally, since the message sent in Round 2 is the value set during Round 1 (i.e., the outcome of weak consensus), the outputs of parties from Round 1 must have been in \(\{0,\mbox{``n/v''}\}\) and therefore no party who is uncorrupted in Round 2 might send 1; hence, since there are at most \(n/3\) corrupted parties per round, \(|P_{j}^{(1)}|\leq n/3<|P_{j}^{(0)}|\) and every party sets \(b_{j}=0\).
**Property 2.** If all non-actively corrupted players have input \(0\) (the case of pre-agreement on \(1\) can be handled symmetrically) then by persistency of WeakConsensus everyone sets \(y_{i}=0\) in Round 1; hence in Round 2 every uncorrupted party sends \(0\), and therefore every party \(p_{i}\) receives \(0\) more than \(2n/3\) times and therefore outputs \(commit(0)\).
### Authenticated Byzantine
Again, it follows directly from Theorem 8 and the fact that a mobile adversary is at least as strong as a stationary that at most \(t<n/2\) Authenticated Byzantine mobile corruptions can be tolerated for commit-adopt. In the following we describe a construction that meets this bound.
We introduce a task that we call \(t\)-MAd-Authenticated-Byzantine-SM: Every processor \(p_{i}\) has an input value \(x(i)\) from some domain \(V\) and outputs a vector \(\vec{y}_{i}=(y(1)_{i},\ldots,y(n)_{i})\) such that each \(y(j)_{i}\in V\cup\{\bot\}\) and the following conditions hold:
* There exists a set of indices \(Ind\subseteq[n]\), with \(|Ind|\geq n-t\) such that for each \(j\in Ind,y(j)_{i}=x(j)\) for all \(i\in[n]\)
* if \(p_{i}\) and \(p_{j}\) output \(y(\ell)_{i}\neq\bot\) and \(y(\ell)_{j}\neq\bot\), respectively, then \(y(\ell)_{i}=y(\ell)_{j}\)
_Protocol \(t-MAdABSM(\mathcal{P},\vec{x}=(x(1),\ldots,x(n)))\)._
1. Every \(p_{i}\) sends \(x(i)\) to every \(p_{j}\), who denotes the received value as \(v(i)_{j}\) (\(v(i)_{j}\) is set to a default value, e.g., \(0\), if no value was received). Let \(\vec{v}_{j}=(v(1)_{j},\ldots,v(n)_{j})\).
2. Every \(p_{i}\) sends \(\vec{v}_{i}\) to every \(p_{j}\). \(p_{j}\) denotes the received vector by \(\vec{v}_{i\to j}=(v(1)_{i\to j},\ldots,v(n)_{i\to j})\), where \(\vec{v}_{i\to j}:=\bot^{n}\) if \(\vec{v}_{i\to j}\notin V^{n}\) was received.
3. Every \(p_{j}\) and every \(\ell\in[n]\): if for some \(b\in V\) and some set \(I_{j}\subseteq[n]\) with \(|I_{j}|\geq n-t\), \(v(\ell)_{i\to j}=b\) for all \(i\in I_{j}\) and \(v(\ell)_{i\to j}=\bot\) for all \(i\in[n]\setminus I_{j}\), then set \(y(\ell)_{j}=b\); else set \(y(\ell)_{j}=\bot\). Output \(\vec{y}_{j}=(y(1)_{j},\ldots,y(n)_{j})\).
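A fault-free skeleton of the construction is given below; for brevity the per-receiver delivery of the real protocol is collapsed into a single Round-2 echo vector per sender, so the sketch only illustrates the matching rule of Step 3.

```python
BOT = None  # stands for the "no value" symbol ⊥

def mad_absm(inputs, echo, t):
    """inputs[i]  : processor i's value x(i)
    echo[i][l] : the value for x(l) that processor i forwards in Round 2
                 (equal to inputs[l] whenever sender i was not tampered with)."""
    n = len(inputs)
    outputs = []
    for j in range(n):                               # processor j evaluates Step 3
        y_j = []
        for l in range(n):
            values = [echo[i][l] for i in range(n)]
            candidates = {v for v in values if v is not BOT}
            if len(candidates) == 1:                 # a single non-bottom value reported ...
                b = candidates.pop()
                support = sum(1 for v in values if v == b)
                y_j.append(b if support >= n - t else BOT)   # ... by at least n - t senders
            else:
                y_j.append(BOT)
        outputs.append(y_j)
    return outputs

# 3 processors, t = 1; the echo of p0's value by p2 was turned into bottom.
inputs = ["a", "b", "c"]
echo = [list(inputs), list(inputs), [BOT, "b", "c"]]
print(mad_absm(inputs, echo, t=1))   # every processor still recovers ["a", "b", "c"]
```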
**Theorem 12**.: _Protocol \(t-MAdABSM\) solves \(t\)-MAd-Authenticated-Byzantine-SM in the mobile \(t\)-MAd Authenticated Byzantine adversary model for \(t<n/2\)._
Protocol \(\mathsf{CA}(\mathcal{P},\vec{x}=(\mathsf{x}(1),\ldots,\mathsf{x}(n)))\).
1. Execute \(t-MAdABSM(\mathcal{P},\vec{x})\); every processor \(p_{j}\) denotes its output as \(\vec{z}_{j}\).
2. For each \(p_{j}\): if there is a value \(v\neq\perp\) such that a majority of the elements in \(\vec{z}_{j}\) equal \(v\), then set \(o(j):=\mathit{propose}.commit(v)\); else set \(o(j):=\mathit{propose}.no.commit\).
3. Execute \(t-MAdABSM(\mathcal{P},\bar{\mathfrak{o}}=(o(1),\ldots,o(n)))\); every processor \(p_{j}\) denotes its output as \(\vec{d}_{j}=(d(1)_{j},\ldots,d(n)_{j})\).
4. Every \(p_{j}\): if for some bit \(b\) the number of indices \(\ell\) such that \(d(\ell)_{j}=\mathit{propose}.commit(b)\) is more than \(n/2\), then output \(commit(b)\); else if for some bit \(b^{\prime}\): \(|\mathit{propose}.commit(b^{\prime})|>|\mathit{propose}.commit(1-b^{\prime})|>0\), then output \(\mathit{adopt}(b^{\prime})\); else output \(\mathit{adopt}(\mathsf{x}(j))\).
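Steps 2 and 4 for a single processor can be transcribed literally as follows; the vectors \(\vec{z}_{j}\) and \(\vec{d}_{j}\) are assumed to be the outputs of the two \(t-MAdABSM\) calls, and the example values are illustrative.

```python
BOT = None  # the bottom symbol

def ca_step2_step4(z_j, d_j, x_j, n):
    """Protocol CA, Steps 2 and 4, for one processor p_j.
    z_j: learned inputs (first t-MAdABSM call), d_j: learned proposals (second call)."""
    # Step 2: propose to commit v iff a majority of the entries in z_j equal v.
    proposal = "propose-no-commit"
    for v in (0, 1):
        if z_j.count(v) > n / 2:
            proposal = ("propose-commit", v)

    # Step 4: grade according to the learned proposals.
    counts = {b: d_j.count(("propose-commit", b)) for b in (0, 1)}
    for b in (0, 1):
        if counts[b] > n / 2:
            return proposal, ("commit", b)
    for b in (0, 1):
        if counts[b] > counts[1 - b] > 0:
            return proposal, ("adopt", b)
    return proposal, ("adopt", x_j)

# n = 4: p_j learned three 1-inputs and later three propose-commit(1) entries.
print(ca_step2_step4([1, 1, BOT, 1], [("propose-commit", 1)] * 3 + [BOT], x_j=0, n=4))
# -> (('propose-commit', 1), ('commit', 1))
```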
Theorem 13: _Protocol \(\mathsf{CA}\) solves binary CA in the mobile \(t\)-MAd Authenticated Byzantine adversary model for \(t<n/2\)._
Proof of Theorem 12 (sketch).: Consider a processor \(p_{i}\) which is not corrupted in line 1 (i.e. the first round). All processors \(p_{j}\) will have \(v(i)_{j}=\mathsf{x}(i)\); consequently, since only a minority is corrupted in the second round (line 2), a majority will send \(\mathsf{x}(i)\) in the second round. On the other hand, the processors corrupted in the second round cannot forge \(\mathsf{x}(i)\); hence their value for \(p_{i}\) will be \(\mathsf{x}(i)\) or \(\perp\).
Now let \(p_{i}\) be a processor corrupted in the first round. To output a value different from \(\perp\) (line 3), a processor \(p_{j}\) must have received the same value \(v(i)_{k}\) from a set of indices \(k\) whose cardinality is at least a majority. At most a minority of this majority was corrupted, thus this same value is the only candidate value (other than \(\perp\)) that any processor can output as \(y(i)_{j}\), which is therefore the same for all \(j\).
\(\Box\)
Proof of Theorem 13 (sketch).: First we show that if all start with the same value they all commit to this value. By the definition of \(t\)-MAd-Authenticated-Byzantine-SM, all processors will have a set \(I\) with \(|I|>n/2\) of indices that return an input value, and in this case these values are all the same. Consequently, all processors will propose to commit this value \(v\). Hence, by the property of \(t\)-MAd-Authenticated-Byzantine-SM, which is used a second time, they will all output \(commit(v)\) for this value.
To complete the proof, we argue that if some processor \(p_{i}\) outputs \(commit(v)\) then everyone outputs at least \(\mathit{adopt}(v)\). Notice that only a single value \(v\) can appear in the \(\mathit{propose}.commit(v)\) outputs of Line 2. This is because a propose-commit of \(v\) by \(p_{i}\) requires that a majority of the indices in its output carry the value \(v\). By the property of \(t\)-MAd-Authenticated-Byzantine-SM, no other processor's output conflicts with a different value on these majority indices, and therefore a majority for another value at any other processor is impossible.
Assume now \(p_{i}\) committed \(v\). Then it returned from Line 3 with an output vector holding a majority of \(\mathit{propose}.commit(v)\) entries; let the number of such entries be \(m\). Since processors have no conflict on their output entries returning from \(t\)-MAd-Authenticated-Byzantine-SM, it is easy to see that if another processor \(p_{j}\) has a \(\perp\) in an entry in which \(p_{i}\) has \(\mathit{propose}.commit(v)\), this is the result of a corruption in the first round of the \(t\)-MAd-Authenticated-Byzantine-SM. Thus, for each index in which \(p_{i}\) has \(\mathit{propose}.commit(v)\), for \(p_{j}\) to miss it the adversary has to spend a corruption of the first round. Suppose \(p_{j}\) has \(q\) indices in which \(p_{i}\) output \(\mathit{propose}.commit(v)\) and it output \(\perp\); then there are at most another \(t-q\) indices the adversary can corrupt to hold \(\mathit{propose}.commit(\bar{v})\). Thus, the number of \(\mathit{propose}.commit(\bar{v})\) entries at \(p_{j}\) is at most \(t-q\), while the number of \(\mathit{propose}.commit(v)\) entries it has is at least \(m-q\). Since \(m>t\), it will adopt \(v\).
## 6. Conclusions
We have examined the Stationary/Mobile distinction as a replacement for Synchronous/Asynchronous and shown that the former, when considered in the MAd adversary model, is indulgent for common models, unlike its counterpart.
We contend that there is a much richer, clean distributed theory when we consider the MAd adversary in the context of Stationary/Mobile, as conjectured in the paper. Showing that our case-by-case analysis was superfluous is a beautiful challenge. The same goes for changing binary CA to multi-valued CA in the mobile benign omission case.
Is beautiful theory with no current application worth developing? It is called Basic Research (BR) and we still root for BR.
In (Belle and Gafni, 2010), Dolev and Gafni analyse mixtures of Stationary and Mobile faults. It is interesting to re-examine [] in light of this paper.
Finally, it will be nice to identify other natural "pairs" for which indulgence can be defined.
|
2307.00286 | CMA-ES for Post Hoc Ensembling in AutoML: A Great Success and
Salvageable Failure | Many state-of-the-art automated machine learning (AutoML) systems use greedy
ensemble selection (GES) by Caruana et al. (2004) to ensemble models found
during model selection post hoc. Thereby, boosting predictive performance and
likely following Auto-Sklearn 1's insight that alternatives, like stacking or
gradient-free numerical optimization, overfit. Overfitting in Auto-Sklearn 1 is
much more likely than in other AutoML systems because it uses only low-quality
validation data for post hoc ensembling. Therefore, we were motivated to
analyze whether Auto-Sklearn 1's insight holds true for systems with
higher-quality validation data. Consequently, we compared the performance of
covariance matrix adaptation evolution strategy (CMA-ES), state-of-the-art
gradient-free numerical optimization, to GES on the 71 classification datasets
from the AutoML benchmark for AutoGluon. We found that Auto-Sklearn's insight
depends on the chosen metric. For the metric ROC AUC, CMA-ES overfits
drastically and is outperformed by GES -- statistically significantly for
multi-class classification. For the metric balanced accuracy, CMA-ES does not
overfit and outperforms GES significantly. Motivated by the successful
application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES
from overfitting for ROC AUC. We propose a method to normalize the weights
produced by CMA-ES, inspired by GES, that avoids overfitting for CMA-ES and
makes CMA-ES perform better than or similar to GES for ROC AUC. | Lennart Purucker, Joeran Beel | 2023-07-01T09:47:59Z | http://arxiv.org/abs/2307.00286v1 | # CMA-ES for Post Hoc Ensembling in AutoML: A Great Success and Salvageable Failure
###### Abstract
Many state-of-the-art automated machine learning (AutoML) systems use greedy ensemble selection (GES) by Caruana et al. (2004) to ensemble models found during model selection post hoc. Thereby, boosting predictive performance and likely following Auto-Sklearn 1's insight that alternatives, like stacking or gradient-free numerical optimization, overfit. Overfitting in Auto-Sklearn 1 is much more likely than in other AutoML systems because it uses only low-quality validation data for post hoc ensembling. Therefore, we were motivated to analyze whether Auto-Sklearn 1's insight holds true for systems with higher-quality validation data. Consequently, we compared the performance of covariance matrix adaptation evolution strategy (CMA-ES), state-of-the-art gradient-free numerical optimization, to GES on the 71 classification datasets from the AutoML benchmark for AutoGluon. We found that Auto-Sklearn's insight depends on the chosen metric. For the metric ROC AUC, CMA-ES overfits drastically and is outperformed by GES - statistically significantly for multi-class classification. For the metric balanced accuracy, CMA-ES does not overfit and outperforms GES significantly. Motivated by the successful application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES from overfitting for ROC AUC. We propose a method to normalize the weights produced by CMA-ES, inspired by GES, that avoids overfitting for CMA-ES and makes CMA-ES perform better than or similar to GES for ROC AUC.
Lennart Purucker
Joeran Beel
## 1 Introduction
Auto-Sklearn (Feurer et al., 2015) was the first automated machine learning (AutoML) system to discover that building an ensemble of models found during model selection is possible in an efficient manner and superior in predictive performance to the single best model. Afterwards, several other AutoML systems also build an ensemble _post hoc_: AutoGluon (Erickson et al., 2020), Auto-Pytorch (Mendoza et al., 2018; Zimmer et al., 2021), MLJAR (Plonska and Plonski, 2021), and H2O AutoML (LeDell and Poirier, 2020) all implemented _post hoc ensembling_.
Besides H2O AutoML, all of these systems implemented _greedy ensemble selection_ (GES) (Caruana et al., 2004, 2006), a greedy search for a weight vector to aggregate the predictions of base models. In AutoML systems, GES is trained using the base models' predictions on the _validation data_, which are computed while evaluating a base model during model selection. The frequent usage of GES likely follows Auto-Sklearn's reported insight that alternatives like _stacking_(Wolpert, 1992) or gradient-free numerical optimization overfit and are more costly than GES.
Auto-Sklearn 1, by default, only has limited validation data for post hoc ensembling, that is, a 33% hold-out split of the training data. We deem this to be low-quality validation data because, depending on the dataset, 33% are not enough instances to avoid overfitting while training GES. Hence, we were motivated to analyze if Auto-Sklearn's insight also holds true for an AutoML system with higher-quality validation data, _e.g._, AutoGluon with \(n\)-repeated \(k\)-fold cross-validation. Moreover, we were motivated to focus on gradient-free numerical optimization instead of stacking. Stacking is generally well-known in ensembling for machine learning and is used by H2O AutoML for post hoc ensembling. In contrast, gradient-free numerical optimization has not been used so far.
Thus, we compare the performance of GES to _covariance matrix adaptation evolution strategy_ (CMA-ES) (Hansen and Auger, 2014; Hansen, 2016), state-of-the-art gradient-free numerical optimization (Hansen et al., 2010; Szymkiewicz, 2018; Li et al., 2020). We chose CMA-ES due to its widespread usage in numerical optimization (Li et al., 2020). Moreover, CMA-ES's update is efficient and therefore enables fast training in post hoc ensembling; similar to GES's training. Furthermore, the function evaluation in post hoc ensembling, i.e., calculating the score of aggregated predictions, takes seconds (Feurer et al., 2015). Thus, we disregarded Bayesian optimization, which is appropriate for tasks with expensive function evaluation such as hyperparameter optimization (Lan et al., 2022).
In this study, we aim to boost the predictive performance as much as possible with post hoc ensembling. Note that GES selects a small ensemble, while methods like gradient-free numerical optimization or stacking produce an ensemble that includes all base models. Thus, the inference time and size of the final model are larger for the latter two than for GES.
Our first contribution is an application of CMA-ES for AutoGluon on the 71 classification datasets from the AutoML Benchmark (Gijsbers et al., 2022). Thereby, we show that Auto-Sklearn's insight w.r.t. overfitting of gradient-free numerical optimization depends on the chosen metric. We contradict the insight for the metric _balanced accuracy_ by showing that CMA-ES statistically significantly outperforms GES. And we confirm the insight for the metric _ROC AUC_ by showing that GES outperforms CMA-ES due to overfitting.
As a follow-up, our second contribution is a method to avoid overfitting for CMA-ES. Motivated by the successful application of CMA-ES for balanced accuracy, we explored methods to stop CMA-ES from overfitting to _salvage_ CMA-ES for ROC AUC. We identified the chosen method to normalize the ensemble's prediction probabilities as the key to avoiding overfitting. With this knowledge, we propose a novel normalization method, inspired by GES's implicit constraints during optimization, that makes CMA-ES perform as well as GES and avoids overfitting for ROC AUC. Interestingly, our normalization method also enables us to keep the size of the ensemble small.
Our code and data are publicly available: see Appendix E for details.
## 2 Related Work
Besides Auto-Sklearn 1's (Feurer et al., 2015) statement related to post hoc ensembling, only H2O AutoML names theoretical guarantees (van der Laan et al., 2007) as the reason for using stacking, but does not comment on GES. In general, details about post hoc ensembling in publications about AutoML systems were only a short comment without experiments or a reference to Auto-Sklearn 1 (Feurer et al., 2015; Mendoza et al., 2018; Erickson et al., 2020; LeDell and Poirier, 2020). We are only aware of the work by Purucker and Beel (2022), which proposed a first benchmark and framework for post hoc ensembling. The results in their Appendix also showed that GES can outperform stacking. To the best of our knowledge, no other work on post hoc ensembling for AutoML exists.
CMA-ES was previously applied to machine learning problems like hyperparameter optimization (Nomura et al., 2021; Loshchilov and Hutter, 2016) or feature weighting (Tasci et al., 2018)1. However, we found no work that used CMA-ES to directly optimize the weights of an ensemble. Likewise, we have found no work that applies normalization to the solutions produced by CMA-ES nor comparable machine learning methods that apply normalization in this way to combat overfitting.
Footnote 1: To the best of our knowledge, this work is not available in English. We read a machine-translated version.
## 3 Application of CMA-ES for Post Hoc Ensembling
In our application of CMA-ES for post hoc ensembling, we search for an optimal weight vector \(W=(w_{1},...,w_{m})\) to aggregate pool \(P\) of \(m\) base models that minimizes a user-defined loss \(L(P,W)\). Thereby, \(L\) aggregates the predictions of models in \(P\) by taking the \(W\)-weighted arithmetic mean.
Hence, we employ CMA-ES, as implemented in pycma (Hansen et al., 2019), with default values to find \(W\) by minimizing \(L\). Following GES's first iteration, we set the initial solution \(x_{0}\) to be the
weight vector representing the single best model, that is, the weight for the single best model is one while all other models are weighted zero. The initial standard deviation is 0.2 following the intuition that a good weight vector might be close to the initial solution and that the granularity of weights can be small, e.g., between 0 and 1, like in GES.
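The following minimal sketch illustrates this setup with pycma. The inputs `val_probas` (base-model validation predictions of shape `[m, n_samples, n_classes]`), `y_val`, and `loss_fn` are placeholders for data and metric that are not specified here, and the aggregation is written as the simple \(W\)-weighted combination described above.

```python
import numpy as np
import cma  # pycma

def fit_cmaes_weights(val_probas, y_val, loss_fn, budget_factor=50):
    """Search an ensemble weight vector W with CMA-ES (sketch of the setup in Section 3)."""
    m = val_probas.shape[0]  # number of base models

    def ensemble_loss(weights):
        # L(P, W): loss of the W-weighted combination of the base models' predictions.
        aggregated = np.tensordot(np.asarray(weights), val_probas, axes=1)
        return loss_fn(y_val, aggregated)

    # Initial solution x0: weight 1 for the single best base model, 0 for all others.
    single_losses = [ensemble_loss(np.eye(m)[i]) for i in range(m)]
    x0 = np.eye(m)[int(np.argmin(single_losses))]

    es = cma.CMAEvolutionStrategy(x0, 0.2)  # initial standard deviation 0.2, pycma defaults otherwise
    evaluations = 0
    while not es.stop() and evaluations < m * budget_factor:  # evaluation budget; Section 3.1 uses m * 50
        candidates = es.ask()
        es.tell(candidates, [ensemble_loss(w) for w in candidates])
        evaluations += len(candidates)
    return es.result.xbest
```

Note that `loss_fn` must be a loss to minimize; a score such as ROC AUC would be negated first.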
### Experiments: CMA-ES vs. GES
We compared CMA-ES to GES w.r.t. ROC AUC following the AutoML Benchmark (Gijsbers et al., 2022). ROC AUC requires prediction probabilities and is independent of a decision threshold that would transform prediction probabilities into labels. We use macro average one-vs-rest ROC AUC for multiclass. We complemented the comparison by also evaluating w.r.t. balanced accuracy, which requires predicted labels and, thus, depends on a decision threshold.
For a threshold-dependent metric, the prediction of CMA-ES is, in our application, the class with the highest value after aggregating the prediction probabilities with the \(W\)-weighted mean. For a threshold-independent metric, we transform the aggregated probabilities for each instance using the softmax function, _i.e._, we treat the aggregated probabilities of each class as decision functions and take their softmax. Otherwise, the aggregated probabilities would not represent prediction probabilities, as \(W\) can have negative or positive values of any granularity.
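As a small sketch of these two prediction modes (the aggregated values `agg` below are made up):

```python
import numpy as np
from scipy.special import softmax

# Hypothetical W-weighted aggregation of base-model probabilities: 3 instances, 2 classes.
agg = np.array([[0.4, 0.7],
                [1.2, -0.1],
                [0.3, 0.3]])

# Threshold-dependent metric (e.g. balanced accuracy): predict the class with the highest value.
labels = agg.argmax(axis=1)

# Threshold-independent metric (e.g. ROC AUC): treat the aggregated values as decision
# functions and map them to prediction probabilities with a per-instance softmax.
probas = softmax(agg, axis=1)
```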
To compare the ensembling methods, we obtained base models and their validation data with AutoGluon (Erickson et al., 2020) for each fold of the 71 classification datasets from the AutoML benchmark (AMLB) (Gijsbers et al., 2022) - for both metrics. Then, per fold, we trained the ensemble methods on the validation data, i.e., search for \(W\), and scored them on validation and test. The final validation/test score of a method for a dataset is the average over the 10 folds.
Following the AMLB, we ran AutoGluon for 4 hours with 8 cores (AMD EPYC 7452 CPU) and 32 GB of memory. We increased the memory for several datasets to 64 or 128 GB to avoid cases where insufficient memory would have made it impossible to produce multiple base models. In the end, AutoGluon produced between 2 and 24 base models, see Appendix F for details per dataset and metric.
We used the same resources and hardware to train and evaluate the ensemble methods. However, instead of training ensemble methods for 4 hours, we follow Auto-Sklearn's default and stop training GES after 50 iterations. This results in \(m*50\) total evaluations of \(L\) by GES. Therefore, we terminated CMA-ES after \(m*50\) evaluations of \(L\).
We included the single best base model (SingleBest) in the comparison as a baseline. To evaluate the statistical difference between the methods, we perform a Friedman test with a Nemenyi post hoc test (\(\alpha=0.05\)), following the AMLB. See Appendix I.1 for more details on the statistical tests.
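A sketch of this evaluation protocol, assuming the `scikit-posthocs` package is available for the Nemenyi test (the score matrix below is made up):

```python
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# Hypothetical test scores: one row per dataset, one column per method
# (e.g. SingleBest, GES, CMA-ES).
scores = np.array([[0.81, 0.84, 0.85],
                   [0.70, 0.73, 0.72],
                   [0.92, 0.93, 0.95],
                   [0.66, 0.69, 0.70]])

stat, p_value = friedmanchisquare(*(scores[:, j] for j in range(scores.shape[1])))
if p_value < 0.05:
    # Pairwise p-values of the Nemenyi post hoc test (rows/columns index the methods).
    nemenyi_p = sp.posthoc_nemenyi_friedman(scores)
```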
### Results: CMA-ES vs. GES
We split the results for binary and multi-class classification in all our evaluations following the AutoML Benchmark (Gijsbers et al., 2022). Figure 1 shows the mean rank and results of the statistical test with critical difference (CD) plots. The Friedman tests were significant in all our experiments. We observe that CMA-ES is statistically significantly better than GES for balanced accuracy but fails to perform similarly well for ROC AUC.
To analyze the impact of overfitting on this outcome, we inspect the change of the mean rank of CMA-ES when switching from validation to test data for both metrics, see Table 1. A detailed overview for all methods can be found in Appendix G.1. While the single best is always ranked last, GES overtakes CMA-ES when switching from validation to test data for ROC AUC. Notably, CMA-ES has a mean rank of almost 1 for validation data in 3 out of 4 cases.
On validation data, GES is only competitive for multi-class ROC AUC, where it has a mean rank of 1.6. Nevertheless, GES has a larger distance to the single best on validation for balanced accuracy than it has for test data with a mean rank of \(\sim\)2 against the single best's \(\sim\)3.
In summary, we conclude that Auto-Sklearn's insight w.r.t. overfitting does not generalize to an AutoML system with higher-quality validation data, _i.e._, AutoGluon, for _balanced accuracy_. In
contrast, _the insight holds for ROC AUC._ Furthermore, we observe that CMA-ES is able to achieve peak performance for ROC AUC on validation data.
## 4 Normalization to Combat Overfitting

The results we just presented motivated us to salvage CMA-ES for ROC AUC. Due to its good performance for ROC AUC and its wide adaptation by AutoML systems, we decided to analyze GES to determine how to avoid overfitting. As a result, we found two properties that inspired our approach to salvage CMA-ES for ROC AUC. This section describes why and how we use normalization to combat overfitting for a threshold-independent metric like ROC AUC. Since our approach is inspired by GES, we start with preliminaries regarding GES and its properties.
### Preliminaries
Greedy ensemble selection with replacement (Caruana et al., 2004, 2006) performs an iterative greedy search to build a list of (repeated) base models, the ensemble \(E\), that minimizes a user-defined loss function. In each iteration, the base model minimizing the loss, when added to \(E\), is _selected_ to be part of \(E\). To produce predictions and evaluate any \(E\), the (repeated) predictions of all base
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Metric & Task Type & Mean \(Rank_{Validation}\) & Mean \(Rank_{Test}\) & Absolute Rank (Val \(\rightarrow\) Test) \\ \hline Balanced Accuracy & Binary & 1.00 & 1.12 & 1.0 \(\rightarrow\) 1.0 \\ Balanced Accuracy & Multi-class & 1.03 & 1.25 & 1.0 \(\rightarrow\) 1.0 \\ ROC AUC & Binary & 1.02 & 1.83 & 1.0 \(\rightarrow\) 2.0 \\ ROC AUC & Multi-class & 1.42 & 2.12 & 1.0 \(\rightarrow\) 2.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean rank change from validation to test data for CMA-ES compared to GES and SingleBest.
Figure 1: **CD Plots Comparing GES and CMA-ES: The mean rank (lower is better) of a method is its lineβs position on the axis. Methods connected by a bar are not significantly different.**
models in \(E\) are aggregated with the arithmetic mean. Taking the arithmetic mean of \(E\) weights base models that exist multiple times higher. Hence, given \(E\), we can compute a weight vector. Assuming we run GES for \(N\) iterations2, then \(|E|=N\) and we compute the weight vector using:

\[W^{pDisc}=\left[\frac{countIn(p_{i},E)}{N}\ \middle|\ p_{i}\in P\right]. \tag{1}\]

Footnote 2: We always denote \(N\) as the number of the iteration the final \(E\) was found in. Depending on the implementation of GES, the final \(E\) does not need to be from the final iteration.

While analysing GES, we found two constraints of the weight vector \(W^{pDisc}\) that we believe to be essential for its performance. That is, \(W^{pDisc}\) is _pseudo-discrete_ and _sparse_. Both properties are only _implicitly_ respected by GES and were, to the best of our knowledge, never formally defined.
**Pseudo-Discrete**. We call \(W^{pDisc}\) pseudo-discrete because one can transform every weight vector produced by GES into a discrete count of how often a base model has been selected. This can be done by multiplying \(W^{pDisc}\) with \(N\), reversing Equation 1. In fact, every weight vector produced by GES is in the set \(\mathcal{G}=\{W^{\prime}\mid W^{\prime}\in H(N)\text{ and }\sum_{i=1}^{m}w_{i}=1\}\) with \(H(N)\) the \(m\)-fold Cartesian product of \(\{0,1/N,2/N,...,1\}\):
\[H(N)=\{0,1/N,2/N,\ldots,1\}\times\cdots\times\{0,1/N,2/N,\ldots,1\}. \tag{2}\]
In other words, every weight \(w_{i}\in W^{pDisc}\) can be expressed as a positive fraction with denominator \(N\), and the weight vector sums to 1. This follows from GES iteratively building a list of base models \(E\) and calculating the final weight vector with Equation 1.
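As a small illustration of Equation 1 with a hypothetical pool \(P\) and ensemble list \(E\):

```python
from collections import Counter

P = ["m1", "m2", "m3", "m4"]            # pool of base models (hypothetical)
E = ["m2", "m2", "m1", "m2", "m4"]      # ensemble list built by GES after N = 5 iterations
N = len(E)

counts = Counter(E)
W_pdisc = [counts[p] / N for p in P]    # Equation 1
# W_pdisc = [0.2, 0.6, 0.0, 0.2]; every weight is a fraction with denominator N and the vector sums to 1.
```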
We would like to remark that this formulation of GES is very similar to Mallows' model average (MMA) (Hansen, 2007, 2008; Le and Clarke, 2022) and that GES might share MMA's asymptotic guarantees for regression if \(L\) is the squared error (Le and Clarke, 2022).
**Sparse**. \(W^{pDisc}\) is sparse, that is, a weight vector where many models are assigned zero weight - as intended for an _ensemble selection_ approach (Tsoumakas et al., 2009). To the best of our knowledge, a guarantee for sparseness was never formally introduced or proven for (greedy) ensemble selection, cf. (Caruana et al., 2004, 2006; Tsoumakas et al., 2009). Here, we shortly provide an argument for why it is likely that GES produces a sparse weight vector:
GES only adds new base models to \(E\) if they reduce the loss. Hence, a dense weight vector would require at least \(m\) iterations in which adding a new base model reduces the loss more than adding an existing base model again (increasing its weight). As a result, for appropriate values of \(m\) and \(N\), it is unlikely that enough iterations happened such that each model was able to reduce the loss once. Auto-Sklearn, for example, uses \(m=50\) and \(N=50\) by default. Moreover, once \(E\) becomes large, the changes to the aggregated prediction that are induced by adding a new base model are minimal. Thus, it also becomes less likely that the changes result in a different loss. Additionally, the larger \(E\) is, the more likely GES has reached a (local) optimum, which can not be improved upon by adding new models. In short, the iterative greedy approach to add models to \(E\) likely makes \(W^{pDisc}\) sparse.
### Motivation
Since all solutions produced by GES are pseudo-discrete and (likely) sparse, and since GES does not seem to overfit, we hypothesized that both properties might help to avoid overfitting.
Note, the properties can be seen as constraints. They constrain the weight vector to be sparse, sum to 1, and contain only values such that \(0\leq w_{i}\leq 1\). In contrast, our application of CMA-ES uses no such constraints. By default, CMA-ES produces a continuous and dense vector which does not need to sum to 1 and may contain negative or positive values of any granularity.
Thus, our first idea was to constrain the optimization process of CMA-ES such that it would produce results that match the constraints of GES. However, we found that once the same constraints
are introduced, CMA-ES often violates the constraints; making CMA-ES inefficient and often leading to an endless loop due to rejection sampling. In other words, we were not able to make CMA-ES produce solution vectors that fulfill all constraints of GES. In general, constraining CMA-ES is also not trivial (Biedrzycki, 2020), and we leave more sophisticated approaches to constrain CMA-ES for post hoc ensembling, like methods based on repair-and-inject or penalization (Hansen, 2016) or with relaxed constraints, to future work.
Instead of constraining the optimization process of CMA-ES, we moved to adding the constraints directly to the weight vector when they are evaluated, following a concept observed from GES. That is, we observed that while the constraints of GES are an implicit result of the algorithm as defined by Caruana et al. (2004), they manifested explicitly only when one computes the weight vector with Equation 1. The optimization loop of GES, _i.e._, iteratively building \(E\), does not explicitly consider these constraints, but only greedily minimizes a user-defined loss. In other words, the optimizer is only implicitly constrained _by applying constraints during the computation of the weight vector; before evaluating the vector's performance_.
In detail, every time GES computes the loss for an ensemble \(E\), it first transforms \(E\) into \(W^{pDisc}\) using Equation 1. Thereby, it applies the constraints that the resulting weight vector must sum to 1, be sparse, and satisfy \(0\leq w_{i}\leq 1\). Then, \(L(P,W^{pDisc})\) is returned as the loss of \(E\). At this point, it becomes clear that changing Equation 1 leads to different constraints; the loss of \(E\) could change without touching the optimization loop of GES.
As a result, we were motivated to apply the same concept to CMA-ES by normalizing the weight vector before we aggregate the predictions of the base models. Thus, changing the loss associated with a weight vector proposed by CMA-ES outside of its optimization process. In contrast, our application in Section 3 normalized the aggregated predictions for ROC AUC using softmax - we normalized _after aggregation_. Now, however, we propose to normalize _before aggregation_ as in GES. In turn, this also changes the optimization process of CMA-ES, _e.g._, the parameter update, because a weight vector might have a different loss depending on normalizing before or after aggregation.
### Normalization Methods
We propose three distinct normalization methods. Two of the methods we propose are based on the concept of GES such that the last proposed method tries to simulate Equation 1 fully.
**1) Softmax (CMA-ES-Softmax)**. Initially, we propose a simple alternative to our previous usage of CMA-ES by moving the (non-linear) softmax before the aggregation. That is, we normalize the weight vector \(W\) by taking its softmax: for a weight \(w_{i}\in W\), we calculate

\[w_{i}^{s}=\frac{\exp(w_{i})}{\sum_{j=1}^{m}\exp(w_{j})}, \tag{3}\]

resulting in \(W^{s}=(w_{1}^{s},...,w_{m}^{s})\) with \(\sum_{j=1}^{m}w_{j}^{s}=1\) and \(0\leq w_{j}^{s}\leq 1\) for \(w_{j}^{s}\in W^{s}\).
**2) Softmax & Implicit GES Normalization (CMA-ES-ImplicitGES)**. Next, we propose to re-normalize \(W^{s}\) with the aim of producing an equivalent to a pseudo-discrete weight vector \(W^{pDisc}\); simulating GES's \(\mathcal{G}\) (see Equation 2). Therefore, we round each value of \(W^{s}\) to the nearest fraction with denominator \(N_{hyp}\) producing a _rounding-discrete_ weight vector \(W^{rDisc}\). Then, \(N_{hyp}\) represents the number of _hypothetical iterations_ for a simulated \(\mathcal{G}\). We set \(N_{hyp}=50\), similar to GES.
We produce \(W^{rDisc}=(w_{1}^{rDisc},...,w_{m}^{rDisc})\) by multiplying each \(w_{i}^{s}\) with \(N_{hyp}\) and rounding each element to the nearest integer afterwards; rounding up for values larger than 0.5. Therefore, we first compute the integer vector \(R=(r_{1},...,r_{m})\) using \(r_{i}=\lfloor w_{i}^{s}*N_{hyp}\rceil\). Note, \(R\) can be thought of as a vector of repetitions where \(r_{i}\) denotes how often a model has been repeated in a hypothetical list of repeated base models \(E_{hyp}\). That is, \(E_{hyp}\) is connected to \(W^{rDisc}\) like an \(E\) to its \(W^{pDisc}\). Hence, we can compute \(W^{rDisc}\) using \(R\), paralleling Equation 1:
\[W^{rDisc}=\left[\frac{r_{i}}{\sum_{j=1}^{m}r_{j}}\mid r_{i}\in R\right]. \tag{4}\]
\(W^{rDisc}\) sums to 1, and each element is between 0 and 1. Interestingly, we found that this approach also _implicitly trims_ base models, as the nearest fraction can be \(\frac{0}{N_{hyp}}\) such that the method assigns zero weight to base models in these cases.
**3) Softmax & Explicit GES Normalization (CMA-ES-ExplicitGES).** Finally, we propose to explicitly trim base models and perfect the simulation of Equation 1. We can explicitly trim base models based on \(N_{hyp}\). We found that a weight \(w^{s}_{j}\) is set to zero by rounding if \(w^{s}_{j}*N_{hyp}\leq 0.5\). If we reformulate the inequality to \(w^{s}_{j}\leq 0.5*\frac{1}{N_{hyp}}\), we see that this parallels GES, where the number of iterations determines the minimal weight a model can be assigned, _i.e._, \(\frac{1}{N}\).
Furthermore, we found that CMA-ES-ImplicitGES does not simulate GES sufficiently. We observed that rounding may result in \(\sum_{j=1}^{m}r_{j}\neq N_{hyp}\). That is, the total number of repetitions in \(R\) did not match the number of simulated iterations nor the (hypothetical) length of \(E_{hyp}\). \(R\) was supposed to relate to \(E_{hyp}\) for \(W^{rDisc}\) like an \(E\) to its \(W^{pDisc}\). Yet for GES, it holds that \(|E|=N\) while \(|E_{hyp}|\neq N_{hyp}\) can happen in CMA-ES-ImplicitGES.
Considering both, we implemented the third method, shown in Algorithm 1. First, we compute \(W^{s}\) and trim any base model smaller than \(\frac{0.5}{N_{hyp}}\) (Line 2). If we set all weights to zero, we fall back to an unweighted average (Line 5). Second, we round to the nearest integer, producing \(R^{\prime}\) (Line 8).
Next, we set \(R^{\prime\prime}=R^{\prime}\) and modify \(R^{\prime\prime}\) to achieve \(\sum_{j=1}^{m}r^{\prime\prime}_{j}=N_{hyp}\). We want to keep the distribution of \(R^{\prime\prime}\) as close as possible to the distribution of \(R^{\prime}\). Hence, we keep the relative distances between the individual elements in \(R^{\prime}\) and \(R^{\prime\prime}\) similar.
If \(\sum_{j=1}^{m}r^{\prime}_{j}>N_{hyp}\), we decrement elements in \(R^{\prime\prime}\) by 1 until \(\sum_{j=1}^{m}r^{\prime\prime}_{j}=N_{hyp}\) (Line 11). We decrement in order from lowest to highest valued element in \(R^{\prime}\), that is, lowest to highest weighted base model in the resulting weight vector. Thus, first trimming base models with only one repetition. Finally, if \(\sum_{j=1}^{m}r^{\prime}_{j}-N_{hyp}\) is large enough, we decrement the most repeated elements. Note, due to rounding, we must decrement each element once in the worst case. If \(\sum_{j=1}^{m}r^{\prime}_{j}<N_{hyp}\), we have to increase the value of elements in \(R^{\prime\prime}\). To keep the relative distances similar, we equally distribute \(N_{hyp}-\sum_{j=1}^{m}r^{\prime}_{j}\) increments between all non-zero elements in \(R^{\prime\prime}\) (Line 13). Finally, \(R^{\prime\prime}\) is transformed into a weight vector with Equation 4.
```
Require: Weight vector \(W^{\prime}\) of length \(m\), the number of hypothetical iterations \(N_{hyp}\)
Ensure: Weight vector \(W\)
1: \(W\gets W^{s}\) computed with Equation 3 using \(W^{\prime}\)  \(\triangleright\) Apply softmax.
2: for \(i=1\) to \(m\) do  \(\triangleright\) Trim base models.
3:   if \(w_{i}\leq\frac{0.5}{N_{hyp}}\) then
4:     \(w_{i}\gets 0\)
5: if \(\sum_{i=1}^{m}w_{i}=0\) then  \(\triangleright\) Fallback to unweighted average.
6:   return \((\frac{1}{m},\ldots,\frac{1}{m})\)
7: \(R^{\prime}\leftarrow[0\cdots 0]\)  \(\triangleright\) Initialize an empty vector of repetitions.
8: for \(i=1\) to \(m\) do  \(\triangleright\) Round to nearest integer.
9:   \(r^{\prime}_{i}\leftarrow\lfloor w^{s}_{i}*N_{hyp}\rceil\)
10: \(R^{\prime\prime}\gets R^{\prime}\)
11: if \(\sum_{j=1}^{m}r^{\prime}_{j}>N_{hyp}\) then
12:   \(R^{\prime\prime}\leftarrow\) Decrement elements from lowest to highest valued element in \(R^{\prime\prime}\) by 1 until \(\sum_{j=1}^{m}r^{\prime\prime}_{j}=N_{hyp}\)
13: if \(\sum_{j=1}^{m}r^{\prime}_{j}<N_{hyp}\) then
14:   \(R^{\prime\prime}\leftarrow\) Equally distribute \(N_{hyp}-\sum_{j=1}^{m}r^{\prime}_{j}\) increments between all non-zero elements in \(R^{\prime\prime}\)
15: return \(W\) computed with Equation 4 using \(R^{\prime\prime}\).
```
**Algorithm 1** The Procedure for CMA-ES-ExplicitGES
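A compact NumPy sketch of our reading of Algorithm 1 (the function name and the rounding details are illustrative, not the authors' code):

```python
import numpy as np

def explicit_ges_normalization(w_raw, n_hyp=50):
    """Normalize a raw CMA-ES weight vector following Algorithm 1 (CMA-ES-ExplicitGES)."""
    m = len(w_raw)
    w = np.exp(w_raw - np.max(w_raw))
    w = w / w.sum()                              # Equation 3: softmax
    w[w <= 0.5 / n_hyp] = 0.0                    # trim base models (Lines 2-4)
    if w.sum() == 0:                             # fallback to unweighted average (Lines 5-6)
        return np.full(m, 1.0 / m)
    r = np.ceil(w * n_hyp - 0.5).astype(int)     # round to nearest integer, 0.5 rounds down (Lines 7-9)
    diff = int(r.sum()) - n_hyp
    if diff > 0:                                 # too many repetitions: decrement (Lines 11-12)
        order = [i for i in np.argsort(r) if r[i] > 0]
        for i in order[:diff]:                   # from lowest to highest weighted base model
            r[i] -= 1
    elif diff < 0:                               # too few repetitions: distribute increments (Lines 13-14)
        nonzero = np.flatnonzero(r)
        for k in range(-diff):
            r[nonzero[k % len(nonzero)]] += 1
    return r / r.sum()                           # Equation 4

# Example: normalize one hypothetical CMA-ES candidate.
w = explicit_ges_normalization(np.array([2.0, 0.5, -1.0, 0.1]))
```

Within the CMA-ES loop of Section 3, this normalization would be applied to each candidate weight vector before the base-model predictions are aggregated, so the loss reported to CMA-ES is that of the normalized vector.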
### Comparing Normalization Methods
We use CMA-ES-ExplicitGES for the final evaluation below because it is the only approach that is in line with GES's concepts. Nevertheless, here, we provide an additional comparison of the three
normalization methods on the same data as used in Section 3.1. We run CMA-ES, as described above, with the three different methods for normalization on the data from AutoGluon for ROC AUC. We ignore the threshold-dependent balanced accuracy because CMA-ES is not affected by overfitting for balanced accuracy. Besides normalization, the main difference to the application from Section 3 is that we do not apply softmax after aggregation anymore when we apply normalization.
First, a note regarding sparseness. On average, across all datasets for ROC AUC, \(\sim\)13.2 base models exist, see Appendix F for each dataset's number. For comparison, we computed the average number of non-zero weighted base models for the ensemble methods, see Appendix H. This shows that CMA-ES without normalization has an average ensemble size, that is, the number of non-zero weighted base models, of \(\sim\)12.9. In contrast, CMA-ES-ExplicitGES has an average ensemble size of \(\sim\)6.3, CMA-ES-ImplicitGES of \(\sim\)5.4. For context, GES has an average ensemble size of \(\sim\)5.8. Hence, we conclude that CMA-ES produces dense weight vectors, while our normalization approaches are able to produce sparse vectors like GES.
Next, we repeat the statistical test performed in Section 3.1 for all normalization methods, CMA-ES, and the SingleBest, see Figure 4 in the Appendix H. We observe that all normalization methods outperform CMA-ES and that CMA-ES-ExplicitGES ranks highest. Furthermore, the different normalization methods are not statistically significantly different from each other. Only CMA-ES-ExplicitGES is significantly different from CMA-ES for multi-class.
## 5 Overall Experiments
In our final evaluation, we mirror the experiments from Section 3.1 and compare the SingleBest, GES, CMA-ES, and CMA-ES with normalization (CMA-ES-ExplicitGES). We additionally include stacking in our comparison because it is part of Auto-Sklearn's insight and used by H2O AutoML. For our implementation of stacking (Wolpert, 1992), we use a default Logistic Regression classifier from scikit-learn (Pedregosa et al., 2011) as a stacking model. We adjusted the code such that we terminate after \(m*50\) evaluations to make the method comparable to GES and CMA-ES. For CMA-ES we stick to the implementation and default hyperparameters as described in Section 3.
Besides the statistical tests, we also inspect the difference in the distributions of relative performance. Therefore, we follow the AutoML benchmark (Gijsbers et al., 2022) and use _normalized improvement_ to make the scores of methods comparable across different datasets. We scale the scores for a dataset such that \(-1\) is equal to the score of a baseline, here the SingleBest, and \(0\) is equal to the score of the best method on the dataset. We employ a variant of normalized improvement as we ran into an edge case where the normalized improvement is undefined if the difference between the single best model and the best method is \(0\). In our variant, for this edge case, we set everything as good as the SingleBest to \(-1\) and penalize all methods worse than the baseline with \(-10\); following a penalization approach like PAR10 from Algorithm Selection (Lindauer et al., 2019). We provide a formalized definition of normalized improvement in Appendix I.2.
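A sketch of this variant of normalized improvement (assuming higher scores are better; names are illustrative):

```python
def normalized_improvement(score, single_best_score, best_score):
    """Scale a score so that -1 equals the SingleBest baseline and 0 equals the best method."""
    if best_score == single_best_score:           # edge case: the baseline matches the best method
        return -1.0 if score >= single_best_score else -10.0
    return (score - best_score) / (best_score - single_best_score)
```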
## 6 Overall Results
Figure 2 shows the results of the statistical tests and mean rankings for the compared methods. The distribution of the relative performance is shown in Figure 3. Additionally, the performance per dataset is provided in Appendix J.
**Overall Predictive Performance**. All post hoc ensembling methods always outperform the SingleBest on average, although not always statistically significant - see Figure 2. Yet, post hoc ensembling can overfit and become worse for specific datasets, as indicated by the black dots left of the red bar and the number of outliers in square brackets in Figure 3.
_For balanced accuracy_, we observe that CMA-ES significantly beats all methods. Likewise, we observe that stacking and CMA-ES-ExplicitGES outperform GES by a small non-significant margin.
_For ROC AUC,_ we see that GES and CMA-ES-ExplicitGES outperform all other methods and differ only by a small non-significant margin. Both are also significantly different from the SingleBest; unlike stacking. Moreover, Figure 3 shows us that CMA-ES-ExplicitGES has similar or better relative performance distributions than GES (see the medians and whiskers).
**Normalization to Combat Overfitting**. See Table 2 to inspect overfitting for CMA-ES-ExplicitGES. See Appendix G.2 for an overview of the rank change for all compared methods. In general, CMA-ES-ExplicitGES's mean rank, compared to GES and the SingleBest, changes only minimally between validation and test data. Showing us that it overfits less than CMA-ES (compare to Table 1, Section 3.2). As before, the SingleBest is always the worst-ranked method. GES is worse than CMA-ES-ExplicitGES on test data for all but ROC AUC Binary. On validation data, however, GES is better than CMA-ES-ExplicitGES in all cases except for ROC AUC multi-class, where it is tied. Now, GES is _more affected by overfitting_ than CMA-ES with normalization.
**No Free Lunch**. CMA-ES-ExplicitGES for balanced accuracy ranks worse than CMA-ES but better than GES. In contrast, CMA-ES-ExplicitGES ranks better than CMA-ES for ROC AUC. A decrease in performance for balanced accuracy was to be expected as the normalization method constrained the solutions of CMA-ES to be sparse and pseudo-discrete to combat overfitting, but CMA-ES did not overfit for balanced accuracy. Moreover, it indicates that satisfying these properties of GES for balanced accuracy is suboptimal. Hence, our results also indicate the need to select the best method per task and metric instead of always using the same method; in line with the _no free lunch theorem_. Likewise, the drastic differences in performance of the methods between metrics suggest that the optimization landscapes, and the impact of overfitting on them, differ drastically.
## 7 Conclusion
Greedy ensemble selection (GES) (Caruana et al., 2004) is often used for post hoc ensembling in AutoML; likely as a result of Auto-Sklearn 1's (Feurer et al., 2015) reported insight that GES is superior to potential alternatives, like gradient-free numerical optimization, for post hoc ensembling.
Figure 2: **CD Plots for all Methods: Methods connected by a bar are not significantly different.**
In this paper, we have shown that Auto-Sklearn's insight w.r.t. overfitting depends on the metric when tested for an AutoML system with higher-quality validation data than Auto-Sklearn, e.g., AutoGluon (Erickson et al., 2020). Indeed, for the metric ROC AUC, GES does not overfit meaningfully, while gradient-free numerical optimization, e.g., CMA-ES (Hansen and Auger, 2014; Hansen, 2016), overfits drastically. However, for balanced accuracy, CMA-ES does not overfit and outperforms GES.
As a direct consequence, we were motivated to find a method that combats the overfitting of CMA-ES for ROC AUC. Therefore, we proposed a novel normalization method, inspired by GES, which successfully salvages CMA-ES for ROC AUC by making CMA-ES perform better than or similar to GES.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Metric & Task Type & Mean \(Rank_{Validation}\) & Mean \(Rank_{Test}\) & Absolute Rank (Val \(\rightarrow\) Test) \\ \hline Balanced Accuracy & Binary & 1.74 & 1.78 & 2.0 \(\rightarrow\) 1.0 \\ Balanced Accuracy & Multi-class & 1.73 & 1.78 & 2.0 \(\rightarrow\) 1.0 \\ ROC AUC & Binary & 1.63 & 1.70 & 2.0 \(\rightarrow\) 2.0 \\ ROC AUC & Multi-class & 1.50 & 1.57 & 1.5 \(\rightarrow\) 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean rank change for CMA-ES-ExplicitGES compared to GES and SingleBest. In the case of a tie for the absolute rank, we assign all tied values the average of their tie-broken ranks.
Figure 3: **Normalized Improvement Boxplots**: Higher normalized improvement is better. Each black point represents the improvement for one dataset. A value smaller than \(-1\) is worse than the single best model (red vertical line), while \(0\) is the best observed value. The number in square brackets counts the outliers of a method left of the plot's boundary.
**Acknowledgements**. The CPU nodes of the OMNI cluster of the University of Siegen (North Rhine-Westphalia, Germany) were used for all experiments presented in this work.
|
2310.15522 | Optimization of quantum noise in space gravitational-wave antenna DECIGO
with optical-spring quantum locking considering mixture of vacuum
fluctuations in homodyne detection | Quantum locking using optical spring and homodyne detection has been devised
to reduce quantum noise that limits the sensitivity of DECIGO, a space-based
gravitational wave antenna in the frequency band around 0.1 Hz for detection of
primordial gravitational waves. The reduction in the upper limit of energy
density ${\Omega}_{\mathrm{GW}}$ from $2{\times}10^{-15}$ to
$1{\times}10^{-16}$, as inferred from recent observations, necessitates
improved sensitivity in DECIGO to meet its primary science goals. To accurately
evaluate the effectiveness of this method, this paper considers a detection
mechanism that takes into account the influence of vacuum fluctuations on
homodyne detection. In addition, an advanced signal processing method is
devised to efficiently utilize signals from each photodetector, and design
parameters for this configuration are optimized for the quantum noise. Our
results show that this method is effective in reducing quantum noise, despite
the detrimental impact of vacuum fluctuations on its sensitivity. | Kenji Tsuji, Tomohiro Ishikawa, Kentaro Komori, Koji Nagano, Yutaro Enomoto, Yuta Michimura, Kurumi Umemura, Ryuma Shimizu, Bin Wu, Shoki Iwaguchi, Yuki Kawasaki, Akira Furusawa, Seiji Kawamura | 2023-10-24T05:04:14Z | http://arxiv.org/abs/2310.15522v1 | Optimization of quantum noise in space gravitational-wave antenna DECIGO with optical-spring quantum locking considering mixture of vacuum fluctuations in homodyne detection
###### Abstract
Quantum locking using optical spring and homodyne detection has been devised to reduce quantum noise that limits the sensitivity of DECIGO, a space-based gravitational wave antenna in the frequency band around 0.1 Hz for detection of primordial gravitational waves. The reduction in the upper limit of energy density \(\Omega_{\rm GW}\) from 2\(\times\)10\({}^{-15}\) to 1\(\times\)10\({}^{-16}\), as inferred from recent observations, necessitates improved sensitivity in DECIGO to meet its primary science goals. To accurately evaluate the effectiveness of this method, this paper considers a detection mechanism that takes into account the influence of vacuum fluctuations on homodyne detection. In addition, an advanced signal processing method is devised to efficiently utilize signals from each photodetector, and design parameters for this configuration are optimized for the quantum noise. Our results show that this method is effective in reducing quantum noise, despite the detrimental impact of vacuum fluctuations on its sensitivity.
\({}^{A}\) _Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602, Japan_
\({}^{B}\) _Research Center for the Early Universe (RESCEU), School of Science, University of Tokyo, Tokyo 113-0033, Japan_
\({}^{C}\) _Department of Physics, University of Tokyo, Bunkyo, Tokyo 113-0033, Japan_
\({}^{D}\) _LQUOM, Inc., Tokiwadai, Hodogaya, Yokohama city, Kanagawa, 240-8501, Japan_
\({}^{E}\) _Department of Applied Physics, School of Engineering, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan_
\({}^{F}\) _LIGO Laboratory, California Institute of Technology, Pasadena, California 91125, USA_
\({}^{G}\) _Center for Quantum Computing, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan_
\({}^{H}\) _The Kobayashi-Maskawa Institute for the Origin of Particles and the Universe, Nagoya University, Nagoya, Aichi 464-8602, Japan_
## I. Introduction
Since the first detection of gravitational waves in 2015 [1], the ongoing observation of gravitational waves resulting from the mergers of binary black holes and binary neutron stars has marked a momentous juncture in the domain of gravitational wave astronomy. This continuous endeavor bestows upon us novel perspectives into astronomical phenomena, catalyzing a revolutionary transformation within the field of astronomy [2]. Moreover, it is envisaged that the growing significance of gravitational wave detections will play an increasingly pivotal role in the realm of astronomy with contributions from ground-based detectors such as Einstein Telescope [3] and Cosmic Explorer [4], or space-based detectors such as LISA [5].
This is owing to their unique ability to discern enigmatic phenomena that pose formidable challenges for observation through electromagnetic waves.
The focus of our research is primordial gravitational waves. If we can directly detect these gravitational waves, which originate from quantum fluctuations in spacetime during cosmic inflation, we can gain a more detailed understanding of the early stages of the universe, including confirming the occurrence of cosmic inflation [6]. Detectors exhibiting high sensitivity in the frequency band around 0.1 Hz, free of seismic noise from the Earth and thermal noise from mirror suspensions, are required. However, conventional ground-based detectors such as LIGO [7], VIRGO [8], and KAGRA [9] cannot remove these noise sources.
Our project for these detections is called the DECi-hertz Interferometer Gravitational-wave Observatory (DECIGO) and has been promoted as a space-based gravitational wave antenna in Japan [10, 11]. DECIGO stands out due to its distinctive utilization of three drag-free satellites deployed in space and a Fabry-Perot interferometer configuration, spanning a length of 1000 km, to mitigate the adverse effects of Earth's seismic noise and thermal noise from mirror suspensions. This configuration therefore allows us to target a lower frequency band (in this case the 0.1 Hz band) than the frequency band where ground-based detectors have high sensitivity, typically around 100 Hz. Meanwhile, recent analysis of observations by the Planck satellite and others [12] has shown that the original design of DECIGO was not sensitive enough to detect primordial gravitational waves. Hence, developing techniques to improve the sensitivity of DECIGO is required. Previous studies have proposed optimizing the design parameters of DECIGO [13, 14, 15], as well as a technique known as quantum locking [16, 17], to enhance sensitivity by mitigating quantum noise. Quantum noise predominantly affects the low-frequency band and arises from the quantum fluctuations of laser light. Notably, quantum locking involves incorporating sub-cavities on both sides that share a mirror with the 1,000 km primary cavity, resulting in a reduction of radiation pressure noise. Given the high optical loss and lack of squeezing feasibility in the main cavity of DECIGO, this method proves to be exceptionally effective [18], and its utility has been progressively confirmed through in-principle verification experiments [19]. Additionally, the configuration termed optical-spring quantum locking, which employs optical springs [20] and homodyne detection in sub-cavities for quantum locking, has theoretically exhibited a considerable enhancement in sensitivity [21].
However, prior investigations have not employed homodyne detection based on an experimental setup that faithfully reflects the actual optical effects, rather assumed ideal homodyne detection. This limitation arises due to the treatment of parameter \(\eta\), responsible for determining the direction of homodyne detection, which has not been ascertained through interference light but rather regarded as an independent variable subject to arbitrary choice during simulations. In order to faithfully model the system based on the actual setup, it becomes imperative to account for the influence of the mixture of vacuum fluctuations. Consequently, the determination of the homodyne detection direction necessitates the extraction of light from the conventional optical path to produce interference, wherein the use of beam splitters, acting as points of interference, is indispensable. Hence, the primary objective of this paper is the accurate evaluation of the homodyne detection approach, taking into consideration the mixture of vacuum fluctuations. Furthermore, in this configuration, an additional photodetector can be employed, and we propose a method to utilize the signal from this supplementary photodetector. In this paper, we apply this method while optimizing each design parameter in a manner akin to previous research [21].
In Section II, a more elaborate elucidation of the optical design is presented, encompassing the mitigation of quantum noise through optical-spring quantum locking, along with the configuration of homodyne detection incorporating these constituents. The intricate approach to signal optimization, achieved through the combination of multiple signals, is expounded upon in Section III. Furthermore, this section offers a comprehensive block diagram illustrating the acquisition of these signals. Sections IV through V showcase simulation results, demonstrating the parameters that yield the utmost sensitivity for DECIGO under this configuration, as well as the corresponding obtained signals. Additionally, we delve into the sensitivity difference of DECIGO in comparison with prior research.
## II Optical Design
### Optical-Spring Quantum Locking
In this subsection, we describe optical-spring quantum locking used to reduce quantum noise. Initially, we elucidate the methodology employed to address quantum fluctuations, utilizing a mathematical framework known as quadrature-phase amplitude [22]. This formalism incorporates the application of creation and annihilation operators, denoted as \(a_{j}\) and \(a_{j}^{\dagger}\), respectively, which satisfy the commutation relation (Eq. 1). These operators play a pivotal role in the analysis of the optical mode within each cavity.
\[[a_{j},a_{j^{\prime}}^{\dagger}]=2\pi\delta_{jj^{\prime}} \tag{1}\]
Here, \(j\) is an identifier of the three cavities, indicating Main(=m), Sub1(=s1) or Sub2(=s2). Then, using \(a_{j}\) and \(a_{j}^{\dagger}\) operators, the amplitude quantum fluctuation \(q_{j}\) and phase quantum fluctuation \(p_{j}\) are defined by
\[q_{j}=\frac{1}{\sqrt{2}}(a_{j}+a_{j}^{\dagger})\ \,\ \ p_{j}=\frac{1}{i\sqrt{2}}(a_{ j}-a_{j}^{\dagger}). \tag{2}\]
We now elaborate on the approach of reducing quantum noise through quantum locking. Figure **1** illustrates the standard configuration of quantum locking designed for DECIGO. In this configuration, supplementary sub-cavities, equipped with shorter cavity lengths and shared mirrors, are appended to both sides of the main optical cavity, which spans a distance of 1,000 km between DECIGO satellites. Notably, the laser sources employed for the sub-cavities differ from the one utilized for the main cavity. Based on this setup, crucial information concerning the amplitude quantum fluctuations of the main laser light can be extracted from the signals transmitted via the sub-cavity laser light. By regulating the shared mirror using these signals, it becomes possible to effectively nullify the radiation pressure noise stemming from the main cavity, while simultaneously preserving the underlying gravitational wave signal.
Moreover, we utilize the optical spring for the sub-cavities. The optical spring is a contrived technology that accomplishes heightened sensitivity by creating the interaction between mechanical and optical effects. A mirror positioned slightly away from the resonance point experiences a constant external force that counterbalances the radiation force within the cavity. Consequently, when the radiation force changes due to quantum fluctuations, the mirror behaves akin to one mounted on a spring due to its relationship with the radiation force.
### Homodyne Detection
In addition to employing optical-spring quantum locking, we adopt homodyne detection to mitigate the influence of quantum noise. The concept of homodyne detection is elucidated in Figure **2**. In this configuration, beam splitter 1 (=BS1) is strategically positioned to extract light as a local oscillator from the incident light directed towards the cavity. Meanwhile, beam splitter 2 (=BS2) brings the local oscillator light to interference with the light that either reflects or traverses the cavity. This arrangement allows for detection along a direction different from that of the normal carrier light, playing a crucial role in quantum noise reduction. The interfered light is subsequently captured by two photodetectors, as depicted in the figure. The variables \(\eta^{(A)}\) and \(\eta^{(B)}\), which determine the detection axis angle, are derived through the ensuing steps. To commence, let us delineate the electric field \(E_{1}\) of the light emanating from the laser source as follows:
\[E_{1}=E_{0}e^{i\omega_{0}t}, \tag{3}\]
where \(E_{0}\) is a constant representing the amplitude of the electric field, and \(\omega_{0}\) is the angular frequency of the laser light. At the same time, the local oscillator \(E_{2}\) is obtained using the amplitude reflectance \(r_{1}^{(h)}\) of BS1, given by
\[E_{2}=r_{1}^{(h)}e^{i\xi}E_{0}e^{i\omega_{0}t}=r_{1}^{(h)}e^{i\xi}E_{1}. \tag{4}\]
Here, \(\xi\) is a parameter defined as the relative phase shift associated with the change in the optical path length. The light directed towards the cavity combines the light entering the cavity with the light reflected by the input mirror, and the carrier
Figure 1: Standard configuration of quantum locking in DECIGO. The path that the red laser light passes through represents the main cavity, while the paths that the green laser lights pass through correspond to the sub-cavities. The two mirrors of the main cavity are shared with the sub-cavities, and the photodetector signals from the sub-cavities contain information regarding the noise induced by the main laser light.
light \(E_{3}\) is expressed as follows:
\[E_{3}=\Big{[}-r_{1}^{(sc)}+\frac{{t_{1}^{(sc)}}^{2}r_{2}^{(sc)}e^{i\phi^{\prime}} }{1-r_{1}^{(sc)}r_{2}^{(sc)}e^{i\phi^{\prime}}}\Big{]}t_{1}^{(h)}E_{1}. \tag{5}\]
Here, \(t_{1}^{(h)}\) represents the amplitude transmittance of BS1, while \(r_{1}^{(sc)},t_{1}^{(sc)}\) and \(r_{2}^{(sc)},t_{2}^{(sc)}\) correspond to the amplitude reflectance or the amplitude transmittance of the mirrors used in the sub-cavities. For the subsequent calculations, we set \(r_{2}^{(sc)}\) to 1, and \(t_{2}^{(sc)}\) to 0. Besides, \(\phi^{\prime}\) is defined as the change in laser phase when the laser light completes one round trip through the cavities. Consequently, the field falling on the photodiodes \(E_{\rm PD}^{(A)},E_{\rm PD}^{(B)}\) by lights passing through each optical path is given as follows:
\[E_{PD}^{(A)} =t_{2}^{(h)}E_{3}+r_{2}^{(h)}E_{2}\] \[=\Big{[}t_{1}^{(h)}t_{2}^{(h)}\Big{\{}-r_{1}^{(sc)}+\frac{{t_{1}^ {(sc)}}^{2}r_{2}^{(sc)}}{(1-r_{1}^{(sc)}r_{2}^{(sc)})^{2}+4r_{1}^{(sc)}r_{2}^{ (sc)}\sin^{2}\frac{\phi^{\prime}}{2}}\Big{(}e^{i\phi^{\prime}}-r_{1}^{(sc)} \Big{)}\Big{\}}+r_{1}^{(h)}r_{2}^{(h)}e^{i\xi}\Big{]}E_{1}\] \[=\Bigg{[}\Big{\{}t_{1}^{(h)}t_{2}^{(h)}\Big{(}-r_{1}^{(sc)}+\frac {{t_{1}^{(sc)}}^{2}r_{2}^{(sc)}(\cos\phi^{\prime}-r_{1}^{(sc)})}{(1-r_{1}^{(sc )}r_{2}^{(sc)})^{2}+4r_{1}^{(sc)}r_{2}^{(sc)}\sin^{2}\frac{\phi^{\prime}}{2}} \Big{)}+r_{1}^{(h)}r_{2}^{(h)}\cos\xi\Big{\}} \tag{6}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+i\Big{\{} t_{1}^{(h)}t_{2}^{(h)}\frac{{t_{1}^{(sc)}}^{2}r_{2}^{(sc)}\sin\phi^{\prime}}{(1-r_{1}^{(sc)}r_{2}^{(sc)})^{2}+4r_{1}^{ (sc)}r_{2}^{(sc)}\sin^{2}\frac{\phi^{\prime}}{2}}+r_{1}^{(h)}r_{2}^{(h)}\sin \xi\Big{\}}\Bigg{]}E_{1}\] \[\equiv(A_{1}+iA_{2})E_{1}\] \[E_{PD}^{(B)} =-r_{2}^{(h)}E_{3}+t_{2}^{(h)}E_{2}\] \[=\Big{[}-t_{1}^{(h)}r_{2}^{(h)}\Big{\{}-r_{1}^{(sc)}+\frac{{t_{1}^ {(sc)}}^{2}r_{2}^{(sc)}}{(1-r_{1}^{(sc)}r_{2}^{(sc)})^{2}+4r_{1}^{(sc)}r_{2}^{ (sc)}\sin^{2}\frac{\phi^{\prime}}{2}}(e^{i\phi^{\prime}}-r_{1}^{(sc)})\Big{\}} +r_{1}^{(h)}t_{2}^{(h)}e^{i\xi}\Big{]}E_{1}\] \[=\Bigg{[}\Big{\{}-t_{1}^{(h)}r_{2}^{(h)}\Big{(}-r_{1}^{(sc)}+\frac {{t_{1}^{(sc)}}^{2}r_{2}^{(sc)}(\cos\phi^{\prime}-r_{1}^{(sc)})}{(1-r_{1}^{(sc )}r_{2}^{(sc)})^{2}+4r_{1}^{(sc)}r_{2}^{(sc)}\sin^{2}\frac{\phi^{\prime}}{2}} \Big{)}+r_{1}^{(h)}t_{2}^{(h)}\cos\xi\Big{\}}\] (7) \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad+i\Big{\{}-t_{1}^{(h)}r_{2}^{(h)} \frac{{t_{1}^{(sc)}}^{2}r_{2}^{(sc)}\sin\phi^{\prime}}{(1-r_{1}^{(sc)}r_{2}^{( sc)})^{2}+4r_{1}^{(sc)}r_{2}^{(sc)}\sin^{2}\frac{\phi^{\prime}}{2}}+r_{1}^{(h)}t_{2}^{(h)} \sin\xi\Big{\}}\Bigg{]}E_{1}\] \[\equiv(B_{1}+iB_{2})E_{1}.\]
Figure 2: Concept of homodyne detection. The lower path represents the optical path for the local oscillator, which undergoes a phase shift by \(\xi\). \(E_{i}\) is the classical electric field of the laser light. The regions enclosed by dotted lines are quantum fluctuations of the laser light or vacuum fluctuations. We use superscript \((l)\) to denote quantum fluctuations of the laser light and superscript \((v)\) to denote vacuum fluctuations, distinguishing their respective noise sources.
Hence, through the consideration of the electric field \(E_{1}\) pertaining to the laser emission at its immediate inception as the point of reference, each instance of interference light is detected along an axis rotated at an angle ascertained by the subsequent equation:
\[\eta^{(A)}=\arctan\frac{A_{1}}{A_{2}},\ \ \ \ \eta^{(B)}=\arctan\frac{B_{1}}{B_{2}}. \tag{8}\]
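As a numerical sketch of Eqs. (4)-(8), using placeholder mirror and beam-splitter parameters rather than DECIGO design values:

```python
import numpy as np

# Placeholder parameters (not DECIGO design values).
r1_sc = np.sqrt(0.99)                      # sub-cavity input mirror amplitude reflectance
t1_sc = np.sqrt(1.0 - r1_sc**2)            # Eq. 9 with no optical loss
r2_sc = 1.0                                # perfectly reflective end mirror
r1_h, t1_h = np.sqrt(0.1), np.sqrt(0.9)    # BS1 (local-oscillator pick-off)
r2_h, t2_h = np.sqrt(0.5), np.sqrt(0.5)    # BS2 (recombination)
phi_p = 0.01                               # round-trip phase phi'
xi = np.pi / 2                             # relative phase of the local oscillator

# Carrier fields relative to E1 (Eqs. 4 and 5).
E2 = r1_h * np.exp(1j * xi)
E3 = (-r1_sc + t1_sc**2 * r2_sc * np.exp(1j * phi_p)
      / (1.0 - r1_sc * r2_sc * np.exp(1j * phi_p))) * t1_h

# Fields on the photodetectors (Eqs. 6 and 7) and homodyne angles (Eq. 8).
E_pd_A = t2_h * E3 + r2_h * E2             # A1 + i*A2
E_pd_B = -r2_h * E3 + t2_h * E2            # B1 + i*B2
eta_A = np.arctan(E_pd_A.real / E_pd_A.imag)
eta_B = np.arctan(E_pd_B.real / E_pd_B.imag)
```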
Next, we consider the quantum state at this juncture. If the quantum state of the laser light manifests as a coherent state, then the light extracted as a local oscillator, influenced solely by the amplitude reflectance \(r_{1}^{(h)}\) of BS1, also assumes a coherent state. On the contrary, the light emanating from the cavity to the BS2 exhibits a squeezed state. This arises due to the amplitude fluctuations of the laser light, which result in mirror displacements and changes in the optical path length, thereby causing amplitude fluctuations to manifest as phase fluctuations. This phenomenon is referred to as ponderomotive squeezing [23]. Consequently, the interference light resulting from the combination of these two beams is squeezed in its quantum state. Thus, quantum noise can be ameliorated by judiciously selecting an appropriate angle for the detection axis, as defined in Eq. 8. The optimal angle is determined such that the projective component of the quantum fluctuation along this axis nullifies each other.
Note that this configuration admits vacuum fluctuations at BS1. If these vacuum fluctuations were omitted, the quantum fluctuation of the light transmitted or reflected by BS1 would be artificially reduced. We therefore inject vacuum fluctuations at the position symmetric with respect to the beam splitter, as illustrated in the figure, which compensates for this reduction in the quantum fluctuation of the laser light.
## III Signal Processing
### Block Diagram
We utilize a block diagram to portray the interferometer configuration shown in Fig. 1, which integrates optical-spring quantum locking and accounts for the amalgamation of vacuum fluctuations in homodyne detection, as depicted in Fig. 3. This schematic comprises three distinct sections: the upper and lower segments represent the sub-cavities, while the central segment denotes the main cavity. The purple and yellow regions on the left side correspond to quantum fluctuations of the laser light and the mixed vacuum fluctuations in homodyne detection, respectively. Within these regions, the upper port signifies amplitude quantum fluctuations, whereas the lower port pertains to phase fluctuations. On the right side, the cyan region designates the detection port.
The amplitude transmittance and amplitude reflectance of a mirror are defined using the identifier \((k)\), such as (sc) representing the sub-cavity, and the identifier \(n\) representing 1 or 2, as follows
\[\left[r_{n}^{(k)}\right]^{2}+\left[t_{n}^{(k)}\right]^{2}=1. \tag{9}\]
Note that this equation assumes no loss effects for all mirrors. In addition, all mirrors used in each cavity have the same mass. In the diagram, the mirrors of the main cavity, which are shared with the sub-cavities, are depicted as blocks located in the upper or lower parts of the sub-cavities. In addition, \(G_{ij}\) denotes the matrix of optical-spring effects, which is determined by the detuning angle \(\phi\) and sideband frequency \(\Omega\), and can be expressed as:
\[g(\Omega) =\left[1-r_{1}r_{2}e^{-i(\phi+\frac{2L}{c}\Omega)}\right]^{-1} \tag{10}\] \[G(\Omega) =\frac{1}{2}\begin{bmatrix}g^{*}(\Omega)+g(-\Omega)&i[g^{*}( \Omega)-g(-\Omega)]\\ -i[g^{*}(\Omega)-g(-\Omega)]&g^{*}(\Omega)+g(-\Omega)\end{bmatrix}, \tag{11}\]
where \(c\) is the speed of light taken as 3\(\times\)10\({}^{8}\) m/s, and \(L\) is the cavity length. Next, \(\eta_{A/B}^{\prime}\) represents the angle between the phase direction of transmitted or reflected light by BS2 in \(E_{3}\) and the axis determined by Eq. 8, and it is defined as follows
\[\eta_{A/B}^{\prime}=\frac{\pi}{2}+\eta^{(A/B)}-\eta_{0}^{(A/B)}, \tag{12}\]
where \(\eta_{0}^{(A/B)}\) denotes the angular orientation of each carrier light with respect to the phase direction of \(E_{1}\). The influence of gravitational waves is introduced into the system within the region enclosed by the red line positioned at the diagram's center. This region corresponds to the displacement of the shared mirror induced by gravitational waves. It is worth noting that the impact of gravitational waves on the sub-cavities is not taken into consideration, given their relatively short cavity lengths, resulting in negligible alterations in the optical path length caused by gravitational waves. Hence, gravitational waves are detected in the phase direction by the photodetector situated at the central portion of the diagram, which captures the main laser light.

Figure 3: Block Diagram of the detection system incorporating optical-spring quantum locking and accounting for the effects of vacuum fluctuations. The upper and lower sections are the sub-cavities, and the central section is the main cavity. Purple areas represent quantum fluctuations of the laser light, yellow areas represent vacuum fluctuations mixed in during homodyne detection, and cyan areas represent detection ports.
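For reference, a small Python sketch of the optical-spring response defined in Eqs. 10 and 11, evaluated at an example sideband frequency; the cavity parameters used here are illustrative, not the optimized DECIGO values.

```python
import numpy as np

c = 3.0e8  # speed of light [m/s], as used in the text

def g_factor(omega, r1, r2, phi, L):
    """Cavity response g(Omega) of Eq. (10)."""
    return 1.0 / (1.0 - r1 * r2 * np.exp(-1j * (phi + 2.0 * L / c * omega)))

def G_matrix(omega, r1, r2, phi, L):
    """2x2 quadrature matrix G(Omega) of Eq. (11)."""
    gp = np.conj(g_factor(omega, r1, r2, phi, L))  # g*(Omega)
    gm = g_factor(-omega, r1, r2, phi, L)          # g(-Omega)
    return 0.5 * np.array([[gp + gm, 1j * (gp - gm)],
                           [-1j * (gp - gm), gp + gm]])

# Example: response of the 1000 km main cavity at a 0.5 Hz sideband (illustrative r1, r2, phi).
Omega = 2.0 * np.pi * 0.5
print(G_matrix(Omega, r1=0.9, r2=0.9, phi=0.05, L=1.0e6))
```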
Likewise, by utilizing this block diagram, we can ascertain the amplification or attenuation of individual quantum fluctuations and their subsequent detection as noise. Table 1 showcases the transfer functions of this system, elucidating the correlation between mirror displacement caused by gravitational waves or each quantum fluctuation and the signals obtained by the five photodetectors. Within this table, the gravitational wave signal is denoted by \(x\), while \(p\) and \(q\), as defined in the preceding subsection, represent the bases of the quantum fluctuations. By employing these bases and transfer functions, \(V_{\rm{main}}\), the signal acquired from the photodetector for the main laser light, can be expressed as follows:
\[V_{\rm{main}}=a\mathbf{x}+Aq_{m}^{(l)}+Bp_{m}^{(l)}+Cq_{s1}^{(l)}+Dp_{s1}^{(l)}+Cq_ {s2}^{(l)}+Dp_{s2}^{(l)}+Eq_{1}^{(v)}+Fp_{1}^{(v)}+Eq_{2}^{(v)}+Fp_{2}^{(v)}. \tag{13}\]
In the same way, \(V_{\rm{sub1}}^{(A)}\), \(V_{\rm{sub1}}^{(B)}\), \(V_{\rm{sub2}}^{(A)}\), \(V_{\rm{sub2}}^{(B)}\) which are the signals obtained from the other four photodetectors, are defined by the combinations shown in the table. Here, note that these signals contain some common noise components, as the sub-cavities on both sides are assumed to have the same configuration.
### Completing the Square
Completing the square in quantum locking is a signal optimization method that has shown high effectiveness in previous research for reducing quantum noise [18]. In this approach, a new signal \(V=V_{1}+\chi V_{2}\) is defined from the two signals \(V_{1}\) and \(V_{2}\), with the combination coefficient \(\chi\) chosen to minimize the power spectrum. To effectively utilize the signals from homodyne detection at two ports, we introduce additional combination coefficients and devise a method to optimize the combination of three signals. Thus, by using \(V_{1}\),\(V_{2}^{(A)}\) and \(V_{2}^{(B)}\) along with the combination coefficients \(\chi_{A}\) and \(\chi_{B}\), a new signal \(V\) is defined as follows:
\[V=V_{1}+\chi_{A}V_{2}^{(A)}+\chi_{B}V_{2}^{(B)}. \tag{14}\]
The power spectrum \(S_{V}\) of this signal is given by:
\[S_{V} =\left|V_{1}\right|^{2}+\left|V_{2}^{(A)}\right|^{2}\Bigg{|}\chi_ {A}+\frac{V_{2}^{(A)\dagger}V_{1}+\chi_{B}V_{2}^{(A)\dagger}V_{2}^{(B)}}{V_{ 2}^{(A)\dagger}V_{2}^{(A)}}\Bigg{|}^{2}\] \[\quad+\Bigg{(}\left|V_{2}^{(B)}\right|^{2}-\frac{\left|V_{2}^{(A )}V_{2}^{(B)}\right|^{2}}{\left|V_{2}^{(A)}\right|^{2}}\Bigg{)}\Bigg{|}\chi_{ B}+\frac{\left|V_{2}^{(A)}\right|^{2}\left(V_{2}^{(B)\dagger}V_{1}\right)- \left(V_{2}^{(A)}V_{2}^{(B)\dagger}\right)\left(V_{1}V_{2}^{(A)\dagger}\right) }{\left|V_{2}^{(A)}\right|^{2}\left|V_{2}^{(B)}\right|^{2}-\left|V_{2}^{(A) \dagger}V_{2}^{(B)}\right|^{2}}\Bigg{|}^{2}\] \[\quad-\frac{\left|V_{1}^{\dagger}V_{2}^{(A)}\right|^{2}\left|V_{2 }^{(B)}\right|^{2}+\left|V_{1}^{\dagger}V_{2}^{(B)}\right|^{2}\left|V_{2}^{(A) }\right|^{2}-\left(V_{2}^{(B)\dagger}V_{1}\right)\left(V_{2}^{(A)\dagger}V_{2} ^{(B)}\right)\left(V_{1}^{\dagger}V_{2}^{(A)}\right)-\left(V_{2}^{(B)}V_{1}^ {\dagger}\right)\left(V_{2}^{(A)}V_{2}^{(B)\dagger}\right)\left(V_{1}V_{2}^{( A)\dagger}\right)}{\left|V_{2}^{(A)}\right|^{2}\left|V_{2}^{(B)}\right|^{2}- \left|V_{2}^{(A)\dagger}V_{2}^{(B)}\right|^{2}}. \tag{15}\]
Therefore, the values of \(\chi_{A}^{(\rm{opt})}\) and \(\chi_{B}^{(\rm{opt})}\) that minimize its power spectrum are determined as follows:
\[\chi_{A}^{(\rm{opt})} =-\frac{\left|V_{2}^{(B)}\right|^{2}\left(V_{2}^{(A)\dagger}V_{1} \right)-\left(V_{2}^{(B)}V_{2}^{(A)\dagger}\right)\left(V_{1}V_{2}^{(B) \dagger}\right)}{\left|V_{2}^{(A)}\right|^{2}\left|V_{2}^{(B)}\right|^{2}- \left|V_{2}^{(A)\dagger}V_{2}^{(B)}\right|^{2}} \tag{16}\] \[\chi_{B}^{(\rm{opt})} =-\frac{\left|V_{2}^{(A)}\right|^{2}\left(V_{2}^{(B)\dagger}V_{1} \right)-\left(V_{2}^{(A)}V_{2}^{(B)\dagger}\right)\left(V_{1}V_{2}^{(A) \dagger}\right)}{\left|V_{2}^{(A)}\right|^{2}\left|V_{2}^{(B)}\right|^{2}- \left|V_{2}^{(A)\dagger}V_{2}^{(B)}\right|^{2}}. \tag{17}\]
\begin{table}
\begin{tabular}{c c|c c c c c c c c c c} \hline \hline To & From & \(\mathbf{x}\) (gw) & \(q_{m}^{(l)}\) & \(p_{m}^{(l)}\) & \(q_{s1}^{(l)}\) & \(p_{s1}^{(l)}\) & \(q_{s2}^{(l)}\) & \(p_{s2}^{(l)}\) & \(q_{1}^{(v)}\) & \(p_{1}^{(v)}\) & \(q_{2}^{(v)}\) & \(p_{2}^{(v)}\) \\ \hline \hline \multirow{4}{*}{Photo} & Main & \(a\) & \(A\) & \(B\) & \(C\) & \(D\) & \(C\) & \(D\) & \(E\) & \(F\) & \(E\) & \(F\) \\ \cline{2-11} & Sub1-A & \(-\) & \(G\) & \(H\) & \(I\) & \(J\) & \(-\) & \(-\) & \(K\) & \(L\) & \(-\) & \(-\) \\ \cline{2-11} & Sub2-A & \(-\) & \(G\) & \(H\) & \(-\) & \(-\) & \(I\) & \(J\) & \(-\) & \(-\) & \(K\) & \(L\) \\ \cline{2-11} Detector & Sub1-B & \(-\) & \(M\) & \(N\) & \(O\) & \(P\) & \(-\) & \(-\) & \(Q\) & \(R\) & \(-\) & \(-\) \\ \cline{2-11} & Sub2-B & \(-\) & \(M\) & \(N\) & \(-\) & \(-\) & \(O\) & \(P\) & \(-\) & \(-\) & \(Q\) & \(R\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Transfer functions from the effects of gravitational waves or quantum fluctuations to each photodetector.
Finally, we utilize the three signals as follows:
\[V_{1}=V_{\rm main}/a\,\ \ \ \ \ V_{2}^{(A)}=\Big{[}V_{\rm sub1}^{(A)}+V_{\rm sub2}^{ (A)}\Big{]}/a\,\ \ \ \ V_{2}^{(B)}=\Big{[}V_{\rm sub1}^{(B)}+V_{\rm sub2}^{(B)}\Big{]}/a, \tag{18}\]
where \(a\) is a transfer function from the displacement of the shared mirror caused by gravitational waves to the main photodetector, as shown in Table 1. Each signal is divided by \(a\) to calibrate it with the gravitational wave signal.
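A compact sketch of this signal combination, assuming the calibrated signals \(V_{1}\), \(V_{2}^{(A)}\), \(V_{2}^{(B)}\) are given as complex transfer-function vectors over the independent noise inputs of Table 1 (random placeholders below) and that the noise sources are uncorrelated with unit variance; the combination coefficients follow Eqs. 16 and 17.

```python
import numpy as np

rng = np.random.default_rng(0)
n_noise = 10  # the ten quantum-noise inputs of Table 1 (gravitational-wave signal excluded)

# Placeholder complex transfer-function vectors for the calibrated signals of Eq. (18).
V1  = rng.normal(size=n_noise) + 1j * rng.normal(size=n_noise)
V2A = rng.normal(size=n_noise) + 1j * rng.normal(size=n_noise)
V2B = rng.normal(size=n_noise) + 1j * rng.normal(size=n_noise)

def inner(u, v):
    """<u^dagger v>: sum over independent, unit-variance noise inputs."""
    return np.sum(np.conj(u) * v)

d = inner(V2A, V2A) * inner(V2B, V2B) - abs(inner(V2A, V2B))**2
chi_A = -(inner(V2B, V2B) * inner(V2A, V1) - inner(V2A, V2B) * inner(V2B, V1)) / d  # Eq. (16)
chi_B = -(inner(V2A, V2A) * inner(V2B, V1) - inner(V2B, V2A) * inner(V2A, V1)) / d  # Eq. (17)

V = V1 + chi_A * V2A + chi_B * V2B  # Eq. (14)
print("residual noise power:", inner(V, V).real)
```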
## IV Simulations
In this section, we delineate the parameter conditions employed in the simulation, as well as the method utilized for evaluating the sensitivity of DECIGO. Table **2** exhibits the symbols and ranges/values employed in the simulation. The upper segment of this table presents the five variable parameters employed for optimizing sensitivity: \(t_{1}^{(h)}\) and \(t_{2}^{(h)}\) represent the amplitude transmittance of each beam splitter. Detuning angle reflects the effect of the optical spring, and finesse corresponds to the effective number of light reflections within the cavity. Additionally, \(\xi\) is a parameter introduced to modulate the length of the optical path taken by the local oscillator. Since the two photodetectors are symmetrically positioned, the A and B signals can be interchanged by swapping the amplitude transmittance and reflectance values of BS2 and adjusting the parameter \(\xi\) to become \(\xi+\pi\). Note that the range of \(\xi\) is therefore defined as between 0 and \(\pi\). The eight parameters presented in the lower part of the table remain constant, with the exception of the cavity length, which differs between the main cavity and the sub-cavities. Next, we adopt Signal-to-Noise Ratio (SNR) as a measure to evaluate the sensitivity of DECIGO to primordial gravitational waves. The SNR is given by [24]:
\[\text{SNR}=\frac{3H_{0}^{2}}{10\pi^{2}}\sqrt{T}\Big{[}\int_{0.1}^{1}df\frac{2 \gamma(f)^{2}\Omega_{\rm GW}^{2}(f)}{f^{6}P_{1}(f)P_{2}(f)}\Big{]}^{\frac{1}{2 }}, \tag{19}\]
where \(P_{1}\) and \(P_{2}\) represent spectral densities of noise, as computed following the methodology outlined in Section III. Notably, in this case, \(P_{1}\) and \(P_{2}\) assume equal values. The remaining parameters employed in the SNR computation are presented in Table 3.
\begin{table}
\begin{tabular}{l l l} \hline \hline Meaning & Symbol & Range/Value \\ \hline \hline Amplitude Transmittance & \(t_{1}^{(h)}\) & 0 to 1 \\ & \(t_{2}^{(h)}\) & 0 to 1 \\ Detuning Angle & \(\phi_{\rm sub}\) & \(-\pi\) to \(\pi\) rad \\ Finesse & \(\mathcal{F}_{\rm sub}\) & 1 to 100 \\ Phase Shift & \(\xi\) & 0 to \(\pi\) rad \\ \hline Cavity Length & \(L_{\rm sub}\) & 1 m \\ & \(L_{\rm main}\) & 1000 km \\ Laser Power & \(I_{\rm sub}\) & 100 W \\ & \(I_{\rm main}\) & 100 W \\ Laser Wavelength & \(\lambda_{\rm sub}\) & 515 nm \\ & \(\lambda_{\rm main}\) & 515 nm \\ Mirror Mass & \(M_{\rm sub}\) & 100 kg \\ & \(M_{\rm main}\) & 100 kg (shared) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Conditions of main parameters for the optimization of the DECIGOβs sensitivity.
\begin{table}
\begin{tabular}{l l l} \hline Meaning & Symbol & Value \\ \hline Hubble Parameter & \(H_{0}\) & \(70\,\text{km}\,\text{sec}^{-1}\cdot\text{Mpc}^{-1}\) \\ Time for Correlation & T & 3 years \\ Frequency & \(f\) & 0.1 to 1 Hz \\ Correlation Function & \(\gamma\) & 1 \\ Energy Density & \(\Omega_{\rm GW}\) & \(10^{-16}\) \\ Noise Power Spectral Densities & \(P_{1},P_{2}\) & \\ \hline \end{tabular}
\end{table}
Table 3: Parameters used to calculate the SNR [21].
In Eq. 19, \(T\) signifies the observation period, which has been set to a duration of 3 years. \(\Omega_{\rm GW}\) denotes the energy density of primordial gravitational waves, and for the purpose of this research, a fixed value of \(10^{-16}\) has been adopted. Furthermore, \(\gamma\) denotes the overlap reduction function [24], assumed to be unity in the configuration of DECIGO. Additionally, the SNR assessment specifically concentrates on the frequency band spanning from 0.1 Hz to 1 Hz, wherein DECIGO is expected to exhibit high sensitivity to gravitational waves.
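A minimal sketch of the SNR integral of Eq. 19 using the constants of Table 3; the noise spectral densities below are flat placeholders standing in for the \(P_{1}\), \(P_{2}\) computed from the signal-processing chain of Section III.

```python
import numpy as np

H0 = 70.0e3 / 3.086e22              # Hubble parameter: 70 km/s/Mpc expressed in 1/s
T = 3.0 * 365.25 * 24.0 * 3600.0    # 3 years of correlation, in seconds
gamma = 1.0                         # overlap reduction function (unity for DECIGO)
Omega_gw = 1.0e-16                  # energy density of the primordial background

f = np.linspace(0.1, 1.0, 1000)     # frequency band of interest [Hz]
P1 = P2 = np.full_like(f, 1.0e-48)  # placeholder noise PSDs; replace with the computed spectra

integrand = 2.0 * gamma**2 * Omega_gw**2 / (f**6 * P1 * P2)
snr = 3.0 * H0**2 / (10.0 * np.pi**2) * np.sqrt(T) * np.sqrt(np.trapz(integrand, f))
print("SNR =", snr)
```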
## V Result and Discussion
Based on the simulation results, we obtained the optimized sensitivity curves of DECIGO with optical-spring quantum locking to detect primordial gravitational waves, as shown in Fig. 4. Each sensitivity curve, excluding the shot noise caused by phase quantum fluctuations of the main laser light, exhibits a dip around 0.1 Hz, and these dips align with each other. In contrast, the shot noise, which has no dip, remains nearly constant within the target frequency band and contributes to the baseline of DECIGO's sensitivity. Furthermore, Table 4 shows the parameters that minimize noise in relation to the gravitational wave signal, resulting in a calculated SNR of 79.6. The more refined model presented in this paper gives a different result than the less-detailed modeling (SNR=141), as shown in Fig. 5. In this paper, the local oscillator is extracted from the incoming light directed towards the cavity. As a result, the laser light going to the sub-cavities is weaker, and the noise is relatively higher due to the contribution entering from the vacuum fluctuations. In addition, the local oscillator must have a large amplitude in order to obtain an arbitrary homodyne angle. In the previous research, we considered a hypothetical situation in which the local laser power is infinite, independent of the incoming light directed towards the cavity. However, since this is not possible in reality, we obtained a degraded but realistic sensitivity. In contrast, the SNR without optical-spring quantum locking is 1.74, indicating that optical-spring quantum locking significantly helps reduce quantum noise even when the effect of vacuum fluctuations is considered.
Figure 4: Optimized sensitivity curves with contributions from noise sources. The total noise is represented by the thick blue line, while each quantum noise caused by fluctuations of the laser light or vacuum fluctuations is shown as thin lines. Especially, quantum noises originating from vacuum fluctuations are depicted as dashed lines. Additionally, the green or yellow lines represent signals obtained from only sub-cavity 1. Therefore, note that when adding the signal from sub-cavity 2 to this figure, the curve will be scaled up by a factor of \(\sqrt{2}\).
## VI Conclusions
To assess the efficacy of homodyne detection in sub-cavities within the optical-spring quantum locking for mitigating noise in DECIGO, it was imperative to consider a detector configuration that accounts for the vacuum fluctuations. In this paper, we investigated the determination of the homodyne angle by employing extracted light serving as a local oscillator, and subsequently constructed a comprehensive block diagram incorporating vacuum fluctuations. Moreover, this particular configuration enables the utilization of two photodetectors, which sets it apart from previous DECIGO configurations that allowed only the use of a single photodetector. We presented an optimization method for processing the signals acquired from each photodetector to obtain the optimized sensitivity curve. By optimizing the parameters with these configurations, the impact of vacuum fluctuations was evaluated. Although the effect of vacuum fluctuation results in a sensitivity slightly inferior to that of the ideal situation shown in the previous research, it still exhibits a marked improvement compared to the scenario without optical-spring quantum locking. Consequently, we demonstrate that homodyne detection in optical-spring quantum locking is an effective technique for noise reduction in DECIGO, with the potential to significantly contribute to the observation of primordial gravitational waves.
## Acknowledgements
We would like to thank David H. Shoemaker for commenting on a draft. This work was supported by JSPS KAKENHI, Grants No. JP19H01924 and No. JP22H01247. This work was also supported by Murata Science Foundation.
|
2303.02166 | Towards a GML-Enabled Knowledge Graph Platform | This vision paper proposes KGNet, an on-demand graph machine learning (GML)
as a service on top of RDF engines to support GML-enabled SPARQL queries. KGNet
automates the training of GML models on a KG by identifying a task-specific
subgraph. This helps reduce the task-irrelevant KG structure and properties for
better scalability and accuracy. While training a GML model on KG, KGNet
collects metadata of trained models in the form of an RDF graph called KGMeta,
which is interlinked with the relevant subgraphs in KG. Finally, all trained
models are accessible via a SPARQL-like query. We call it a GML-enabled query
and refer to it as SPARQLML. KGNet supports SPARQLML on top of existing RDF
engines as an interface for querying and inferencing over KGs using GML models.
The development of KGNet poses research opportunities in several areas,
including meta-sampling for identifying task-specific subgraphs, GML pipeline
automation with computational constraints, such as limited time and memory
budget, and SPARQLML query optimization. KGNet supports different GML tasks,
such as node classification, link prediction, and semantic entity matching. We
evaluated KGNet using two real KGs of different application domains. Compared
to training on the entire KG, KGNet significantly reduced training time and
memory usage while maintaining comparable or improved accuracy. The KGNet
source-code is available for further study | Hussein Abdallah, Essam Mansour | 2023-03-03T17:41:11Z | http://arxiv.org/abs/2303.02166v1 | # Towards a GML-Enabled Knowledge Graph Platform
###### Abstract
This vision paper proposes KGNet, an on-demand graph machine learning (GML) as a service on top of RDF engines to support GML-enabled SPARQL queries. KGNet automates the training of GML models on a KG by identifying a task-specific subgraph. This helps reduce the task-irrelevant KG structure and properties for better scalability and accuracy. While training a GML model on \(KG\), KGNet collects metadata of trained models in the form of an RDF graph called KGMeta, which is interlinked with the relevant subgraphs in \(KG\). Finally, all trained models are accessible via a SPARQL-like query. We call it a GML-enabled query and refer to it as SPARQL\({}^{\text{ML}}\). KGNet supports SPARQL\({}^{\text{ML}}\) on top of existing RDF engines as an interface for querying and inferencing over KGs using GML models. The development of KGNet poses research opportunities in several areas, including meta-sampling for identifying task-specific subgraphs, GML pipeline automation with computational constraints, such as limited time and memory budget, and SPARQL\({}^{\text{ML}}\) query optimization. KGNet supports different GML tasks, such as node classification, link prediction, and semantic entity matching. We evaluated KGNet using two real KGs of different application domains. Compared to training on the entire KG, KGNet significantly reduced training time and memory usage while maintaining comparable or improved accuracy. The KGNet source-code1 is available for further study.
Footnote 1: [https://github.com/CoDS-GCS/KGNET](https://github.com/CoDS-GCS/KGNET)
## I Introduction
Knowledge graphs (KGs) are constructed based on semantics captured from heterogeneous datasets using various Artificial Intelligence (AI) techniques, such as representation learning and classification models [1]. Graph machine learning (GML) techniques, such as graph representation learning and graph neural networks (GNNs), are powerful tools widely used to solve real-world problems by defining them as prediction tasks on KGs. For instance, node classification tasks for problems, such as recommendations [2] and entity alignment [3], can be solved using GML techniques. Similarly, drug discovery [4] and fraud detection [5, 6] problems are tackled as link prediction tasks using GML techniques.
Data scientists often work with KGs, which are typically stored in RDF engines. They are responsible for developing GML pipelines using frameworks, such as PyG [7] and DGL [8], to train models on these KGs. However, there is often a gap between the GML frameworks and RDF engines. This necessitates an initial step of transforming the entire KG from RDF triple format into adjacency matrices in a traditional GML pipeline. Afterward, the data scientist needs to select a suitable GML method from a wide range of KG embedding (KGE) or GNN methods [9, 10] to train the model. For the average user, this responsibility is time-consuming. Furthermore, the trained models are isolated from the RDF engine, where the KG is stored. Therefore, automating the training of GML models on KGs and providing accessibility to the trained models via a SPARQL-like query is essential. We refer to this query as a SPARQL\({}^{\text{ML}}\) query.
The KG shown in Figure 1 contains information about published papers in DBLP [11]. However, the traditional SPARQL query language cannot be used to apply GML models on top of a KG, such as predicting a node's class or a missing affiliation link for an author. For instance, the venue node in Figure 1 is a virtual node that could be predicted using a node classification (NC) model. It would be fascinating to query this KG using a GML model for NC through a SPARQL-like query to obtain the paper-venue node, as shown in the SPARQL\({}^{\text{ML}}\) query in Figure 2. This query uses a model of type _kgnet:NodeClassifier_ to predict a venue for each paper. The SPARQL triple patterns in lines 8-10 will retrieve all models of type _kgnet:NodeClassifier_ that predict a class of type _dblp:venue_. In the triple pattern \(\langle\texttt{?paper},\texttt{?NodeClassifier},\texttt{?venue}\rangle\), we refer to _?NodeClassifier_ as a user-defined predicate.
Fig. 1: A KG with nodes/edges in red, which could be predicted by classification and link prediction models on the fly.
Fig. 2: SPARQL\({}^{\text{ML}}_{pv}\): a SPARQL\({}^{\text{ML}}\) query uses a node classification model to predict a paperβs venue by querying and inferencing over the KG shown in Figure 1.
Enabling queries like SPARQL\({}_{pv}^{\text{ML}}\), shown in Figure 2, presents several challenges. These include: (_i_) automatically training GML models for various tasks, (_ii_) optimizing SPARQL\({}^{\text{ML}}\) for GML model selection based on accuracy and inference time, and (_iii_) efficiently interacting with the selected model during query execution. Additionally, seamless integration of GML models into RDF engines is necessary. As a result, users should be able to express their SPARQL\({}^{\text{ML}}\) queries easily by following the SPARQL logic of pattern matching, avoiding the explicit use of user-defined functions (UDFs).
There is a growing adoption of integrating GML with existing graph databases, such as Neo4j [12] or Stardog [13]. However, while these databases offer some machine learning primitive methods, such as PageRank and shortest-path using the _Cypher_ language, they do not address the challenges of integrating GML models with RDF engines. For example, Neo4j Graph Data Science [14] supports limited graph embedding methods in a beta version, such as FastRP [15], Node2Vec [16], and GraphSAGE [17]. However, a user must train the models separately as an initial step. To address these challenges, there is a need to bring GML to data stored in RDF engines instead of getting data to machine learning pipelines. This would encourage the development of KG data science libraries powered by the expressiveness of SPARQL, enabling better analysis and insight discovery based on KG structure and semantics. These libraries would empower data scientists with a full breadth of KG machine learning services on top of KGs stored in RDF engines.
This vision paper proposes KGNet, an on-demand GML-as-a-service on top of RDF engines to support SPARQL\({}^{\text{ML}}\) queries, as illustrated in Figure 3. KGNet extends existing RDF engines with two main components: GML-as-a-service (GMLaaS) and SPARQL\({}^{\text{ML}}\) as a Service. KGNet automatically trains a GML model on a KG for tasks such as node classification or link prediction, and maintains metadata of the trained model as an RDF graph called _KGMeta_. To reduce training time and memory usage while improving accuracy on a specific task \(\mathcal{A}\), KGNet performs meta-sampling to identify a task-specific subgraph \(KG^{\prime}\) of the larger KG that preserves essential characteristics relevant to \(\mathcal{A}\). This enables KGNet to scale on large KGs. GMLaaS is in charge of: (_i_) selecting the near-optimal GML method for training \(\mathcal{A}\) using \(KG^{\prime}\) based on a given time or memory budget, (_ii_) communicating with RDF engines via HTTP calls requesting inferencing of a specific trained model, and (_iii_) storing the trained models and embeddings related to KGs. The SPARQL\({}^{\text{ML}}\) service transparently: (_i_) maintains and interlinks the KGMeta with associated KGs, (_ii_) optimizes the GML model selection for a user-defined predicate, and (_iii_) finally rewrites the SPARQL\({}^{\text{ML}}\) query as a SPARQL query.
In summary, the contributions of this paper are:
* a fully-fledged GML-enabled KG platform2 on top of existing RDF engines. Footnote 2: [https://github.com/CoDS-GCS/KGNET](https://github.com/CoDS-GCS/KGNET)
* GML-as-a-service to provide automatic training of GML models based on a given memory or time budget. This automatic training utilizes task-specific subgraphs extracted using our meta-sampling approach.
* SPARQL\({}^{\text{ML}}\) as a Service to perform meta-sampling, maintain training meta-data in KGMeta, and optimize the GML model selection, i.e., opt for the near-optimal model based on constraints on accuracy and inference time.
* A comprehensive evaluation with different GML methods using three GML tasks on real KGs. Our experiments show that KGNet achieved comparable or improved accuracy compared to training on the entire KG, while significantly reducing training time and memory usage.
The remainder of this paper is organized as follows. Section II provides a background about existing graph machine learning pipelines. Section III outlines the main research challenges of developing a GML-enabled KG engine. Section IV presents the KGNet platform. Section V discusses the results of evaluating our automated pipeline for training GML models. Sections VI and VII are related work and conclusion.
## II Background: ML pipelines for KGs
ML pipelines developed to train models on a KG can be grouped into three main categories: (_i_) traditional ML on KG data in tabular format, (_ii_) traditional ML on KG embeddings, and (_iii_) graph neural networks (GNNs) trained directly on the KG. In the traditional ML approach using KG data in tabular format, data from the KG is transformed into in-memory data frames, and classical ML classifiers are trained using feature engineering techniques and libraries, such as Scikit-Learn or Spark MLlib. In contrast, traditional ML on KG embeddings avoids the feature engineering process and generates embeddings for nodes and edges. Apple Saga [18] is an example of this approach, which uses graph ML libraries like DGL-KE [8] to generate KG embeddings. Data scientists have the flexibility to choose the ML method for training.
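As an illustration of the first category, the sketch below pulls tabular features from a SPARQL endpoint and trains a scikit-learn classifier. The endpoint URL, the DBLP predicates, and the feature choice are illustrative assumptions, and the snippet requires a reachable endpoint.

```python
import pandas as pd
import requests
from sklearn.ensemble import RandomForestClassifier

ENDPOINT = "http://localhost:8890/sparql"  # hypothetical Virtuoso endpoint
QUERY = """
SELECT ?paper ?year ?venue WHERE {
  ?paper <https://dblp.org/rdf/schema#yearOfPublication> ?year ;
         <https://dblp.org/rdf/schema#publishedIn> ?venue .
} LIMIT 10000
"""

resp = requests.get(ENDPOINT, params={"query": QUERY,
                                      "format": "application/sparql-results+json"})
bindings = resp.json()["results"]["bindings"]
df = pd.DataFrame([{k: v["value"] for k, v in row.items()} for row in bindings])

X = pd.get_dummies(df[["year"]])   # toy feature engineering
y = df["venue"]
clf = RandomForestClassifier().fit(X, y)
```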
GNNs have gained significant popularity in recent years. Hence, data scientists frequently utilize them to perform GML tasks. The Open Graph Benchmark (OGB) [19] standardized the GNN training pipeline, emphasizing the best practices for tackling GML tasks and building a GNN training pipeline. Figure 4 summarizes this pipeline, which involves encoding KG nodes and edges, generating adjacency matrices, loading them into memory, and training GNNs using specific methods.

Fig. 3: The KGNet architecture, which provides an interface language (SPARQL\({}^{\text{ML}}\)) and enables AI applications and data scientists to automatically train GML models on top of KGs for querying and inferencing KGs based on the trained models.
Various GML frameworks, such as DGL [8] and PyG [7], offer multiple implementations of GNN methods. These frameworks support data transformation by loading graphs into memory as graph data structures and applying transformations. However, existing GML frameworks require significant memory and processing time for large KGs and a deep understanding of various GNN methods. In comparison, the OGB pipeline is simple, but it is a semi-automated process that necessitates human intervention and ML expertise to construct an effective pipeline and select an appropriate GNN method. Data scientists may choose the most appropriate GNN method based on various constraints, such as time or memory limitations. Furthermore, as depicted in Figure 4, the separation of the trained models from the KG engines adds an extra layer of complexity for data scientists to apply their models when inferring the KG.
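A minimal sketch of the data-transformation step both pipelines share: mapping RDF triples to integer identifiers and building one sparse adjacency matrix per relation, the representation that graph data loaders consume. The triples below are toy data.

```python
import numpy as np
from scipy.sparse import coo_matrix

triples = [
    ("paper1", "cites", "paper2"),
    ("paper1", "writtenBy", "author1"),
    ("paper2", "writtenBy", "author1"),
]

# Encode entities and relations as integer ids.
entities = sorted({s for s, _, _ in triples} | {o for _, _, o in triples})
node_id = {n: i for i, n in enumerate(entities)}
relations = sorted({p for _, p, _ in triples})

# One sparse adjacency matrix per relation type.
adjacency = {}
for rel in relations:
    src = [node_id[s] for s, p, o in triples if p == rel]
    dst = [node_id[o] for s, p, o in triples if p == rel]
    adjacency[rel] = coo_matrix((np.ones(len(src)), (src, dst)),
                                shape=(len(entities), len(entities)))

print({rel: m.nnz for rel, m in adjacency.items()})
```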
## III Challenges of GML-enabled KG Engine
This section highlights the open research challenges and opportunities raised by developing GML-enabled KG Engine.
### _Automatic Training: Method Selection and Meta-sampling_
There are numerous methods for training models for GML tasks, as summarized in Figure 5. These methods could be classified mainly into two categories KG embeddings (KGE) or graph neural network (GNN) methods. Examples of KGE methods are TransE, RotatE, ComplEx, and DistMult [10]. Some GNN methods support sampling on full graph, such as Graph-SAINT [20], Shadow-SAINT [21], and MorsE [22]. Examples of GNN full-batch training (without sampling) methods are RGCN [23] and GAT [24]. Our taxonomy has more categories, as shown in Figure 5.
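For concreteness, the scoring functions of two of the KGE methods named above (TransE and DistMult) are sketched below over randomly initialised embeddings; the training loop and loss are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 64, 1000, 10
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    # TransE: plausible triples have a small translation residual ||h + r - t||.
    return -float(np.linalg.norm(E[h] + R[r] - E[t]))

def distmult_score(h, r, t):
    # DistMult: trilinear product <h, r, t>.
    return float(np.sum(E[h] * R[r] * E[t]))

print(transe_score(0, 1, 2), distmult_score(0, 1, 2))
```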
GML methods vary significantly in terms of their accuracy, training time, and memory requirements. Furthermore, the complexity of each GML task may differ depending on various factors, such as the size of KGs and the number of node/edge types related to the task. For example, link prediction can be more resource-intensive than node classification. Different GML methods may perform differently under the same budget constraints, and selecting the best method can depend on several factors. Hence, automating a training pipeline for a specific GML task based on a user's budget for time and memory is challenging. For instance, some GML methods perform full-batch training, which requires more memory budget. These methods require huge memory to train models on large KGs. Some other GNN methods may suffer from over-smoothing, which can cause accuracy degradation. Sampling-based GNN (mini-batch training) methods use different types of sampling, which vary in avoiding these limitations. Therefore, automating the selection of GML methods for a specific task based on a given time or memory budget is challenging.
Real KGs can contain millions to billions of triples, such as DBLP [11] and MAG [25]. However, training GML models on these large KGs requires colossal computing resources that exceed the capabilities of a single machine. As a result, there is a need for identifying a smaller training dataset of the KG, which is specific to the task at hand. This process is known as meta-sampling. It has been proposed in various application domains, including computer vision [26, 27] and speech recognition [28], to extract a training dataset that is tailored to the given task. In the context of GML, meta-sampling presents an opportunity to optimize training models on large KGs by selecting a representative sub-graph that is relevant to the task. This approach can help reduce time and memory requirements without sacrificing accuracy. Therefore, exploring the potential benefits of using meta-sampling in training GML models to extract task-specific subgraphs is crucial. By doing so, we can improve the efficiency and effectiveness of GML methods on large-scale KGs. This raises a research opportunity to explore different meta-sampling approaches for GML methods on large knowledge graphs (KGs).
### _Seamless Integration Between GML Models and KGs_
Enabling GML on top of RDF engines poses significant challenges, mainly interfacing between the trained models and the underlying data management engine. One common approach is to use user-defined functions (UDFs) to implement this interface [29, 30, 31]. However, this comes with a cost for query optimizations in data systems [32]. The existence of an extensive catalog of UDFs can limit the expressiveness of ML-based queries. For instance, a large catalog of UDFs makes it difficult for users to choose between UDFs and find the right one for their needs. Most existing query optimizers do not have models estimating the cost of these UDFs. Hence, automating the query optimization of SPARQL\({}^{\text{ML}}\) queries is challenging. There is a research opportunity for seamless integration between trained GML models and RDF engines. To address these challenges, we proposed KGMeta as a graph representation of metadata of trained models interlinked with the KGs.

Fig. 5: A taxonomy of methods for training GML models.
right one for their needs. Most existing query optimizers do not have models estimating the cost of these UDFs. Hence, automating the query optimization of SPARQL\({}^{\text{ML}}\) queries is challenging. There is a research opportunity for seamless integration between trained GML models and RDF. To address these challenges, we proposed KGMeta as a graph representation of metadata of trained models interlinked with the KGs.
### _Optimizing SPARQL\({}^{\text{ML}}\) Queries and Benchmarks_
User-defined predicates were first proposed for SQL [33]. In SPARQL\({}^{\text{ML}}\), a user-defined predicate is used to get a prediction from one of the trained models associated with a specific node in the graph. Estimating the cost of evaluating a user-defined predicate is more complex than estimating the cost of a traditional RDF predicate. While cardinality estimation is used to optimize only the execution time for the latter, a user-defined predicate in a SPARQL\({}^{\text{ML}}\) query can be inferred by multiple models, each with varying accuracy and inference time. RDF engines are unaware of this information, leading to the problem of selecting the best model for inference.
For a SPARQL\({}^{\text{ML}}\) query, the inference step in an RDF engine using a chosen model is a challenging task that requires optimization, specifically for rank-ordering the inference process. The challenge lies in deciding whether to perform the inference in a single call to a UDF or per instance, which may result in an extensive number of UDF calls. Additionally, each model has a unique cardinality, i.e., the total number of predictions it can make. This makes predicting rank-ordering complex as RDF engines lack accurate estimation of UDF costs.
To address these challenges, there are research opportunities for developing benchmarks to evaluate optimization approaches for SPARQL\({}^{\text{ML}}\) queries. These benchmarks should consider various models for different user-defined predicates and be designed to work with large datasets. Furthermore, each SPARQL\({}^{\text{ML}}\) query should vary in the number of user-defined predicates and be associated with variables of different cardinalities. This will enable a comprehensive evaluation of the performance and scalability of varying optimization approaches for SPARQL\({}^{\text{ML}}\) queries.
## IV The KGNet Platform
KGNet provides two main services, namely GML as a Service (GMLaaS) and SPARQL\({}^{\text{ML}}\) as a Service on top of existing RDF engines, as shown in Figure 3.
### _GML as a Service (GMLaaS)_
KGNet is a platform that offers end-to-end automation of GML training on KGs, as depicted in Figure 6. The platform provides _GMLaaS_, a Restful service that manages GML models in terms of automatic training and interactive inferencing. Additionally, it utilizes an embedding store to facilitate entity similarity search tasks by computing the similarity between embedding vectors. The _GML training manager_ automates the training pipeline per task. However, the automation of GML training on KGs is challenging due to the complexity and size of KGs. Therefore, KGNet leverages our meta-sampling approach to optimize the training process by selecting a task-specific subgraph (\(KG^{\prime}\)) that is specific to the given task. This step helps reduce the time and memory required without trading accuracy. The pipeline takes as input a task-specific subgraph (\(KG^{\prime}\)), the GML task, the task budget, and the available resources within the ML environment.
The _Data Transformer_ step converts the subgraph into a sparse-matrix format optimized for in-memory and matrix operations. This format is compatible with popular graph ML data loaders, such as PyTorch Geometric (PyG) and DGL, and is ideal for sparse KGs. Our pipeline ensures data consistency by validating node/edge type counts, removing literal data and target class edges, and generating graph statistics. We also perform a train-validation-test split using different strategies like random and community-based.
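A minimal sketch of the random split strategy mentioned above, dividing the task's target nodes into train/validation/test index sets; the fractions are illustrative defaults.

```python
import numpy as np

def random_split(num_targets, train_frac=0.8, valid_frac=0.1, seed=42):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_targets)
    n_train = int(train_frac * num_targets)
    n_valid = int(valid_frac * num_targets)
    return {"train": idx[:n_train],
            "valid": idx[n_train:n_train + n_valid],
            "test":  idx[n_train + n_valid:]}

split = random_split(1_200_000)  # e.g. the ~1.2M DBLP papers of Table I
print({k: len(v) for k, v in split.items()})
```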
The _Optimal GML Method Selection_ step selects the best GML method for a given task. KGNet supports various GNN methods, including GCN, RGCN, GraphSAINT, ShadowSAINT, and MorsE, as well as KGE methods such as ComplEx. We estimate the required memory for each method based on the size and the number of generated sparse matrices. Moreover, we estimate the training time based on the dimension of the sparse matrices and the neighbour-node feature aggregation approach adopted by each GNN method. For GNN sampling-based methods, the sampling cost basically depends on the sampling heuristic used [34]. Thus, we are working on a more advanced estimation method based on sampling the sparse matrices and running a few epochs on them.

Fig. 6: The automation of training pipeline and inference in our GML-as-a-service (GMLaaS). GMLaaS interacts with the KGMeta Manager to train a model for a specific task with a limited budget. The automated pipeline opts for the near-optimal GML method for training a model within a limited budget. GMLaaS supports task inference through a RestAPI that is called by a UDF.
KGNet's GML-optimizer determines the necessary resources for each method and optimizes the training settings, ensuring scalability in distributed environments. The automated pipeline trains a model and collects evaluation metrics and inference time statistics. A URI is generated for the trained model to distinguish it from other models used for inference tasks. The model meta-data is returned to the KGMeta Manager to update the KGMeta graph. Figure 7.a and b show the generated meta-data for link prediction and node classification models, respectively. The _Embedding Store_ sub-component, shown in Figure 3, is used for fast similarity search by storing, indexing, and searching embeddings. The _GML Inferencing_ receives HTTP calls for inference, serializes the result into a JSON Restful-API response, and sends it back to the RDF engine, as shown in Figure 3. The current version uses FAISS embedding store [35] to enable ad-hoc queries for node similarity search.
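A small sketch of the embedding-store role described above, using FAISS for ad-hoc node similarity search over trained embeddings; the vectors here are random stand-ins and the snippet assumes the faiss package is installed.

```python
import numpy as np
import faiss

dim = 128
node_embeddings = np.random.rand(10_000, dim).astype("float32")  # stand-in for trained embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 index
index.add(node_embeddings)      # store all node vectors

query = node_embeddings[:1]     # nodes most similar to node 0
distances, neighbours = index.search(query, 5)
print(neighbours)
```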
### _The SPARQL\({}^{\text{ML}}\) as a Service_
We offer a SPARQL\({}^{\text{ML}}\) as a Service, which comprises three main components: Query Manager, KGMeta Governor, and Meta-sampler. In addition, we provide an interfacing language called SPARQL\({}^{\text{ML}}\) that enables users to express SPARQL-like queries for INSERT, DELETE, or SELECT operations, such that: (_i_) a SPARQL\({}^{\text{ML}}\)_INSERT_ query is used to train a GML model and maintain its metadata in KGMeta (as shown in Figure 8), (_ii_) a SPARQL\({}^{\text{ML}}\)_DELETE_ query is used to delete trained model files and associated embeddings from the GML-aaS component and then deletes its metadata from the KGMeta (as in Figure 9), (_iii_) a SPARQL\({}^{\text{ML}}\)_SELECT_ query is for querying and inferencing the KG, e.g., the query in Figure 10. When a SPARQL\({}^{\text{ML}}\) query is received, the Query Manager parses it. An INSERT or DELETE query is sent to the KGMeta Governor. If it is a SELECT query, it is optimized and rewritten as a SPARQL query.
#### IV-B1 KGMeta Governor
The KGMeta Governor maintains a KGMeta graph for each KG, using statistics and metadata collected from trained GML models specific to that KG. The INSERT query is a request to train a task on a certain KG. The parsed information includes the task type (such as node classification or link prediction), the task inputs (such as the target nodes and classification labels (Y classes) for a classification task), and a budget (such as memory and time budget). Experienced ML users can provide additional information, such as hyperparameters or a specific GML method. This information is encapsulated as a JSON object, as shown in Figure 8. At line 4, _TrainGML_ is a UDF that takes as input a JSON object that encapsulates all required information to train a GML model. The KGMeta Governor sends the task to the meta-sampler to obtain a task-specific subgraph (\(KG^{\prime}\)) for the given task. Then, the governor interacts with the GML Training Manager to automate the training pipeline for this task. Once training is complete, the KGMeta Governor receives the trained model's metadata, including accuracy and inference time, to maintain the KGMeta, as illustrated in Figure 7.

Fig. 8: A SPARQL\({}^{\text{ML}}\) insert query that trains a paper-venue classifier on DBLP. The TrainGML function is a UDF that is implemented inside the RDF engine.

Fig. 9: A SPARQL\({}^{\text{ML}}\) delete query that deletes a trained model and its meta-data.

Fig. 10: A SPARQL\({}^{\text{ML}}\) query predicting author affiliation link (edge) on DBLP KG.
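A hypothetical sketch of the JSON object handed to the _TrainGML_ UDF of Figure 8; the field names below are illustrative assumptions, since the exact schema is defined by KGNet.

```python
import json

# Illustrative payload only: field names are hypothetical, not the official KGNet schema.
train_request = {
    "task": "NodeClassification",
    "targetNodeType": "dblp:Publication",
    "labelPredicate": "dblp:venue",
    "budget": {"maxMemoryGB": 64, "maxTrainingHours": 6},
    # optional fields for experienced users:
    "method": "GraphSAINT",
    "hyperparameters": {"epochs": 30, "embeddingDim": 128},
}
print(json.dumps(train_request, indent=2))
```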
#### IV-B2 Meta-sampler
Our meta-sampler aims to identify a task-specific subgraph (\(KG^{\prime}\)) for training a GML task. Each GML task targets nodes of a specific type, such as dblp:Publication in SPARQL\({}_{pv}^{\text{ML}}\). Our meta-sampler extracts a task-specific subgraph (\(KG^{\prime}\)), which comprises a set of representative triples associated with the target nodes. Based on the KG schema structure, the size of \(KG^{\prime}\) is much smaller than the size of the KG. This smaller size will optimize training time and require less memory for training the GML task \(\mathcal{A}\). However, the KG may contain triples that are not reachable from a target node \(v^{T}\) or connected via more than three hops from \(v^{T}\). These triples do not assist the model in generalizing and may lead to over-smoothing problems [36, 37].
Our SPARQL-based meta-sampling method determines the scope of the extracted subgraph based on two parameters: (_i_) the direction \(d\), where \(d=1\) for outgoing and \(d=2\) for bidirectional (i.e., both outgoing and incoming), and (_ii_) the number of hops \(h\). We evaluated the performance of our method using four combinations of \(d\in\{1,2\}\) and \(h\in\{1,2\}\). Our meta-sampling approach achieved better results with \(d=1\) and \(h=1\) for node classification, whereas for link prediction, our meta-sampling method performed better with \(d=2\) and \(h=1\).
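A sketch of the \(d=1\), \(h=1\) meta-sampling query: fetch the outgoing one-hop triples of all target nodes of a given type from the SPARQL endpoint. The endpoint URL and target type IRI are illustrative, and the response format parameter assumes a Virtuoso-style endpoint.

```python
import requests

ENDPOINT = "http://localhost:8890/sparql"               # hypothetical endpoint
TARGET_TYPE = "https://dblp.org/rdf/schema#Publication"  # illustrative target node type

# d=1 (outgoing), h=1 (one hop): all triples whose subject is a target node.
QUERY = f"""
CONSTRUCT {{ ?s ?p ?o }}
WHERE {{
  ?s a <{TARGET_TYPE}> .
  ?s ?p ?o .
}}
"""

resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "text/turtle"})
task_subgraph = resp.text  # KG' in Turtle, ready for the Data Transformer
```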
#### IV-B3 The Query Manager
The Query Manager is responsible for optimizing SPARQL\({}^{\text{ML}}\) queries for model selection and rank-ordering to evaluate user-defined predicates. In the case of SPARQL\({}_{pv}^{\text{ML}}\) shown in Figure 2, the query optimizer fetches all URIs of the models satisfying the conditions associated with the user-defined predicate _?NodeClassifier_. The KGMeta is an RDF graph containing optimizer statistics, such as model accuracy, inference time, and model cardinality. Therefore, we use a SPARQL query to obtain the models' URIs, accuracy, inference time, and cardinality. The query optimizer selects the near-optimal GML model that achieves high accuracy and low inference time. We define this problem as an integer programming optimization problem to minimize total execution time or maximize inference accuracy.
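The model-selection step can be illustrated with a simplified greedy variant of that optimization: among the candidate models returned from KGMeta, keep those meeting an inference-time constraint and pick the most accurate. The model URIs and statistics below are made up for illustration; the actual optimizer solves an integer program.

```python
# Candidate models as they might be fetched from KGMeta (values are illustrative).
candidates = [
    {"uri": "kgnet:model/101", "accuracy": 0.84, "inference_ms": 35},
    {"uri": "kgnet:model/102", "accuracy": 0.88, "inference_ms": 120},
    {"uri": "kgnet:model/103", "accuracy": 0.81, "inference_ms": 12},
]

def select_model(models, max_inference_ms):
    feasible = [m for m in models if m["inference_ms"] <= max_inference_ms]
    return max(feasible, key=lambda m: m["accuracy"]) if feasible else None

print(select_model(candidates, max_inference_ms=50))
```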
The _SPARQL\({}^{\text{ML}}\) Query Re-writer_ uses the near-optimal GML model with URI \(m\) to generate a candidate SPARQL query. KGNet currently supports two possible execution plans, whose query templates are shown in Figures 11 and 12. The core idea is to map a user-defined predicate into a user-defined function (UDF), such as _sql:UDFS.getNodeClass_, to send HTTP calls during the execution time to the GML Inference Manager in our GMLaaS to get inference based on the pre-trained model \(m\). The number of HTTP calls may dominate the query execution cost. For example, SPARQL\({}_{pv}^{\text{ML}}\) predicts the venue of all papers, whose number is \(|papers|\).
The query template shown in Figure 11 will generate \(|papers|\) HTTP calls. However, the query template shown in Figure 12 reduces the number of HTTP calls to one by enforcing an inner select query constructing a dictionary of all papers and their predicted venues. Then, _sql:UDFS.getKeyValue_ is used to look up the venue of each paper. Our query optimizer decomposes the triple patterns querying the KG triples in the SPARQL\({}^{\text{ML}}\) query into sets per variable associated with a user-defined predicate. For example, in the SPARQL\({}_{pv}^{\text{ML}}\) query shown in Figure 2, our optimizer identifies two triple patterns that match the variable _?paper_ and one triple pattern that matches the variable _?venue_. We use a SPARQL query to get the cardinality of each set, which is the number of distinct values of the variable in the dataset. We formulate this problem as another integer programming optimization problem [38] that minimizes the total number of HTTP calls or minimizes the constructed dictionary size, which is based on the model cardinality. For instance, in the query shown in Figure 12, our optimizer generates a dictionary of all papers and their predicted venues, which is then used to retrieve the venue of each paper using the UDF _sql:UDFS.getKeyValue_.
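The two plans can be mimicked in a few lines: the per-instance plan issues one inference call per paper, while the dictionary plan of Figure 12 issues a single batched call and then performs local lookups. The GMLaaS URL and payload shape below are hypothetical.

```python
import requests

GMLAAS_URL = "http://localhost:5000/gml_inference"  # hypothetical GMLaaS inference endpoint

def get_predictions(model_uri, nodes):
    """Single batched call: returns a {node: predicted_class} dictionary."""
    resp = requests.post(GMLAAS_URL, json={"model": model_uri, "nodes": nodes})
    return resp.json()

def get_key_value(dictionary, key):
    """The role played by sql:UDFS.getKeyValue in the rewritten SPARQL query."""
    return dictionary.get(key)

# Example usage (requires a running GMLaaS service):
# predictions = get_predictions("kgnet:model/101", all_paper_uris)
# venue = get_key_value(predictions, "dblp:paper_1")
```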
## V Experimental Evaluation
This section analyzes the ability of KGNet to automate pipelines that train a model for a specific task with less time and memory than traditional pipelines on full graphs.
### _Evaluation Setup_
**Compared Methods:** We used RGCN [23] as a full-batch training method, GraphSAINT [20] and ShadowSAINT [21] as mini-batch sampling-based methods for node classification, and MorsE [22] as an edge sampling-based method for link prediction. The OGB [19] default configurations are used in both sampling and training. Node features are initialized randomly using Xavier weight initialization in all experiments.
**Computing Infrastructure:** All experiments are conducted on Ubuntu server virtual machine that is equipped with dual 64-core Intel Xeon 2.4 GHZ (Skylake, IBRS) CPUs, 256 GB of main memory and 1TB of disk storage.
**Real KGs:** We mainly focus on two benchmark KGs distinguishing in graph size, graph data domain, task type, and connection density including (DBLP [11] and Yago-4
Fig. 11: A candidate SPARQL for SPARQL\({}_{pv}^{\text{ML}}\)
Fig. 12: A candidate SPARQL for SPARQL\({}_{pv}^{\text{ML}}\)
[39]). We conducted two node classification tasks and one link prediction. We followed the tasks used in OGB [19]. Statistics about used KG and tasks are provided in Table I.
**Endpoints:** We use Virtuoso 07.20.3229 as a SPARQL endpoint, as it is widely adopted as an endpoint for large KGs, such as DBLP. The standard, unmodified installation of the Virtuoso engine was run at the endpoints and used in all experiments.
### _GML Experiments With Real KGs_
Three GML tasks are conducted to evaluate the KGNet automated GML pipeline. For **Node classification task**, GNN methods are used to train node classifiers to predict a venue for each DBLP paper. The KG is loaded into the Virtuoso RDF engine. KGNet performs meta-sampling using \(d1h1\) to extract the task-specific subgraph (\(KG^{\prime}\)) to train RGCN, GraphSAINT, and Shadow-SAINT methods. The task results in Figures 13 and 14 show that our KGNet training pipeline using (\(KG^{\prime}\)) outperforms the traditional pipeline on the full KG in all methods with up to 11% accuracy score. The automated training pipeline of KGNet has successfully enabled GNN methods to achieve significant reductions in memory consumption and training time. Specifically, KGNet has demonstrated a reduction of at least 22% in memory consumption and 27% in training time. These results demonstrate that KGNet can effectively discover task-specific subgraphs for each task.
Our **Link prediction task** aims to predict an author's affiliation link based on their publications and affiliations history on the DBLP knowledge graph. MorsE [22] is the state-of-the-art link-prediction sampling-based method. We use the MorsE in the traditional pipeline with the full KG. In the KGNet pipeline, our meta-sampling first extracts the task-specific subgraph (\(KG^{\prime}\)) using \(d2h1\) to train MorsE. The results, shown in Figure 15, demonstrate that the KGNet automated pipeline outperforms the pipeline trained on the full KG in terms of Hits@10 MRR score. KGNet achieves a significant reduction in memory usage and training time, with a reduction of 94% compared to the pipeline trained on the full KG.
## VI Related Work
The adoption of combining AI and database systems has been growing rapidly, with two main approaches emerging: AI models incorporated in DB systems (AI4DB) and database techniques optimized for AI systems for better scalability (DB4AI) [40]. In KGNet, we classify SPARQL\({}^{\text{ML}}\) as part of the AI4DB approach since we have extended the KG engine to query and perform inference on KGs using GML
\begin{table}
\begin{tabular}{l c c} \hline
**Knowledge Graph** & DBLP & YAGO4 \\ \hline
**\#Triples** & 252M & 400M \\ & 50 Venue & \\
**\#Targets** & 51K Affiliations & 200 Country \\ & 1.2M paper & \\
**\#Edge Types** & 48 & 98 \\
**\#Node Types** & 42 & 104 \\
**Tasks** & NC,L,P,ES & NC \\ \hline \end{tabular}
\end{table} TABLE I: Statistics of the used KGs and GNN tasks. We used four times larger KGs (DBLP and Yago) than the ones reported in OGB [19].
Fig. 14: (a) Accuracy, (B) Training Time, and (C) Training Memory for YAGO-4 KG Place-Country node classification task. The KGNet task-oriented sampled subgraph (KGβ) significantly improves accuracy, training time, and memory.
Fig. 13: (a) Accuracy, (B) Training Time, and (C) Training Memory for DBLP KG Paper-Venue node classification task. The KGNet task-oriented sampled subgraph (KGβ) significantly improves accuracy, training time, and memory.
Fig. 15: (a) Accuracy, (B) Training Time, and (C) Training Memory for the DBLP Author Affiliation link prediction task. The KGNet task-oriented edge sampled subgraph (KGβ) significantly improves the Hits@10 MRR score, training time, and training memory.
models. However, we classify GMLaaS as part of the DB4AI approach since we have optimized the training pipeline using our meta-sampling approach, which queries a KG to extract a task-specific subgraph. Works RDFFrames [41], DistRDF2ML [42], and Apple Saga [18] aim to bridge the gap between ML and RDF systems by enabling the user to extract data from heterogeneous graph engines in a standard tabular format to apply traditional ML tasks such as classification, regression, and clustering or use KGE methods to generate node/edge embeddings for similarity search applications.
Yuo Lu et.al. addressed the problem of AI-enabled query optimization for SQL in [29] and introduced the probabilistic predicates (PPs) method that can be trained without any knowledge of the inference models. In Learned B+tree [43], the B+tree index is optimized based on AI models that map each query key to its page. Hasan et al [44] allow fast join queries by utilizing auto-regressive densities model to represent the joint data distribution among columns. ITLCS [45] introduced an index selection ML-based method that uses a classifier model as well as a genetic algorithm that selects the accurate index. Stardog [13] supports supervised learning to build predictive analytics models. Stardog enables users to write SPARQL queries that collect the ML training features set in a tabular format and apply classical ML, i.e., classification, clustering, and regression that can be used for inference queries.
Google's BigQuery ML [46] provides user-friendly tools to support AI models in SQL statements by introducing a hybrid language model that contains both AI and DB operations, which executes AI operations on AI platforms such as TensorFlow and Keras. SQL4ML [47] translates ML operators implemented in SQL into a TensorFlow pipeline for efficient training. To enable ad-hoc GML pipelines using SPARQL, RDF engines require this level of support.
Bordawekar et al. [48] built a cognitive relation database engine that queries database records utilizing word similarity using word2vec embeddings and extends results with external data sources. The cognitive DB represents a step towards linking representation learning with DB using text embedding techniques. EmbDI [49] automatically learns local relation embeddings with high quality from relational datasets using a word embedding to support datasets schema matching. Unlike all the above-mentioned systems, KGNet proposed a platform combining DB4AI and AI4DB approaches to bridge the gap between GML frameworks and RDF engines.
## VII Conclusion
The lack of integration between GML frameworks and RDF engines necessitates that data scientists manually optimize GML pipelines to retrieve KGs stored in RDF engines and select appropriate GML methods that align with their computing resources. Furthermore, the trained models cannot be directly used for querying and inference over KGs, which impedes systems' scalability, particularly as KGs grow in size and require excessive computing resources. Additionally, these limitations impact the system's flexibility, as descriptive query languages are incapable of incorporating GML models. To overcome these limitations, this vision paper proposed KGNet, an on-demand GML-as-a-service (GMLaaS) platform on top of RDF engines to support GML-enabled SPARQL queries (SPARQL\({}^{\text{ML}}\)). KGNet uses meta-sampling to extract a task-specific subgraph (\(KG^{\prime}\)) as a search query against a KG for a specific task. Our GMLaaS automates a cost-effective pipeline using \(KG^{\prime}\) to train a model within a given time or memory budget. KGNet maintains the metadata and statistics of trained models as an RDF graph called KGMeta, which is stored alongside associated KGs. KGMeta leads to a seamless integration between GML models and RDF engines, allowing users to easily express their SPARQL\({}^{\text{ML}}\) queries based on the SPARQL logic of pattern matching. Moreover, KGMeta enables KGNet to optimize SPARQL\({}^{\text{ML}}\) queries for model selection and rank-ordering for the inferencing process. KGNet raises research opportunities spanning across data management and AI.
|
2307.10095 | The Qudit ZH-Calculus: Generalised Toffoli+Hadamard and Universality | We introduce the qudit ZH-calculus and show how to generalise all the
phase-free qubit rules to qudits. We prove that for prime dimensions d, the
phase-free qudit ZH-calculus is universal for matrices over the ring
Z[e^2(pi)i/d]. For qubits, there is a strong connection between phase-free
ZH-diagrams and Toffoli+Hadamard circuits, a computationally universal fragment
of quantum circuits. We generalise this connection to qudits, by finding that
the two-qudit |0>-controlled X gate can be used to construct all classical
reversible qudit logic circuits in any odd qudit dimension, which for qubits
requires the three-qubit Toffoli gate. We prove that our construction is
asymptotically optimal up to a logarithmic term. Twenty years after the
celebrated result by Shi proving universality of Toffoli+Hadamard for qubits,
we prove that circuits of |0>-controlled X and Hadamard gates are approximately
universal for qudit quantum computing for any odd prime d, and moreover that
phase-free ZH-diagrams correspond precisely to such circuits allowing
post-selections. | Patrick Roy, John van de Wetering, Lia Yeh | 2023-07-19T16:09:48Z | http://arxiv.org/abs/2307.10095v2 | # The Qudit ZH-Calculus:
###### Abstract
We introduce the qudit ZH-calculus and show how to generalise the phase-free qubit rules to qudits. We prove that for prime dimensions \(d\), the phase-free qudit ZH-calculus is universal for matrices over the ring \(\mathbb{Z}[e^{2\pi i/d}]\). For qubits, there is a strong connection between phase-free ZH-diagrams and Toffoli+Hadamard circuits, a computationally universal fragment of quantum circuits. We generalise this connection to qudits, by finding that the two-qudit \(|0\rangle\)-controlled \(X\) gate can be used to construct all classical reversible qudit logic circuits in any odd qudit dimension, which for qubits requires the three-qubit Toffoli gate. We prove that our construction is asymptotically optimal up to a logarithmic term. Twenty years after the celebrated result by Shi proving universality of Toffoli+Hadamard for qubits, we prove that circuits of \(|0\rangle\)-controlled \(X\) and Hadamard gates are approximately universal for qudit quantum computing for any odd prime \(d\), and moreover that phase-free ZH-diagrams correspond precisely to such circuits allowing postselections.
## 1 Introduction
For qubits there are essentially three different graphical calculi: ZX, ZW and ZH [10]. Each of these is suitable for reasoning about different types of structures and quantum gates. The ZX-calculus [11, 12] is the most well-studied of these, and can naturally reason about the Clifford+Phases gate set (containing CNOT, Hadamard, \(S\) as well as arbitrary \(Z\) phase gates) and the useful primitives of phase gadgets and Pauli gadgets [13, 45]. Its _phase-free_ fragment, where the spiders cannot be labelled by a non-trivial phase, corresponds to CNOT circuits (together with ancillae and postselection) and can alternatively be interpreted into a category of linear relations [25]. The ZW-calculus [23, 30] instead can reason about photonic and fermionic computations [24]. The W-spider helps to easily represent sums of linear maps [26, 32, 43]. Its phase-free fragment is universal and complete for matrices over \(\mathbb{Z}\), and here again the W-spider is used to sum up numbers.
The calculus we will be interested in here is the ZH-calculus [3, 4]. Its H-box generator allows for easy representation of gates involving multilinear logic, like the Toffoli or other many-controlled gates. It can represent hyper-graph states [27], the path-sum formalism [28, 36, 37], quantum binary decision diagrams [35] and more [16, 17]. Its phase-free fragment represents the Toffoli+Hadamard gate set and is universal for matrices over \(\mathbb{Z}\)[4]. The H-box here allows for representing the AND operation \(|x\rangle\otimes|y\rangle\mapsto|x\wedge y\rangle\).
The last few years have seen a push towards generalising graphical calculi to work for higher-dimensional qudits. For ZX there is now work on qutrits [34, 38, 42], the prime-dimensional qudit stabiliser fragment [7, 31], and the universal algebraic qudit ZX-calculus [39, 41]. For ZW there are several
different proposals for qudit generalisations [30, 31]. Missing from these proposals is a generalisation for the ZH-calculus.
In this paper we present for the first time a qudit generalisation of the ZH-calculus. We base this translation on extending the representation of Boolean logic in the qubit ZH generators of [4] to arithmetic over \(\mathbb{Z}_{d}\). Then the Z- and X-spiders represent respectively the copy \(x\mapsto(x,x)\) and addition/negation \((x,y)\mapsto-_{d}(x+_{d}y)\), while the H-box represents (up to some Hadamards) the multiplication \((x,y)\mapsto x\cdot_{d}y\), where the subscript \(d\) denotes an operation modulo \(d\). This correspondence makes it easy to represent qudit generalisations of Toffoli-like gates.
In order to motivate this connection, we will first study the qudit generalisation of the Toffoli+Hadamard gate set, which for qubits is known to be computationally universal for quantum circuits [33]. First, we show that whereas the Toffoli suffices to construct all classical reversible qubit logic circuits, for odd-dimensional qudits we can do the same with the \(|0\rangle\)-controlled \(X\) gate. We find that our construction for these qudit classical reversible circuits from the \(|0\rangle\)-controlled \(X\) gate is asymptotically optimal up to a logarithmic factor. Second, we show that the gate set consisting of the \(|0\rangle\)-controlled \(X\) and Hadamard1 gates is approximately universal for quantum computing in all odd prime dimensions. Third, we find that phase-free qudit ZH-diagrams represent precisely postselected circuits over this Hadamard+\(|0\rangle\)-controlled \(X\) gate set.
Footnote 1: Technically in mathematics a Hadamard matrix is a \(\pm 1\) matrix of maximum possible determinant, named after Hadamardβs 1893 article on the matter [22]. However, we follow the convention of other qudit graphical calculi to refer to the \(d\)-dimensional Discrete Fourier Transform as the Hadamard [7].
A considerable part of the paper is devoted to proving that the phase-free ZH-calculus for prime-dimensional qudits is universal for matrices over \(\mathbb{Z}[\omega]\) where \(\omega=e^{2\pi i/d}\) is a \(d\)th root of unity. While proving universality for qubit ZH is straightforward, the qudit case brings several difficulties, since the structure of the matrix of the H-box is a lot more complicated. Our proof involves an encoding of propositional formulae over \(\mathbb{Z}_{d}\) into polynomials and a construction of Pascal's triangle into a matrix.
In Section 2 we present our results regarding classical reversible dit logic and the \(|0\rangle\)-controlled \(X\) gate. Then in Section 3 we introduce the phase-free qudit ZH-calculus and show its connection to the previously introduced gates. In Section 4 we extend the calculus to allow labels over arbitrary rings and prove its universality over this ring. Then in Section 5 we tackle the harder problem of proving universality of the phase-free ZH-calculus.
## 2 The qudit Toffoli+Hadamard gate set
In this paper, we let \(d\) denote the dimension of our qudits, so that a single wire in a (circuit) diagram corresponds to \(\mathbb{C}^{d}\). Note that many of our results only work if \(d\) is an odd prime. We let \(\omega:=e^{2\pi i/d}\) denote a \(d\)th root of unity. Then the qudit Paulis correspond to \(Z|a\rangle=\omega^{a}|a\rangle\) and \(X|a\rangle=|a+_{d}1\rangle\), where we use subscripts on operators like \(+_{d}\) to denote operations modulo \(d\). The controlled \(X\) gate (CX) then becomes \(|x,y\rangle\mapsto|x,x+_{d}y\rangle\). The qudit Hadamard acts as \(H|x\rangle=\frac{1}{\sqrt{d}}\sum_{y}\omega^{x\cdot y}|y\rangle\). For qubits, we can write the action of the Toffoli as \(|x,y,z\rangle\mapsto|x,y,(x\cdot_{2}y)+_{2}z\rangle\). This definition extends straightforwardly to the qudit setting, where we just take the multiplication and addition to be modulo \(d\) instead of modulo \(2\). When allowing _zeroed_ ancillae, i.e. qubits prepared in the \(|0\rangle\) state, the Toffoli together with the \(X\) gate (which acts as the NOT gate) suffice to construct an arbitrary classical reversible logic circuit. It turns out however that for certain qudit dimensions, just a two-qudit gate suffices to achieve the analogous result.
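To make these conventions concrete, here is a small numerical sketch (ours, not from the paper) of the qudit Paulis, the CX gate and the qudit Hadamard as explicit matrices, together with a few sanity checks; the choice \(d=5\) is arbitrary.

```python
# Minimal sketch (ours) of the qudit gate conventions above, using numpy.
import numpy as np

d = 5                                        # any odd prime, per the text
w = np.exp(2j * np.pi / d)                   # omega = e^{2 pi i / d}

Z = np.diag([w**a for a in range(d)])                                                # Z|a> = w^a |a>
X = np.array([[1 if i == (j + 1) % d else 0 for j in range(d)] for i in range(d)])   # X|a> = |a+1 mod d>
H = np.array([[w**(x * y) for x in range(d)] for y in range(d)]) / np.sqrt(d)        # H|x> = (1/sqrt d) sum_y w^{xy} |y>

# CX|x,y> = |x, x+y mod d>, with basis ordering |x,y> -> x*d + y
CX = np.zeros((d * d, d * d))
for x in range(d):
    for y in range(d):
        CX[x * d + ((x + y) % d), x * d + y] = 1

assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))     # X^d = id
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))     # Z^d = id
assert np.allclose(Z @ X, w * (X @ Z))                          # qudit Weyl relation ZX = w XZ
assert np.allclose(np.linalg.matrix_power(H, 4), np.eye(d))     # H^4 = id (but H^2 != id for d > 2)
```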
We define the \(|0\rangle\)_-controlled \(X\) gate_ as acting on the computational basis as follows:
\[|c,t\rangle\ \mapsto\ \begin{cases}|c,t+_{d}1\rangle,&\text{if $c=0$}\\ |c,t\rangle,&\text{else}\end{cases} \tag{1}\]
i.e. by applying an \(X\) gate to the target iff the control is \(|0\rangle\).
Note that the \(|0\rangle\)-controlled X gate is not Clifford for any prime qudit dimensions except for the qubit case (for which it is a CNOT gate conjugated by NOTs on the control).
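The gate of Eq. (1) is likewise easy to write down as an explicit matrix. The sketch below (ours) also checks that feeding a zeroed ancilla into the control wire yields a plain \(X\) gate on the target, a fact used later in the proof of Theorem 2.3; the basis ordering \(|c,t\rangle\mapsto c\cdot d+t\) is an assumption of the snippet, not something fixed by the paper.

```python
# Sketch (ours): the |0>-controlled X gate of Eq. (1) as an explicit matrix.
import numpy as np

d = 5
C0X = np.zeros((d * d, d * d))
for c in range(d):
    for t in range(d):
        t_out = (t + 1) % d if c == 0 else t        # X on the target iff the control is |0>
        C0X[c * d + t_out, c * d + t] = 1

# Plugging a zeroed ancilla into the control wire yields a plain X gate on the target.
ket0 = np.zeros(d); ket0[0] = 1
bra0_on_control = np.kron(ket0, np.eye(d))          # the map <0| (x) id  as a d x d^2 matrix
X_from_ancilla = bra0_on_control @ C0X @ bra0_on_control.T

X = np.array([[1 if i == (j + 1) % d else 0 for j in range(d)] for i in range(d)])
assert np.allclose(X_from_ancilla, X)
```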
**Theorem 2.1**.: For any odd qudit dimension \(d\), any \(d\)-ary classical reversible function \(f:\mathbb{Z}_{d}^{n}\to\mathbb{Z}_{d}^{n}\) on \(n\) dits can be constructed by a circuit of \(O(d^{n}n)\) many \(|0\rangle\)-controlled \(X\) gates and \(O(n)\) ancillae prepared in the \(|0\rangle\) state.
**Proposition 2.2**.: For any qudit dimension \(d\), there exist \(d\)-ary classical reversible functions \(f:\mathbb{Z}_{d}^{n}\to\mathbb{Z}_{d}^{n}\) that require at least \(O(nd^{n}/\log n)\) single-qudit and two-qudit gates to construct, even when allowed \(\Omega(n)\) ancillae.
We present the proofs of Theorem 2.1 and Proposition 2.2 in the appendix.
Interestingly, we only need a two-qudit gate--the \(|0\rangle\)-controlled \(X\) gate--to construct any \(d\)-ary classical reversible gate (i.e. bijective maps of the form \(f:\mathbb{Z}_{d}^{n}\to\mathbb{Z}_{d}^{n}\)) with the help of \(|0\rangle\) ancillae. Hence, the \(|0\rangle\)-controlled \(X\) gate is universal for all classical reversible logic--generalising to all odd \(d\) what the three-qubit Toffoli gate does for \(d=2\). Hence, it makes sense to consider the generalization of the qubit Toffoli+H gate set to be the qudit gate set containing \(|0\rangle\)-controlled \(X\) and Hadamard, which by Theorem 2.1 generates all possible qudit generalized Toffoli gates (since they are all classically reversible).
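As a concrete illustration of this universality claim, the following sketch builds the two-qudit CX gate out of \(|0\rangle\)-controlled \(X\) and single-qudit \(X\) gates and verifies the result numerically. The construction used here is our own simple one, chosen for readability; it is not the (more efficient) construction behind Theorem 2.1.

```python
# Illustrative only (our own toy construction, NOT the construction of Theorem 2.1):
# building the two-qudit CX gate from |0>-controlled X and single-qudit X gates.
import numpy as np
from functools import reduce

d = 5
I = np.eye(d)
X = np.array([[1 if i == (j + 1) % d else 0 for j in range(d)] for i in range(d)])

C0X = np.zeros((d * d, d * d))
for c in range(d):
    for t in range(d):
        C0X[c * d + ((t + 1) % d if c == 0 else t), c * d + t] = 1

def Xc(k):                                   # X^k on the control wire
    return np.kron(np.linalg.matrix_power(X, k % d), I)

# For each v = 1..d-1: shift the control by -v, add v to the target iff the shifted
# control is 0, then shift the control back.  The product over all v acts as CX.
gates = []
for v in range(1, d):
    gates += [Xc(d - v), np.linalg.matrix_power(C0X, v), Xc(v)]
circuit = reduce(lambda acc, g: g @ acc, gates, np.eye(d * d))   # apply gates left-to-right

CX = np.zeros((d * d, d * d))
for x in range(d):
    for y in range(d):
        CX[x * d + ((x + y) % d), x * d + y] = 1
assert np.allclose(circuit, CX)
```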
For qubits, adding the Hadamard gate to all the classical reversible gates (which is generated by the Toffoli gate and zeroed ancillae) suffices for approximately universal quantum computation [33]. By combining Theorems 2.1 and 2.3 we find that this is in fact true in any prime qudit dimension.
**Theorem 2.3**.: The \(|0\rangle\)-controlled X gate and the H gate form an approximately universal gate set for qudits of any odd prime dimension. In other words, permitting the help of ancillae, this gate set can deterministically approximate any qudit computation up to arbitrarily small error.
Proof.: The proof below suffices for the case where the qudit dimension \(d\) is a prime \(d>3\). The proof for the \(d=3\) case consists of constructing all the Cliffords as follows, and the metaplectic gate (a single-qutrit non-Clifford gate) which we construct in Appendix B similarly to our construction in Ref. [20, Section 3].
Define the single-qudit gates \(Q[i]\) by \(Q[i]\,|j\rangle=\omega^{\delta_{ij}}|j\rangle\) where \(\delta_{ij}=1\) iff \(i=j\). In [44] it is shown that CX, H, and the \(Q[i]\) gates are universal for quantum computing for prime \(d>3\); for \(d=3\) this generates the Clifford group. It hence suffices to show that our gate set generates these gates. Clearly, inputting a zeroed ancilla to the control of the \(|0\rangle\)-controlled X gate yields the X gate. From here, the CX gate is easy to build from X and \(|0\rangle\)-controlled X gates. We can also exactly synthesize the \(Q[0]\) gate deterministically (up to an irrelevant global phase) with just \(|0\rangle\)-controlled X gates, H gates and a zeroed ancilla:
\[\text{[diagrammatic derivation of the $Q[0]$ gate from $|0\rangle$-controlled $X$ gates, $H$ gates and a zeroed ancilla; figure omitted]}\]
\(|0\rangle^{\otimes n}\)-controlled \(X_{01}\) gates2 each with gate count polynomial in \(n\), and that there exist ternary classical reversible gates requiring at least \(O(n3^{n}/\log n)\) gates to construct. In this work, we generalise these results to any odd qudit dimension \(d\) and we additionally find a construction of the \(|0\rangle^{\otimes n}\)-controlled \(X_{01}\) gate using \(O(n)\) gates. Combining these results gives us the \(O(nd^{n})\) gate count construction of \(d\)-ary classical reversible gates which is hence near asymptotically optimal in gate count up to a \(\log n\) factor.
Footnote 2: \(X_{01}\) maps \(|0\rangle\) and \(|1\rangle\) to each other and is identity on all other basis states.
**Remark 2.5**.: A recent preprint [48] appearing after submission of this paper independently discovered a version of Lemmas A.5 and A.8 for any odd qudit dimension. They additionally provide a separate \(O(n)\) gate count \(|0\rangle^{\otimes n}\)-controlled \(X_{01}\) gate construction applicable to any even qudit dimension. By generalisation of Ref. [47], they independently derived our Proposition 2.2 and a version of our Theorem 2.1 which uses more types of gates than just the \(|0\rangle\)-controlled \(X\), but which does work for all qudit dimensions.
## 3 The qudit ZH-Calculus
Now let us introduce the qudit ZH-calculus, which allows for graphical reasoning about qudit Toffoli-like gates. Diagrams will flow from inputs at the bottom, to outputs on the top (but because our generators will be flexsymmetric [8, 9] the orientation of diagrams in this paper will not matter much).
As is the case for the qubit ZH-calculus, the qudit ZH-calculus will consist of string diagrams built out of two types of generators: Z-spiders and H-boxes. We define these as follows:
\[\text{(Z-spider, $m$ inputs, $n$ outputs)}\ :=\ \sum_{i=0}^{d-1}|i\rangle^{\otimes n}\langle i|^{\otimes m},\qquad\text{(H-box, $m$ inputs, $n$ outputs)}\ :=\ \frac{1}{\sqrt{d}}\sum_{i_{1},\ldots,i_{m},\,j_{1},\ldots,j_{n}\in\mathbb{Z}_{d}}\omega^{i_{1}\cdots i_{m}\,j_{1}\cdots j_{n}}|j_{1}...j_{n}\rangle\langle i_{1}...i_{m}|.\]
This matches the qubit-ZH definitions of [3], except that now the sums go from \(0\) to \(d-1\) instead of from \(0\) to \(1\), and we use the \(d\)th root of unity \(\omega=e^{2\pi i/d}\) instead of \(-1\). Additionally, we have included a normalization factor of \(1/\sqrt{d}\) in the definition of the H-box that will prevent some tedious constants from appearing everywhere [6, Ap. E]. As a consequence of this choice of normalisation, the \(1\)-input, \(1\)-output phase-free H-box corresponds exactly to the qudit Hadamard \(|x\rangle\mapsto\frac{1}{\sqrt{d}}\sum_{y}\omega^{xy}|y\rangle\). Note that while the matrix of the qubit H-box consists of just \(1\)'s, with a single entry equal to \(-1\), for qudits the matrix has a more complicated structure, with different powers of \(\omega\) appearing throughout the matrix. In the next section we will also introduce labelled H-boxes, so we will sometimes refer to diagrams containing just the above generators as _phase-free_ ZH-diagrams, following [4].
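These definitions are easy to transcribe numerically. The sketch below (ours) builds the H-box tensor directly from its defining formula and confirms that the 1-input, 1-output H-box is exactly the qudit Hadamard; \(d=3\) is an arbitrary choice.

```python
# Sketch (ours, a direct transcription of the H-box definition above).
import numpy as np
from itertools import product

d = 3
w = np.exp(2j * np.pi / d)

def idx(digits):
    """Index of the basis state |digits> in the computational basis."""
    out = 0
    for k in digits:
        out = out * d + k
    return out

def h_box(m, n):
    """Matrix of the phase-free H-box with m inputs and n outputs (rows = outputs)."""
    M = np.zeros((d ** n, d ** m), dtype=complex)
    for ins in product(range(d), repeat=m):
        for outs in product(range(d), repeat=n):
            M[idx(outs), idx(ins)] = w ** np.prod(ins + outs)   # omega to the product of all indices
    return M / np.sqrt(d)

# The 1-input, 1-output H-box is exactly the qudit Hadamard (discrete Fourier transform).
H = np.array([[w ** (x * y) for x in range(d)] for y in range(d)]) / np.sqrt(d)
assert np.allclose(h_box(1, 1), H)
```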
Apart from these generators we have the standard structural generators--identity, swap, cup and cap--needed to make a compact-closed PROP. Note that the qudit Z-spider and H-box satisfy the same symmetries as their qubit counterparts, meaning we get a flexsymmetric PROP [8, 9]:
\[\text{[flexsymmetry equations: Z-spiders and H-boxes are invariant under arbitrary permutations and bendings of their legs]}\tag{2}\]
There are a couple of useful derived generators we will need:
\[\text{[diagrams of the derived generators: the X-spider, the Pauli $X$ gate, and the scalars $\sqrt{d}$ and $1/\sqrt{d}$]}\tag{3}\]
The first of these is the well-known \(X\)-spider. The second realizes the Pauli \(X\) gate, i.e. the map \(X|i\rangle=|i+_{d}1\rangle\). The last two generators represent the scalars \(\sqrt{d}\) and \(1/\sqrt{d}\) respectively.
The (derived) generators of the qubit ZH-calculus can be motivated by a correspondence to Boolean logic [4, Eq. 5]. Similarly, our generators turn out to correspond with arithmetic operations over \(\mathbb{Z}_{d}\):
\[\text{[correspondence of the generators with arithmetic over $\mathbb{Z}_{d}$: the Z-spider copies $x\mapsto(x,x)$, the X-spider computes $(x,y)\mapsto-_{d}(x+_{d}y)$, and the H-box conjugated by Hadamards computes $(x,y)\mapsto x\cdot_{d}y$]}\tag{4}\]
Note that here for multiplication we have a sequence of three Hadamards instead of just the one in the qubit version. This is because for qudits \(H^{4}=\mathrm{id}\), but not \(H^{2}=\mathrm{id}\). Instead we have \(H^{2}|i\rangle=|-_{d}i\rangle\). This map is sometimes called the _antipode_ or _dualiser_[10], and we will use it throughout the diagrams in this paper. It turns out to also be equal to a single-input, single-output X-spider.
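Both facts are immediate to check numerically; the following short sketch (ours) verifies that \(H^{2}\) is the antipode and \(H^{4}\) the identity for \(d=5\).

```python
# Quick check (ours): H^2 is the antipode |i> -> |-i mod d>, and H^4 = id.
import numpy as np

d = 5
w = np.exp(2j * np.pi / d)
H = np.array([[w ** (x * y) for x in range(d)] for y in range(d)]) / np.sqrt(d)

antipode = np.zeros((d, d))
for i in range(d):
    antipode[(-i) % d, i] = 1                # |i> -> |-i mod d>

assert np.allclose(H @ H, antipode)
assert np.allclose(np.linalg.matrix_power(H, 4), np.eye(d))
```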
This interpretation gives a straightforward way to represent the Toffoli and the \(|0\rangle\)-controlled \(X\) gate (writing our diagrams here from left-to-right to match circuit notation):
\[\text{[ZH-diagrams for the qudit Toffoli gate and the $|0\rangle$-controlled $X$ gate]}\tag{5}\]
The correctness of the Toffoli construction follows easily from the interpretation given in Eq. (4). For the other, note that in the first step we use the trick that a gate controlled on some value, followed by its adjoint, is the same thing as controlling the adjoint on all the other values. Then the correctness of the ZH-diagram follows from Fermat's little theorem: for all \(x\in\mathbb{Z}_{d}\) for \(d\) prime, \(x^{d-1}=0\) if \(x=0\) and \(x^{d-1}=1\) otherwise. The full diagram hence adds 1 if the control is not 0.
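The arithmetic behind this Fermat's-little-theorem step can be checked directly: for prime \(d\), \(x^{d-1}\bmod d\) is the indicator of \(x\neq 0\), so incrementing the target and then subtracting this indicator reproduces Eq. (1). A minimal check (ours):

```python
# Small arithmetic check (ours) of the Fermat's-little-theorem step behind the construction.
d = 7                                        # any odd prime
for x in range(d):
    indicator = pow(x, d - 1, d)             # x^(d-1) mod d: 0 if x == 0, else 1
    assert indicator == (0 if x == 0 else 1)
    for t in range(d):
        # t + 1 - indicator reproduces Eq. (1): increment the target iff the control x is 0
        assert (t + 1 - indicator) % d == ((t + 1) % d if x == 0 else t)
```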
Many of the rules of the qubit ZH-calculus generalise to qudits; see Figure 1. For their soundness we refer to Appendix C.
The \(Z\)-spider fusion rule generalises as expected, but the \(H\)-box fusion rule generalizes into something that allows _contraction_ of odd-length sequences of \(H\)-boxes interspersed by Hadamards. For the bialgebra rules, the \(Z/X\) version generalises up to global scalars, while the \(Z/H\) bialgebra needs some additional Hadamards which would cancel in the qubit case (furthermore for (ba1), if \((n-1)(m-1)<0\), introduce \(\blacktriangle^{-(n-1)(m-1)}\) to the LHS instead). Lastly, we have the generalization of the _identity_ and _multiply_ rules. We rename the latter _cyclic_ (cy) because what it really captures is the cyclic structure of the group \(\mathbb{Z}_{d}\).
Note how the above ruleset neither contains a rule stating that \(H^{4}=\mathrm{id}\), nor an inverted color change rule. That is because we can derive them from the rules presented above:
\[\text{[diagrammatic derivations of the rule (h4): $H^{4}=\mathrm{id}$, and of the colour change rule (h)]}\]
Note that in both of these proofs, the application of the bialgebra rule (ba1) does not introduce scalars, as the number of inputs or outputs of the subdiagram we apply the rule to is always 1. We use the name (h) for the color change rule to keep in line with the notation of [34, Fig. 1].
Since these derivations hold for arbitrary dimension, they in particular hold for \(d=2\). This means that due to the qubit H-box fusion rule, (h4) actually implies the self-inverseness of Hadamard gates, making the (hh) rules of Backens et al.'s ruleset redundant [4, Tab. 1].
In Appendix D, we also present a generalisation of the ortho rule from the phase-free qubit ZH-calculus [4]. Hence, we have a prime-dimensional qudit generalisation of all the phase-free qubit ZH-calculus rewrite rules [4]. While those rules are complete for the qubit phase-free calculus, it is not clear whether this continues to hold for qudits. We leave this question for future work, for instance building upon the recent completeness for all qudit dimensions in the ZXW-calculus [30].
### Translating ZH-diagrams to ZX-diagrams
The qudit ZX-calculus is universal, and hence can represent any linear map between qudits [42]. So in particular, there must be some way to interpret ZH-diagrams into ZX-diagrams. As the only generator of ZH-diagrams that is different from the ZX-calculus is the H-box, this is the only one we will have to translate. In fact, we only need to translate the three-legged and one-legged H-box, as the two-legged H-box is just the Hadamard gate. We can then obtain diagrams for higher-arity H-boxes by unfusing them into three-legged H-boxes. However, we will also introduce a direct construction for \(n\)-legged H-boxes, which arises from the asymptotically efficient circuit constructions for any multiple-controlled prime-dimensional qudit Toffoli gate presented in the Appendix.
Figure 1: Basic rules of the phase-free qudit ZH-calculus. Some additional (derived) rules are presented in Appendices D and E. The rules hold for all \(n\) and \(m\). Here \(d\) is the dimension of the qudit.

First, note that there is a close correspondence between an H-box and the qudit CCZ gate, which acts on computational basis states as

\[CCZ\,|x,y,z\rangle\ =\ \omega^{x\cdot y\cdot z}\,|x,y,z\rangle. \tag{6}\]
Hence, in particular the three-legged H-box is equal to one copy of the qudit CCZ gate acting on \(|{+++}\rangle\):
\[\text{[ZH-diagram: the three-legged H-box as a CCZ gate applied to $|{+}{+}{+}\rangle$]}\tag{7}\]
Since a CCZ gate is just the Toffoli from Eq. (5) with the target qudit conjugated by Hadamards, to construct an H-box in the ZX-calculus it then suffices to show how to construct the qudit Toffoli in the ZX-calculus. But by Theorem 2.1 we can construct the Toffoli from the \(|{0}\rangle\)-controlled \(X\) gate, so that it remains to show how this gate is constructed as a ZX-diagram.
We will write phases on Z-spiders in the ZX-calculus, as vectors \(\vec{\alpha}\) of length \(d{-}1\):
\[\text{[notation: a Z-spider labelled by a phase vector $\vec{\alpha}$]}\tag{8}\]
**Lemma 3.2** ([46]).: The prime-dimensional qudit \(|{0}\rangle\)-controlled X gate can be decomposed into the Clifford+Phases gate set (decomposing \(H\) as phase gates [41, Remark 2.3]), and written as a ZX-diagram:
\[\text{[ZX-diagram decomposition of the $|0\rangle$-controlled $X$ gate into Clifford+Phases gates]}\tag{9}\]
where \(\vec{p}=\left(\omega^{\frac{-(d-1)}{2}},\omega^{\frac{-(d-1)}{2}},...,\omega^{\frac{-(d-1)}{2}}\right)\) and \(\vec{r}=\left(\omega^{\frac{1}{d}},\omega^{\frac{2}{d}},...,\omega^{\frac{d-1}{d}}\right)\) represents the \(d\)th root of \(Z\) gate from Ref. [46].
**Theorem 3.3**.: Any prime-dimensional qudit ZH-diagram composed of \(m\) Z spiders and \(n\) H-boxes each with no more than \(g\) legs, can be written as a composition of those \(m\) Z spiders and \(O(ng)\) elements, each of which is a \(|0\rangle\)-controlled \(X\) gate, a Hadamard gate, or a \(|0\rangle\) state.
Proof.: Up to cups and caps, an H-box with two legs is the Hadamard gate, while an H-box with one leg is \(Z|{+}\rangle\). [Diagrammatic remainder of the proof omitted.]
## 4 Universality of ZH over arbitrary rings
We will now work towards proving universality of the qudit ZH-calculus for prime dimensions over the ring \(\mathbb{Z}[\omega]\). To do so, it will be helpful to first consider an extended ZH-calculus, where we allow H-boxes labelled by elements of a ring.
So let \(R\supset\mathbb{Z}[\omega,\frac{1}{\sqrt{d}}]\) be a commutative ring. We now introduce the following additional generators, _labelled_ H-boxes:
\[\text{[diagrams of the $r$-labelled H-box generators, obtained from the phase-free H-box by replacing $\omega$ with an arbitrary $r\in R$]}\]
Here \(r\) is an arbitrary element of \(R\). Note that the unlabeled \(H\)-box corresponds to the \(\omega\)-labelled one. In writing, we refer to the \(\blacktriangle\)-scaled (\(1\)-ary) \(r\)-labeled \(H\)-state as \(H(r)=(1,r,r^{2},...,r^{d-1})^{T}\). Keeping in line with the notation of Backens et al., we call this calculus \(\mathrm{ZH}_{R}\)[4, Sec. 7].
The basic idea behind the universality proof is to create a big _Schur product_ of simpler matrices. Recall that the Schur product of two matrices \(A\) and \(B\) of equal dimension is the entrywise product \((A\star B)_{ij}=A_{ij}B_{ij}\). The Schur product is easily represented in qudit ZH (in the same way as it is for qubits [4, p. 27]):
\[\text{[ZH-diagram realising the Schur product $A\star B$ of two diagrams]}\]
We can express an arbitrary \(R\)-valued matrix \(M=(m_{ij})\) as a Schur-product of \(r,1\)-_pseudobinary_ matrices. These are matrices where every entry of the matrix is either \(r\) or \(1\). Namely, let \(\mathcal{R}\subseteq R\) be the, necessarily finite, set of \(r\in R\) that appear as entries in \(M\). Then for \(r\in\mathcal{R}\), let \(M_{r}=(m_{ij}^{(r)})\) be the matrix such that \(m_{ij}^{(r)}=r\) if \(m_{ij}=r\), and \(m_{ij}^{(r)}=1\) otherwise. Then \(M=\bigstar_{r\in\mathcal{R}}M_{r}\) is the Schur-product of these pseudobinary matrices. To prove universality over a ring \(R\) it hence suffices to show that the qudit ZH-calculus can represent arbitrary \(r,1\)-pseudobinary matrices for \(r\in R\).
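The decomposition into pseudobinary factors is straightforward to implement; the sketch below (ours) factors a small toy integer matrix and checks that the entrywise product of the factors recovers it.

```python
# Sketch (ours) of the pseudobinary decomposition used in the universality proof.
import numpy as np

M = np.array([[2, 1],
              [5, 2]])                       # toy integer-valued matrix

factors = []
for r in set(M.flatten().tolist()):
    Mr = np.where(M == r, r, 1)              # keep entries equal to r, set all others to 1
    factors.append(Mr)

schur = np.ones_like(M)
for Mr in factors:
    schur = schur * Mr                       # entrywise (Schur) product of the factors
assert np.array_equal(schur, M)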
In this section we thus introduce the foundational building block of our universality proof: an algorithm for constructing \(\mathrm{ZH}_{R}\)-diagrams of \(r,1\)-pseudobinary matrices. For this, we perform two intermediary steps: (1) Describe the location of the ones in a \(r,1\)-pseudobinary matrix using a logical formula \(\varphi\), and (2) convert the formula into a polynomial whose roots are exactly the fulfilling assignments of \(\varphi\).
Since we know how to express addition and multiplication as \(\mathrm{ZH}_{R}\)-diagrams, turning a polynomial into a diagram is then rather straight-forward. The following diagrammatic gadgets, together with those of (4) will prove useful:
\[\text{[diagrammatic gadgets realising the constant states $|k\rangle$ and the power maps $|x\rangle\mapsto|x^{k}\rangle$]}\tag{12}\]
Consider a linear map \(L:(\mathbb{C}^{d})^{\otimes n}\to(\mathbb{C}^{d})^{\otimes m}\) whose matrix is \(r,1\)-pseudobinary: for every \(\vec{x}\in\{0,...,d-1\}^{n}\) we have \(L(|\vec{x}\rangle)=\sum_{\vec{y}\in\{0,...,d-1\}^{m}}\lambda_{\vec{x},\vec{y}}|\vec{y}\rangle\) where all \(\lambda_{\vec{x},\vec{y}}\in\{r,1\}\). We can describe the location of the \(1\)s in that matrix using a logical formula \(\varphi_{L}\) in \(n+m\) free variables such that \(\varphi_{L}(x_{1},...,x_{n},y_{1},...,y_{m})\) is true iff \(\lambda_{(x_{1},...,x_{n}),(y_{1},...,y_{m})}=1\):

\[\varphi_{L}(x_{1},...,x_{n},y_{1},...,y_{m})=\bigvee_{\begin{subarray}{c}i_{1},...,i_{n},j_{1},...,j_{m}\\ \in\{0,...,d-1\}\\ \lambda_{(i_{1},...,i_{n}),(j_{1},...,j_{m})}=1\end{subarray}}\ \bigwedge_{k=1}^{n}(x_{k}=i_{k})\wedge\bigwedge_{\ell=1}^{m}(y_{\ell}=j_{\ell}). \tag{13}\]
Logical formulae unfortunately do not correspond to something we can easily directly express in \(\mathrm{ZH}_{R}\). However, we can translate these formulae into polynomials, which we _can_ represent in \(\mathrm{ZH}_{R}\).
**Proposition 4.1**.: If \(d\) is prime, then for every propositional formula \(\varphi\) over \((\mathbb{Z}_{d},-,+,\cdot,=)\) in \(n\) free variables there exists a polynomial \(p_{\varphi}\in(\mathbb{Z}_{d})[X_{1},...,X_{n}]\) such that \(p_{\varphi}(x_{1},...,x_{n})=0\iff\varphi(x_{1},...,x_{n})\).
Proof.: Let \(\varphi\) be a formula over \((\mathbb{Z}_{d},-,+,\cdot,=)\) in \(n\) free variables. We describe our polynomial \(p\) inductively. Note that every arithmetic expression in our formula is already a polynomial, since we only allow addition, negation and multiplication in our signature. Thus, we only have to deal with equality, negation and disjunction3.
Footnote 3: We do not need to deal with conjunction, since \(\neg\) and \(\vee\) are functionally complete.
1. When \(\varphi=(p_{1}(x_{1},...,x_{n})=p_{2}(x_{1},...,x_{n}))\) for \(p_{1},p_{2}\in(\mathbb{Z}_{d})[X_{1},...,X_{n}]\), set \(p_{\varphi}=p_{1}-p_{2}\).
2. When \(\varphi=\neg\varphi^{\prime}\), set \(p_{\varphi}=1-(p_{\varphi^{\prime}})^{d-1}\).
3. When \(\varphi=\varphi_{1}\vee\varphi_{2}\), set \(p_{\varphi}=p_{\varphi_{1}}\cdot p_{\varphi_{2}}\).
The only non-obvious part of the construction is the construction for negation. This step follows from the fact that for \(d\) prime, exponentiating with \(d-1\) in \(\mathbb{Z}_{d}\) maps 0 to 0 and everything else to 1. Lastly, note that the construction in 3) makes use of the absence of zero-divisors in fields.
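The translation of Proposition 4.1 is easy to run mechanically. Below is a small executable sketch (ours) that represents formulas and polynomials as Python functions over \(\mathbb{Z}_{d}\) and checks, on a toy example formula of our own choosing, that the polynomial vanishes exactly on the satisfying assignments.

```python
# Executable sketch (ours) of the formula-to-polynomial translation of Proposition 4.1.
# Formulas are functions Z_d^n -> bool, polynomials are functions Z_d^n -> Z_d; d must be prime.
from itertools import product

d = 5

def eq(p1, p2):            # atom  p1 = p2        ->  polynomial p1 - p2
    return lambda *x: (p1(*x) - p2(*x)) % d

def neg(p):                # not phi              ->  1 - p^(d-1)   (Fermat's little theorem)
    return lambda *x: (1 - pow(p(*x), d - 1, d)) % d

def lor(p, q):             # phi1 or phi2         ->  p * q         (no zero divisors, d prime)
    return lambda *x: (p(*x) * q(*x)) % d

# Toy example: phi(x, y) := (x = y) or not(x*y = 1)
phi   = lambda x, y: (x == y) or not ((x * y) % d == 1)
p_phi = lor(eq(lambda x, y: x, lambda x, y: y),
            neg(eq(lambda x, y: (x * y) % d, lambda x, y: 1)))

for x, y in product(range(d), repeat=2):
    assert (p_phi(x, y) == 0) == phi(x, y)
```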
**Lemma 4.2**.: Assume \(d\) is prime. Given a polynomial \(p\in(\mathbb{Z}_{d})[X_{1},...,X_{n}]\) in \(n\) variables, we can construct an \(n\)-input \(0\)-output \(ZH_{R}\)-diagram that evaluates to 1 on states \(|b_{1}...b_{n}\rangle\) such that \(p(b_{1},...,b_{n})=0\), and to \(r\) on all other states.
Proof.: First suppose that we had a diagram implementing the map \(|b_{1}...b_{n}\rangle\mapsto|p(b_{1},...,b_{n})\rangle\). For \(d\) prime, the map \(x\mapsto x^{d-1}\) in \(\mathbb{Z}_{d}\) maps 0 to 0, and everything else to 1. By (12), we know how to realize this operation as a \(\mathrm{ZH}\)-diagram. Apply this operation to the output of the diagram implementing the polynomial, and postselect with the effect \(H(r)^{T}=(1,r,r^{2},...,r^{d-1})\). This gives the desired map. So let's see how to implement the map \(|b_{1}...b_{n}\rangle\mapsto|p(b_{1},...,b_{n})\rangle\).
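The postselection step just described can be checked in isolation: composing the map \(k\mapsto k^{d-1}\) with the effect \(H(r)^{T}=(1,r,\ldots,r^{d-1})\) returns \(1\) when \(k=0\) and \(r\) otherwise. A minimal check (ours):

```python
# Small check (ours) of the postselection step: k plays the role of p(b_1, ..., b_n).
d = 5
r = 7                                        # any ring element; here just an integer
for k in range(d):
    fermat = pow(k, d - 1, d)                # 0 if k == 0, else 1
    amplitude = r ** fermat                  # the component of H(r) selected by |k^(d-1)>
    assert amplitude == (1 if k == 0 else r)
```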
We do this by induction on the number of variables \(n\). If \(n=0\), then \(p\in\mathbb{Z}_{d}\) is a constant, which we know how to realize using (12). Now suppose we know how to construct diagrams for polynomials with \(n-1\) variables. By definition of polynomial rings we can abuse notation slightly to write \(p\in(\mathbb{Z}_{d}[X_{1},...,X_{n-1}])[X_{n}]\), e.g. \(p=\sum_{i=0}^{k}p_{i}X_{n}^{i}\) for \(p_{0},...,p_{k}\in\mathbb{Z}_{d}[X_{1},...,X_{n-1}]\). By induction, we have \(\mathrm{ZH}\)-diagrams realizing \(p_{0},...,p_{k}\), which we denote by boxes labeled "\(p_{0}\)" through "\(p_{k}\)". Then a diagram for
the desired map can be constructed as follows (using the correspondence to algebraic operations of (4)):
In light of (13), this means that using a polynomial \(p\in(\mathbb{Z}_{d})[X_{1},...,X_{m},Y_{1},...,Y_{n}]\) such that \(p(x_{1},...,x_{m},y_{1},...,y_{n})=0\iff\varphi(x_{1},...,x_{m},y_{1},...y_{n})\), we can use Lemma 4.2 and map-state-duality to construct arbitrary \(r,1\)-pseudobinary linear maps as \(\mathrm{ZH}_{R}\)-diagrams.
**Corollary 4.3**.: For prime \(d\), every \(r,1\)-pseudobinary linear map \(L:(\mathbb{C}^{d})^{\otimes n}\to(\mathbb{C}^{d})^{\otimes m}\) has a qudit \(\mathrm{ZH}_{R}\)-diagram realizing \(L\).
Proof.: Use (13) to construct a formula \(\phi(\vec{x},\vec{y})\) that is true when \(\langle\vec{y}|L|\vec{x}\rangle=1\) and false when the entry is \(r\). Then use Proposition 4.1 to transform \(\phi\) into a polynomial \(p\) that is \(0\) when \(\phi\) is true, and finally use Lemma 4.2 to construct a diagram with \(n+m\) inputs that evaluates to \(1\) when you input \(|\vec{x}\rangle\otimes|\vec{y}\rangle\) with \(p(\vec{x},\vec{y})=0\) and to \(r\) on other inputs. Bending the last \(m\) wires up to be outputs gives a diagram that is exactly equal to \(L\).
We give a worked out example of this entire procedure in Appendix F.
**Theorem 4.4**.: Let \(R\supset\mathbb{Z}[\omega,\frac{1}{\sqrt{d}}]\) be a commutative ring. Then \(\mathrm{ZH}_{R}\) is universal for matrices over \(R\).
Proof.: By Corollary 4.3 we can construct \(\mathrm{ZH}_{R}\)-diagrams for arbitrary \(r,1\)-pseudobinary matrices for \(r\in R\). By taking Schur products of these matrices, any matrix over \(R\) can be realised.
## 5 Universality of the phase-free \(\mathrm{ZH}\)-calculus
We now set our sights on establishing the universality of the phase-free \(\mathrm{ZH}\)-calculus, where we are only allowed \(\omega\)-labelled (i.e. phase-free) \(\mathrm{H}\)-boxes, for matrices over the ring \(\mathbb{Z}[\omega]\). We will use the structure of the previous proof, reducing the problem to the ability to construct diagrams for \(r,1\)-pseudobinary matrices, where now \(r\in R=\mathbb{Z}[\omega]\). The only obstacle to using this approach is that in the proof of Lemma 4.2 we require a postselection to the state \(H(r)\), which we don't a priori have access to. To prove universality of the phase-free \(\mathrm{ZH}\)-calculus we hence need to show that we can construct diagrams for states of the form \(H(r)=(1,r,r^{2},\ldots,r^{d-1})^{T}\) where \(r=a_{0}+a_{1}\omega+\ldots+a_{d-1}\omega^{d-1}\in\mathbb{Z}[\omega]\).
Backens _et al._[4] established the analogous results in the qubit case: that \(\mathrm{ZH}\) is universal for integer-valued matrices even without introducing labeled \(H\)-boxes as new generators. To show this, they construct an equivalent to all the integer labelled \(H\)-boxes: there is a simple expression with the same linear map as the \(H(0)\)-box, and there is a successor gadget that increments the label of an arbitrary \(H\)-box by \(1\). Construction of negative integers is done by using a negation gadget. We will follow a similar path.
First, we already have a representation of \(H(0)=|0\rangle\) (see Eq. (12) and take \(k=0\)). Our immediate goal is then to construct a successor gadget to increment \(H\)-box labels. This will give us \(\mathrm{H}\)-boxes with natural numbers as labels. The other possible labelled \(\mathrm{H}\)-boxes are then straightforward to construct.
### The qudit successor gadget
A successor gadget \(S=(s_{ij})_{0\leq i,j<d}\) that increments the label of an \(H\)-box by \(1\) has to satisfy the equation \(SH(a)=H(a+1)\) for any \(a\). Looking at the definition of the qudit \(H\)-box, this means the coefficients \(s_{ij}\) of \(S\) have to satisfy the equations \((a+1)^{i}=\sum_{j=0}^{d-1}s_{ij}a^{j}\). To solve this, we recall the binomial theorem, which states that \((a+1)^{i}=\sum_{j=0}^{i}\binom{i}{j}a^{j}\). Hence, we must have \(s_{ij}=\binom{i}{j}\), with the convention \(\binom{i}{j}=0\) for \(j>i\). This means that the matrix \(S\) encodes _Pascal's triangle_ in the form of a lower triangular matrix.
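In matrix form the successor gadget is simply the lower-triangular Pascal matrix, and the defining property \(SH(a)=H(a+1)\) is the binomial theorem; the following sketch (ours) checks this numerically for a few labels.

```python
# Numerical check (ours): the lower-triangular Pascal matrix is a successor gadget.
import numpy as np
from math import comb

d = 5
S = np.array([[comb(i, j) for j in range(d)] for i in range(d)])   # comb(i, j) = 0 for j > i

def H_state(a):
    """The (scaled) labelled H-state H(a) = (1, a, a^2, ..., a^(d-1))^T."""
    return np.array([a ** k for k in range(d)])

for a in range(-3, 7):                        # the identity holds for any integer label
    assert np.array_equal(S @ H_state(a), H_state(a + 1))
```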
Note that because we already have a representation of \(H(0)\), that we can use Lemma 4.2 to construct a ZH-diagram for any _binary_ matrix: a matrix whose entries are only \(0\)'s and \(1\)'s. Our task then is to construct a ZH-diagram for \(S\) using only binary matrices. We achieve this by constructing each row of \(S\) individually and then _multiplexing_ between them. To see how this works, first consider the linear map \(R:\mathbb{C}^{d}\to\mathbb{C}^{d},|i\rangle\mapsto|i\rangle+|i+1\rangle\). One readily verifies that the coefficients of \(R^{j}|0\rangle\) for \(0\leq j<d\) correspond to the \((j+1)\)th row of Pascal's triangle. Hence, our desired successor gadget \(S\) satisfies the equation \(R^{j}|0\rangle=S^{T}|j\rangle\). Therefore, we need some way to apply a different power of \(R\) to different inputs (and then take the transpose, which is straightforward). To do this we need a _multiplexer_.
Consider the linear map \(M:(\mathbb{C}^{d})^{d+1}\to\mathbb{C}^{d}\) defined by
\[|x_{0}...x_{d-1}\rangle\otimes|c\rangle\ \mapsto\ \begin{cases}|x_{c}\rangle&x_{j}=0 \text{ for all }j\neq c\\ 0&\text{ otherwise.}\end{cases}\]
Let \(|\varphi^{i}\rangle=\sum_{j=0}^{d-1}\lambda_{ij}|j\rangle\) be a collection of states for \(0\leq i<d\) where for all \(i\) the \(|0\rangle\) coefficient \(\lambda_{i0}\) equals \(1\). Then for a fixed control value \(0\leq c<d\) we calculate:
\[M(|\varphi^{0}\rangle\otimes...\otimes|\varphi^{d-1}\rangle\otimes|c\rangle) = \sum_{j_{0}=0}^{d-1}\cdots\sum_{j_{d-1}=0}^{d-1}\lambda_{0j_{0}}\cdots\lambda_{(d-1)j_{d-1}}\,M(|j_{0}...j_{d-1}\rangle\otimes|c\rangle)\] \[= \lambda_{00}\cdots\lambda_{(d-1)0}\sum_{j_{c}=0}^{d-1}\lambda_{cj_{c}}|j_{c}\rangle\ =\ \sum_{j_{c}=0}^{d-1}\lambda_{cj_{c}}|j_{c}\rangle\ =\ |\varphi^{c}\rangle.\]
Hence, \(M\) multiplexes between these input states, using \(|c\rangle\) as a control. As each row of Pascal's triangle starts with \(1\), the states \(R^{j}|0\rangle\) have the right property. Hence \(M(R^{0}|0\rangle\otimes\cdots\otimes R^{d-1}|0\rangle\otimes|c\rangle)=R^{c}| 0\rangle=S^{T}|c\rangle\). So by combining \(M\) and powers of \(R\), and applying some appropriate transposes, we get \(S\).
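The two ingredients of this construction are also easy to verify numerically: the sketch below (ours) checks that the powers \(R^{j}|0\rangle\) reproduce the rows of Pascal's triangle, i.e. the columns of \(S^{T}\), and that each such state has leading coefficient \(1\) as required by the multiplexer \(M\).

```python
# Check (ours) that R^j |0> gives the (j+1)-th row of Pascal's triangle, i.e. S^T |j>.
import numpy as np
from math import comb

d = 5
R = np.eye(d) + np.diag(np.ones(d - 1), -1)       # R|i> = |i> + |i+1>  as a matrix on C^d
S = np.array([[comb(i, j) for j in range(d)] for i in range(d)])

e0 = np.zeros(d); e0[0] = 1
for j in range(d):
    row_j = np.linalg.matrix_power(R, j) @ e0     # R^j |0>
    assert np.array_equal(row_j, S[j])            # = row j of S, i.e. S^T |j>
    assert row_j[0] == 1                          # leading 1, as required by the multiplexer M
```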
Both maps \(R\) and \(M\) are binary, meaning we can realize them as a phase-free ZH-diagram using Corollary 4.3. We perform this construction for \(R\) in Appendix F, while for \(M\) we only outline the first few steps, without actually constructing the diagram, due to its immense size. Using placeholders for \(M\) and \(R\), we get the following diagram for our successor map \(S\):
We can then realize any integer-labeled \(H\)-box where the label is non-negative: \(H(n)=S^{n}H(0)\). Combining this with Lemma 4.2 means we can already construct arbitrary \(\mathbb{N}\)-valued matrices. To get all integer labeled \(H\)-boxes, we construct \(-1\) in the next subsection.
### Constructing all the labelled H-boxes
To construct more complicated labelled H-boxes we first realise that by taking the Schur product of two labelled H-boxes \(H(a)\) and \(H(b)\), we calculate the product of the labels (up to global scalar): \(H(a)\star H(b)=\frac{1}{\sqrt{d}}H(a\cdot b)\). Since we already know how to construct \(H(n)\) for any \(n\in\mathbb{N}\), and we have the phase-free H-box \(H(\omega)\) we can then also construct \(H(\omega n)\) for any \(n\in\mathbb{N}\). The second observation is that the successor gadget \(S\) adds \(1\) to the label regardless of the label, including non-integers. We can hence construct \(S^{m}H(\omega n)=H(\omega n+m)\). Iterating these two steps we can then build \(H(\omega^{d-1}n_{1}+\omega^{d-2}n_{2}+\cdots+\omega n_{d-1}+n_{d})\) where all the \(n_{j}\in\mathbb{N}\).
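A quick numerical check (ours) that the entrywise product of the label states behaves as multiplication of labels; the global \(1/\sqrt{d}\) factor mentioned above comes from the diagrammatic Schur-product construction and is ignored here.

```python
# Check (ours): entrywise product of H(a) and H(b) equals H(a*b), up to the global scalar.
import numpy as np

d = 5

def H_state(a):
    return np.array([a ** k for k in range(d)])

for a in range(-2, 4):
    for b in range(-2, 4):
        assert np.array_equal(H_state(a) * H_state(b), H_state(a * b))
```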
Recall that for a \(d\)th root of unity \(\omega\neq 1\) we have the identity \(\sum_{j=1}^{d-1}\omega^{j}=-1\). Hence using the above procedure we can also construct \(H(-1)\) as \(H(-1)=H(\omega+\omega^{2}+\cdots+\omega^{d-1})\). Combining this with our construction of \(H(\sum_{j}n_{j}\omega^{j})\) for positive \(n_{j}\) above, we can then construct any \(H(\sum_{j}a_{j}\omega^{j})\) where \(a_{j}\in\mathbb{Z}\). For instance, if we want to construct \(H(n_{2}\omega^{2}-n_{1}\omega-n_{0})\) we do it with the following sequence of operations:
\[H(n_{2}) \rightarrow H(n_{2}\omega)\ \rightarrow\ H(-n_{2}\omega)\ \rightarrow\ H(-n_{2}\omega+n_{1})\ \rightarrow\ H(-n_{2}\omega^{2}+n_{1}\omega)\] \[\rightarrow H(-n_{2}\omega^{2}+n_{1}\omega+n_{0})\ \rightarrow\ H(n_{2}\omega^{2}-n_{1}\omega-n_{0}).\]
To summarise the whole construction of this section: we started out with the observation that with Lemma 4.2 we can represent arbitrary polynomials in phase-free ZH, and in this way represent arbitrary binary matrices (where every entry is either \(0\) or \(1\)). We then found a way to construct a "successor gadget" \(S\) that increments the label of an H-box, \(SH(a)=H(a+1)\), from building blocks that are binary matrices which we know how to construct. Together with using the Schur product as a multiplication operation for H-box labels, this then allowed us to create H-boxes with arbitrary labels from \(\mathbb{Z}[\omega]\). But then we can appeal to the same construction in Corollary 4.3 and Theorem 4.4 to conclude the following:
**Theorem 5.1**.: The phase-free ZH-calculus for qudits of prime dimension \(d\) is universal for matrices over the ring \(\mathbb{Z}[\omega]\), where \(\omega=e^{2\pi i/d}\) is a \(d\)th root of unity.
Note that phase-free ZH-diagrams in fact represent a slightly larger fragment, corresponding to matrices \(\frac{1}{\sqrt{d}}M\) where \(M\) has entries in \(\mathbb{Z}[\omega]\). This is because we have global factors of \(\frac{1}{\sqrt{d}}\) that cannot be made 'local' inside of the matrix. This is analogous to the qubit result for the phase-free ZH-calculus [4, Section 8.3] and the Toffoli+Hadamard circuit fragment [2].
## 6 Conclusion
We have introduced a qudit ZH-calculus, and showed how to generalise all the rules of the phase-free qubit calculus. We have established a universality result both for qudit ZH over an arbitrary ring, as well as for the phase-free ZH-calculus. We found that phase-free ZH-diagrams correspond to postselected circuits of Hadamard and \(|0\rangle\)-controlled \(X\) gates. We showed that this gate set is approximately universal for qudit computation, and we found an almost asymptotically optimal strategy for compiling classical reversible qudit logic to this gate set.
The most immediate question about our qudit ZH-calculus is whether our generalisation of the qubit phase-free rules remains complete for qudits. It is possible to generalise the unique normal form for qubits from [4], but it is far from clear how to prove that we can reduce arbitrary diagrams to this normal form. Another open question is whether our construction in Theorem 2.1 is optimal, or whether it can be improved by a logarithmic factor. An interesting future direction would be to translate to and from
the ZXW-calculus [30] to achieve completeness of the qudit ZH-calculus and thereafter improve the challenging compilation of classical reversible logic in photonic quantum computing [18].
**Acknowledgements**: LY is supported by an Oxford - Basil Reeve Graduate Scholarship at Oriel College with the Clarendon Fund. PR was supported by the German Academic Scholarship Foundation. We thank the anonymous reviewers for their feedback.
|
2308.16261 | The Missing Link: Testing Galactic Chemical Evolution Models with the
First Multi-Isotopic Abundances in Solar Twin Stars | We present the first isotopic abundances of both $^{13}$CO and C$^{18}$O in
solar twin stars and test the results against several galactic chemical
evolution (GCE) models with different nucleosynthesis prescriptions. First, we
compare M-band spectra from IRTF/iSHELL to synthetic spectra generated from
custom solar atmosphere models using the PHOENIX atmosphere code. Next, we
compare our calculated abundances to GCE models that consider isotopic yields
from massive stars, asymptotic giant branch (AGB) stars and fast-rotating
stars. The $^{12}$C/$^{13}$C ratios determined for this sample of solar twins
are consistent with predictions from the selected GCE models; however, the
$^{16}$O/$^{18}$O ratios tentatively contradict these predictions. This project
constitutes the first in a stellar chemical abundance series seeking to: (1)
support the James Webb Space Telescope (JWST) as it characterizes exoplanet
atmospheres, interiors, and biosignatures by providing host star abundances (2)
identify how unexplored stellar abundances reveal the process of galactic
chemical evolution and correlate with star formation, interior, age,
metallicity, and activity; and (3) provide improved stellar ages using stellar
abundance measurements. By measuring elemental and isotopic abundances in a
variety of stars, we not only supply refined host star parameters, but also
provide the necessary foundations for complementary exoplanet characterization
studies and ultimately contribute to the exploration of galactic, stellar, and
planetary origins and evolution. | David R. Coria, Ian J. M. Crossfield, Joshua Lothringer, Becky Flores, Nikos Prantzos, Richard Freedman | 2023-08-30T18:38:50Z | http://arxiv.org/abs/2308.16261v1 | The Missing Link: Testing Galactic Chemical Evolution Models with the First Multi-Isotopic Abundances in Solar Twin Stars
###### Abstract
We present the first isotopic abundances of both \({}^{13}\)CO and C\({}^{18}\)O in solar twin stars and test the results against several galactic chemical evolution (GCE) models with different nucleosynthesis prescriptions. First, we compare M-band spectra from IRTF/iSHELL to synthetic spectra generated from custom solar atmosphere models using the PHOENIX atmosphere code. Next, we compare our calculated abundances to GCE models that consider isotopic yields from massive stars, asymptotic giant branch (AGB) stars and fast-rotating stars. The \({}^{12}\)C/\({}^{13}\)C ratios determined for this sample of solar twins are consistent with predictions from the selected GCE models; however, the \({}^{16}\)O/\({}^{18}\)O ratios tentatively contradict these predictions. This project constitutes the first in a stellar chemical abundance series seeking to: (1) support the James Webb Space Telescope (JWST) as it characterizes exoplanet atmospheres, interiors, and biosignatures by providing host star abundances (2) identify how unexplored stellar abundances reveal the process of galactic chemical evolution and correlate with star formation, interior, age, metallicity, and activity; and (3) provide improved stellar ages using stellar abundance measurements. By measuring elemental and isotopic abundances in a variety of stars, we not only supply refined host star parameters, but also provide the necessary foundations for complementary exoplanet characterization studies and ultimately contribute to the exploration of galactic, stellar, and planetary origins and evolution.
Solar analogs -- Isotopic abundances -- Galactic Chemical Evolution -- Late-type stars
## 1 Introduction
In stellar and planetary astrophysics, one of the most important areas of study is the chemical composition of the object in question. Until recently, large-scale photometric surveys could determine only the most fundamental stellar parameters such as mass, radius, luminosity, and temperature. While these parameters are important, they do not provide the context necessary for examining the chemical evolution of our galaxy, nor do they provide a baseline for exploring planet formation mechanisms, exoplanet atmospheres, and exoplanet interiors. For this, we need spectroscopy and stellar chemical abundances. Since generations of stars produce different elements, elemental and isotopic abundance ratios provide information on the age of a system and can be used as a sort of "cosmic clock". These measurements are used to perform key tests of mixing mechanisms inside stars and serve as powerful diagnostic tools to constrain chemical enrichment, galaxy formation and evolution models when used alongside accurate stellar ages, distances, and kinematics (Jackson et al., 2021; Romano et al., 2017).
On the larger, galactic scale, abundance surveys have revealed interesting trends between particular stellar elemental abundances, stellar age, and various stellar populations throughout the galaxy (Brewer and Fischer, 2016; Adibekyan et al., 2018; Botelho et al., 2020; Nissen, 2015; Nissen et al., 2020; Delgado-Mena et al., 2019, 2021). Other studies use stellar abundances to reconstruct the chemical history of the Milky Way (Jofre et al., 2017; Jackson et al., 2021) and to track down our Sun's long-lost siblings (Adibekyan et al., 2018). Some stellar abundances can even be used to distinguish exoplanet host stars from the general galactic stellar population
(Brewer et al., 2016; Swastik et al., 2022; Delgado-Mena et al., 2021; Teske et al., 2019). Although galactic chemical evolution (GCE) models have already successfully reproduced present-day cosmic abundances of many elements (e.g. Kobayashi et al. (2011); Prantzos et al. (2018); Romano et al. (2017)), new stellar abundance measurements allow for modelers to test the adopted stellar nucleosynthetic yields for which there is little observational data. Doing so will reveal whether the current understanding of stellar yields and chemical evolution models is truly accurate or simply appears accurate by coincidence. The \({}^{12}\)C/\({}^{13}\)C ratio is most commonly observed in giant stars where it is well known that various phenomena alter the initial \({}^{12}\)C/\({}^{13}\)C ratio. However, such comparisons also depend on stellar evolution and interior models. To complement these data, GCE model predictions should also be compared to \({}^{12}\)C/\({}^{13}\)C ratios measured in unevolved stars, like FGKM dwarf stars, that have preserved the initial isotope ratio in their envelopes. All of the GCE model studies mentioned in this paper (Kobayashi et al., 2011; Romano et al., 2017; Prantzos et al., 2018) issue a call for dwarf star isotopic abundance ratios, particularly for those isotope ratios (e.g. CNO (Romano, 2022), Mg, Si, Ti) that we cannot trust from giant stars. Since these publications, the \({}^{12}\)C/\({}^{13}\)C ratio has been measured in solar twin stars (Botelho et al., 2020), M-dwarf stars (Crossfield et al., 2019), and even in sub-stellar objects like brown dwarfs and directly imaged exoplanets (Zhang et al., 2021, 2021).
On a smaller, planetary-system scale, stellar elemental abundances provide a glimpse into planetary formation and migration mechanisms as well as planetary chemical composition. Since stellar atmospheres evolve slowly, the elemental abundances of exoplanet hosts tend to reflect the composition of their planet-forming disks (Brewer and Fischer, 2016) and have the potential to yield constraints on planet formation processes and, in turn, even the physical properties of exoplanets themselves (Bedell et al., 2018). These chemical signatures, observed in the photospheres of stars, can be traced back to a planet's sequestration of heavy elements during planet formation or later in the system's history when a host star accretes planetary material. Thus, the presence, absence, and composition of planets could ideally be inferred from minute differences in the abundances between stars (Gaidos, 2015).
Recent chemical abundance surveys explore the implications for planet formation of refractory element abundances (including C, O, Mg, and Si) and overall metallicity for thousands of solar analog stars and others within the local solar neighborhood (Fortney, 2012; Brewer and Fischer, 2016; Bedell et al., 2018; Teske et al., 2019; Nibauer et al., 2021; Swastik et al., 2021). Studying the chemical composition of carbonaceous chondrites within our own solar system (Anders and Grevesse, 1989; Braukmuller et al., 2018) and the photospheres of nearby polluted white dwarfs (Harrison et al., 2018; Bonsor et al., 2021; Putirka and Xu, 2021) provides another means of indirectly inferring the composition of planetary material and reinforces the link between stellar abundances and exoplanet composition.
Other studies explore the possibility of using elemental abundance ratios like the carbon-to-oxygen ratio (Oberg et al., 2011; Reggiani et al., 2022; Seligman et al., 2022), refractory-to-volatile ratios (Hands and Helled, 2021; Lothringer et al., 2021; Welbanks et al., 2019), or isotopic abundance ratios like \({}^{12}\)C/\({}^{13}\)C (Zhang et al., 2021, 2021) to constrain a planet's formation location relative to stellar "snowlines". As we enter the era of the space-based James Webb Space Telescope and ground-based ELTs, we also prepare to observe exoplanet atmospheres in unprecedented detail: from massive, accreting super-Jupiters like TYC 8998-760-1 b down to super-Earths and smaller terrestrial planets like those in the TRAPPIST-1 system. Contemporary and future researchers use these planetary and stellar abundances to model exoplanet atmospheres, to infer the structure and composition of the exoplanet's interior, and to understand how atmospheres and interiors co-evolve over time (Madhusudhan, 2012; Unterborn et al., 2014; Brewer and Fischer, 2016; Unterborn et al., 2017; Lincowski et al., 2018; Lincowski et al., 2019).
Clearly, there is plenty of work being done to derive stellar elemental abundances and explore their relationship to planet formation and galactic chemical evolution, however, the same cannot be said for isotopic abundances. The present-day isotopic abundance database contains only a handful of measurements from giant stars and even fewer from dwarf stars and is currently barring progress to test isotopic abundances against GCE models and planet formation mechanisms.
In this paper, we present the first \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratio measurements made in solar twin stars using infrared fundamental CO band features. By developing a body of precise stellar isotopic abundance profiles, we aim to identify the "missing link" between GCE model predictions and observations, and to reveal the physical phenomena responsible for present-day abundances in the process.
In Section 2, we provide an overview of CO isotopic abundances including: isotope production in AGB stars, archival isotope measurements in giant stars, pilot studies in dwarf stars and the challenges faced, the potential of isotopic abundances to act as stellar chronometers,
and also their role in constraining GCE models. In Section 3, we present our solar twin sample (including fundamental stellar parameters and elemental abundances) and describe our observations with IRTF/iSHELL (Rayner et al., 2016) and our spectral reduction. Section 4 contains a description of our PHOENIX stellar models and our isotopic abundance analysis methodology. In Section 5, we present our \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratio measurements for our solar twin sample and compare them to GCE models (Kobayashi et al., 2011; Romano et al., 2017; Prantzos et al., 2018) and archival \({}^{12}\)C/\({}^{13}\)C measurements (Botelho et al., 2020). Finally, in Section 6 we provide a brief commentary on our infrared, CO-based isotopic abundance analysis and present an overview of future stellar targets and other isotopic abundance ratios of interest.
## 2 Background & Motivation
### Carbon and Oxygen Isotope Production Mechanisms
As low-to-intermediate mass stars (\(1\leq M/M_{\odot}\leq 8\)) enter the late stages of their evolution, they play an important role in the chemical enrichment of the ISM, particularly in regard to minor isotope production. Asymptotic Giant Branch stars, or AGB stars, constitute the late phase in the evolution of these low-to-intermediate mass stars. Depending on time and location within the galaxy, up to 50% of the material returned to the ISM by dying stars comes from AGB stars (Goswami, 2014), and the many grain species formed in their cool circumstellar envelopes contribute greatly to the dust population of the ISM (Tielens & Allamandola, 1989). Therefore, understanding the nucleosynthetic processes and yields from these stars is crucial to chemical evolution modeling efforts. AGB star isotopic abundance ratios are great probes of internal mixing processes and their effect on stellar nucleosynthesis as different dredge-up events mix the burning material within the stellar envelope. During their red giant branch (RGB) phase, stars undergo a convective mixing process called the first dredge-up (FDU). This process carries nuclei from internal layers to the stellar surface. Because these internal layers are affected by partial CNO cycling which produces minor CNO isotopes like \({}^{13}\)C, this leads to a decrease in the atmospheric \({}^{12}\)C/\({}^{13}\)C ratio with respect to main sequence values of \(\sim 90\) down to \(\sim 20-30\). The FDU leaves \({}^{16}\)O abundances unaltered, increases the \({}^{17}\)O abundance by about 50%, and slightly reduces \({}^{18}\)O (Abia et al., 2017). A second dredge-up event occurs during the early-AGB phase of the more massive stars (M \(\geq 4M_{\odot}\)) where \({}^{4}\)He and \({}^{14}\)N are brought to the surface, but this does not modify the CNO isotope ratios significantly. The third dredge-up (TDU) event that occurs during the star's main AGB phase, however, mixes products of He burning into the envelope. Since \({}^{12}\)C is the main product of He burning, the stellar surface becomes enriched in \({}^{12}\)C and the \({}^{12}\)C/\({}^{13}\)C ratio increases past the previous FDU value (Abia et al., 2017). The amount of carbon in the envelope may eventually exceed the amount of oxygen, and so the AGB star becomes an AGB carbon star.
\({}^{12}\)C, the main carbon isotope, is a primary product of the triple-\(\alpha\) process that occurs during a star's helium-burning phase and drives mixing between the processed core and outer envelope in low to intermediate-mass AGB stars. \({}^{12}\)C from the hydrogen-burning shell in intermediate and massive stars is responsible for producing all of the nuclei involved in the CNO cycle: \({}^{13}\)C, \({}^{15}\)N, and \({}^{17}\)O. While \({}^{12}\)C is significantly produced by low mass stars (1-4 M\({}_{\odot}\)), \({}^{13}\)C is produced mainly in intermediate mass stars and massive stars via three main production mechanisms: (1) the CNO cycle resulting from H-burning; (2) \({}^{12}\)C burning in low metallicity, fast-rotating massive stars; and (3) proton-capture nucleosynthesis in AGB stars (Botelho et al., 2020). State-of-the-art GCE models investigate the connection between stellar rotation and internal mixing mechanisms by implementing nucleosynthetic yields from massive, fast-rotating stars (Prantzos et al., 2018; Romano et al., 2019).
Synthesis of the primary oxygen isotope, \({}^{16}\)O, is well understood since it is a primary product of stellar evolution. It is produced exclusively by massive stars from \({}^{12}\)C via \(\alpha\)-capture at the end of a star's helium burning phase. Lower mass stars eject only the initial \({}^{16}\)O of their envelope and thus do not contribute to the \({}^{16}\)O enrichment of the galaxy. In fact, both the \({}^{16}\)O and \({}^{18}\)O stellar yields are often negative in intermediate-mass asymptotic giant branch (AGB) stars because they are destroyed by proton-capture nucleosynthesis during CN burning (Kobayashi et al., 2011). Production of the secondary oxygen isotopes, \({}^{17}\)O and \({}^{18}\)O, depends on pre-existing seed nuclei. The lighter of the two, \({}^{17}\)O, is mainly produced through CNO burning of hydrogen into helium via the reaction \({}^{17}\)F \(\rightarrow\) \({}^{17}\)O + e\({}^{+}\) + neutrino, followed by \({}^{17}\)O + \({}^{1}\)H \(\rightarrow\) \({}^{14}\)N + \({}^{4}\)He. The latter reaction leads to the production of the next heavier oxygen isotope. The heavy oxygen isotope, \({}^{18}\)O, is primarily produced from \(\alpha\)-captures on \({}^{14}\)N, left over from CNO cycling, which occurs during the initial stages of helium burning (Meyer et al., 2008; Nittler & Gaidos, 2012). \({}^{18}\)O is also a secondary product of the CNO cycle and explosive nucleosynthesis (T \(\sim 10^{9}\) K) in massive stars. It is also worth noting that more massive stars contribute significantly to the production of \({}^{16}\)O and \({}^{18}\)O since these
isotopes require helium burning for production whereas low-mass stars contribute more to \({}^{17}\)O synthesis which only requires hydrogen burning (Meyer et al., 2008; Nittler and Gaidos, 2012). The evolution of the minor oxygen isotope abundances is not well understood. Specifically, the balance between \({}^{18}\)O depletion via CNO cycling and \({}^{18}\)O production during helium burning is not well studied.
### Isotopic CNO Abundances
#### 2.2.1 Giant Star CNO Isotopic Abundances
CNO isotopic abundances have previously been measured in several red giant stars and AGB carbon stars. Observations show that \({}^{16}\)O/\({}^{17}\)O/\({}^{18}\)O ratios are in agreement with evolutionary model predictions (Tsuji, 2006; Abia et al., 2017) which confirms that oxygen ratios are well established using post-FDU values. Although the observed \({}^{12}\)C/\({}^{13}\)C ratios in these stars appear fairly diversified in the range of \(\sim\)5-50 with a majority in the \(\sim\)10-20 range, most appear lower than theoretical predictions and much closer to the CNO cycle equilibrium \({}^{12}\)C/\({}^{13}\)C ratio of \(\sim 3.5\) (Tsuji, 2006; Takeda et al., 2019). These low \({}^{12}\)C/\({}^{13}\)C ratios could, in theory, be explained by extra mixing processes during the RGB and AGB evolution phases, however this would also imply extreme oxygen and nitrogen isotope ratios of \({}^{16}\)O/\({}^{17}\)O \(\geq\) 1000, \({}^{16}\)O/\({}^{18}\)O \(\geq\) 2000, and \({}^{14}\)N/\({}^{15}\)N \(\geq 10^{4}\) (Tsuji, 2006; Abia et al., 2017). Puzzling observations of AGB carbon stars with very low \({}^{12}\)C/\({}^{13}\)C ratios do not show these high oxygen and nitrogen ratios. Moreover, the stars that _do_ demonstrate extreme oxygen and nitrogen ratios have normal \({}^{12}\)C/\({}^{13}\)C ratios (Abia et al., 2017).
Carbon and oxygen isotope ratio measurements in giant stars are not as helpful to GCE models as dwarf star measurements. This is because giant-star nucleosynthesis significantly alters the initial ratio throughout the star's evolution whereas dwarf stars tend to preserve their initial isotopic ratios in their atmospheres (Meyer et al., 2008; Nittler and Gaidos, 2012; Prantzos et al., 2018). Thus, dwarf star abundances better reflect the chemical composition of their birth clusters and are overall a better gauge of chemical evolution over time.
#### 2.2.2 Challenges to Isotopic Abundance Measurements in Dwarf Stars
Until recently, it has been notoriously difficult to detect isotopes in cool dwarf stars, hence the inadequate isotopic abundance catalog. Isotopic effects are prominent only in molecular, not atomic, lines; therefore, spectral molecular features are most prominently observed in stars such as M-dwarfs whose photospheres are cool enough to allow the formation of various molecular species (Tsuji, 2016). However, molecular observations come with some challenges due to telluric absorption and the sheer density of absorption lines throughout the optical and infrared. This includes strong molecular absorption due to metal oxides and hydrides (Koizumi et al., 2020) such as TiO in the optical band or H\({}_{2}\)O in the NIR band (Souto et al., 2018) and the opacity due to millions of other molecular absorption lines that dominate an M-dwarf's observed spectrum. These spectral features have hampered past efforts to identify desired isotopic lines necessary for a detailed chemical analysis and accurate atmospheric models (Veyette and Muirhead, 2018). In addition, determining isotopic abundance ratios in stars requires very high-resolution and high signal-to-noise spectra (Adibekyan et al., 2018; Romano et al., 2017). It is rather difficult to detect isotopologue features in dwarf star spectra, and so the database of isotopic carbon and oxygen abundances is rather small.
#### 2.2.3 Dwarf Star Carbon and Oxygen Isotopic Abundance Ratios
Despite these challenges, a few isotopologue detections have been made in dwarf stars ranging from F-type stars down to M dwarfs. A pilot study used medium-resolution (R \(\sim 20,000\)) near-infrared spectra to measure the \({}^{12}\)C/\({}^{13}\)C ratio in a sample of M dwarf stars (Tsuji, 2016). The medium resolution observations could not resolve the faint \({}^{13}\)C\({}^{16}\)O from stronger \({}^{12}\)C\({}^{16}\)O, but evidence for \({}^{13}\)C\({}^{16}\)O in these M dwarf spectra was reported for the first time. This study demonstrates the need for very high-resolution spectra; otherwise, CO isotopologue lines become too blended to measure precise abundances.
With this caveat in mind, later studies successfully measured the \({}^{12}\)C/\({}^{13}\)C ratio in solar twins using \({}^{12}\)CH and \({}^{13}\)CH features in the optical band (Adibekyan et al., 2018; Botelho et al., 2020). These efforts relied on high-resolution (R \(\sim\)120,000) and high S/N optical HARPS spectra to make their measurements. This high resolution allowed them to discern weaker \({}^{13}\)CH lines from the predominant \({}^{12}\)CH lines. The \({}^{12}\)C/\({}^{13}\)C ratio is used to look for stars from the same birth cluster as the Sun (Adibekyan et al., 2018) and also to identify \({}^{12}\)C/\({}^{13}\)C trends with stellar isochrone age (Botelho et al., 2020).
For cooler stellar and sub-stellar objects, the preferred method is to target the \({}^{12}\)CO/\({}^{13}\)CO ratio instead since this carbon isotope ratio is more readily detectable and attained from the ground using near- and mid-infrared observations (Molliere and Snellen, 2019). Recent work shows that infrared observations of the CO fundamental rovibrational band at high resolution allow the observation of rarer isotopologues, even in dwarf stars, by resolving individual lines in the spectrum and providing sensitivity to much lower abundances of \({}^{13}\)CO and C\({}^{18}\)O (Crossfield et al., 2019). This approach was used to measure the first multiple isotopic abundances of M-dwarfs in the mid-infrared at high resolution (R \(\sim 60,000\)): not merely the common \({}^{12}\)C\({}^{16}\)O but also the rarer species \({}^{13}\)C\({}^{16}\)O and \({}^{12}\)C\({}^{18}\)O (Crossfield et al., 2019). Since CO is an oxygen-bearing molecule, unlike the CH used in earlier studies, these CO isotopologue features allow us to measure the \({}^{16}\)O/\({}^{18}\)O ratio in addition to the \({}^{12}\)C/\({}^{13}\)C ratio. This result shows that isotopic abundance analysis is not only possible via high resolution spectroscopy, but is also the next logical step for many stellar targets.
In the following sections, we go into further detail describing our present understanding of elemental and isotopic abundances and how they may provide the "missing link" between stellar photospheres and galactic chemical evolution.
### Chemical Abundances as Stellar Chronometers
Investigations into the mechanisms responsible for chemical enrichment of the galaxy have led to the study of elemental abundances as "stellar chronometers" that have the potential to be used to determine stellar ages. It is assumed that the chemical composition of low-mass dwarf stars remains unchanged throughout stellar lifetimes, and instead chemical abundances evolve in time through inheritance between stellar generations. Because older stellar populations synthesize metals and release them back into the ISM to be recycled into new stars, elemental abundances increase over time (da Silva et al., 2012). Past observational data has shown that there is no tight correlation between stellar ages and metallicities (Holmberg et al., 2007; Casagrande et al., 2011), therefore we have to delve beyond and explore the relation between individual elemental abundances and stellar age. Core-collapse (Type II) supernovae enriched the early galaxy with \(\alpha\)-process elements (C, O, Ne, Mg, Si, S, Ar, Ca, Ti) on a faster timescale than Type Ia supernovae could enrich the galaxy in iron-peak elements (e.g. Cr, Mn, Fe, Ni) (Kobayashi et al., 2020). This delay in iron-peak element enrichment produces chemical signatures like [\(\alpha\)/Fe] that serve as a good proxy for stellar age and a probe for the history of galactic chemical evolution (Haywood et al., 2013; Delgado-Mena et al., 2019; Kobayashi et al., 2020). In general, the [\(\alpha\)/Fe] abundance ratios and others formed mostly by Type II supernova increase with the age of a star (Nissen, 2015; Bedell et al., 2018; Delgado-Mena et al., 2019). The ratios of s-process elements (Sr, Ba, Zr, Y, La), primarily produced in low-mass AGB stars, over Fe follow trends similar to the iron-peak elements and show a negative correlation with stellar age (Delgado-Mena et al., 2019). The first major study of elemental abundances and their relation to age by da Silva et al. (2012) identified [Y/Mg], [Sr/Mg], [Y/Zn] and [Sr/Zn] as having significant trends as a function of age in a large sample of solar twins. Since this study, several other abundance ratios have been identified as potential chemical clocks and we discuss them in more detail below.
Among stellar chronometer candidates, the most widely discussed is perhaps the [Y/Mg] ratio. In the context of stellar nucleosynthesis, it makes sense that this abundance ratio is sensitive to stellar age. Massive stars produce and expel Mg into the ISM on a different timescale than intermediate-mass AGB stars produce and expel Y. [Y/Fe] notably decreases with stellar age contrary to other s-process elements with a rather steep negative slope (slope = -0.033 dex Gyr\({}^{-1}\)) and [Mg/Fe] increases with stellar age (slope = +0.009 dex Gyr\({}^{-1}\)) (Nissen, 2015) and so [Y/Mg] acts as a sensitive probe of stellar age. Multiple studies have confirmed the trend of [Y/Mg] with age (Nissen, 2015; Maia et al., 2016), and their equations can determine a stellar age with \(\sim\)0.8 Gyr precision (for solar twin stars aged 0-10 Gyr) given a precise measurement of [Y/Mg] (\(\pm\)0.02). The best stellar chronometer candidates are often ratios of light s-process elements (Sr, Ba, Zr, Y, La) over \(\alpha\) elements (C, O, Ne, Mg, Si, S, Ar, Ca, Ti plus Zn and Al) (Delgado-Mena et al., 2019) because of the drastically different "enrichment timescales". For more information on elemental abundance ratios that demonstrate a trend with stellar age in solar twin samples, refer to Nissen (2015); Maia et al. (2016); Nissen (2016); Spina et al. (2017); Delgado-Mena et al. (2019); Jofre et al. (2020); Nissen et al. (2020).
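To make the chemical-clock idea concrete, the short sketch below inverts an assumed linear [Y/Mg]-age calibration and propagates the measurement uncertainty. The coefficients are approximate values in the spirit of Nissen (2015) and are included only for illustration; they are not the calibration adopted in this work.

```python
import numpy as np

# Illustrative [Y/Mg] "chemical clock" of the form [Y/Mg] = a + b * Age(Gyr).
# The coefficients below are assumed, approximate values for this sketch only.
a, sigma_a = 0.17, 0.01       # dex
b, sigma_b = -0.037, 0.002    # dex per Gyr

def age_from_ymg(ymg, sigma_ymg):
    """Invert the linear clock and propagate 1-sigma uncertainties."""
    age = (ymg - a) / b
    # first-order error propagation for age = (ymg - a) / b
    sigma_age = np.sqrt((sigma_ymg / b) ** 2
                        + (sigma_a / b) ** 2
                        + ((ymg - a) * sigma_b / b ** 2) ** 2)
    return age, sigma_age

age, err = age_from_ymg(-0.02, 0.02)          # a hypothetical solar-twin measurement
print(f"Age ~ {age:.1f} +/- {err:.1f} Gyr")   # roughly 5.1 +/- 0.7 Gyr with these inputs
```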
Since the enrichment timescales for major and minor isotopes are analogous to those for the best stellar chronometer candidates, ratios of major to minor isotopes must also be considered. In fact, Botelho et al. (2020) have recently reported that the \({}^{12}\)C/\({}^{13}\)C ratio in solar twin stars seems to have a tentative correlation with metallicity but not quite with stellar age. It remains unknown how the \({}^{16}\)O/\({}^{18}\)O ratios behave with stellar age or metallicity in solar twin stars because of the difficulty in detecting faint C\({}^{18}\)O. In this paper, we finally explore how both the \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios correlate with stellar age in a small sample of solar twin stars, but ultimately more dwarf star observations are necessary to test carbon and oxygen isotope ratios' performance as chemical clocks.
Solar twin stars also exhibit peculiar abundance signatures that divide the solar twin population into distinct age groups and help trace their origin within the galaxy. Examination of solar twin stars in the solar vicinity shows that the ages of these stars are widely distributed from 0-10 Gyr (Tsujimoto, 2021). Stars of similar ages share elemental abundance patterns, as anticipated by the chemical evolution of individual elements over time (Bedell et al., 2018). The oldest group of solar twins (ages \(\sim 8.7\) Gyr) has abundance patterns (C to Dy) quite similar to solar-metallicity galactic bulge stars. The super-solar abundances of this older group of solar twins are likely a result of the faster chemical enrichment that occurs closer to the galactic center. These abundance patterns further suggest that the oldest solar twins have migrated to the local solar neighborhood from birthplaces within the galactic bulge (Tsujimoto, 2021). The second group of solar twins, with ages \(\sim 5.9\) Gyr, demonstrates abundance patterns nearly identical to the Sun, and this association of the Sun with slightly older solar twins reinforces the notion of a common birthplace much closer to the galactic center. The \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios, if proven to have a significant trend with stellar age, would be powerful chemical clocks and stellar formation location tracers since they also exhibit prominent gradients with galactocentric radius (Romano et al., 2019).
Clearly there is plenty to explore in galactic chemical evolution and exoplanet systems using elemental and isotopic abundances. However, dwarf star isotopic abundance measurements are quite scarce despite being more reliable than giant star measurements. Currently, isotopic abundance analyses focus on solar twin stars of near-solar metallicity. It is crucial we expand the database to probe low-mass _and_ low-metallicity stars where GCE models lack observational constraints. Therefore, we present an isotopic carbon and oxygen abundance analysis of six bright, well-studied solar twins. This paper serves as a pilot study using infrared (M band) CO features to derive \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios. Once applied to solar twin stars, the technique could be extended to K and M dwarf stars as well as low-metallicity stars to bridge the 'observational' gap in GCE models.
## 3 Sample and Observations
### Solar Twin Sample
We selected a sample of six bright solar twin stars with near-solar spectral types, radii, effective temperatures, metallicities, carbon abundances, and oxygen abundances (See Tables 1 and 3). The stellar ages used in this analysis are taken from dos Santos et al. (2016) and originally obtained by Maia et al. (2016) using Yonsei-Yale isochrones (Yi et al., 2001). The solar twin ages range from 1-7 Gyr, but the metallicities are all within 0.15 dex from solar. The solar twin ages cover a large portion of the "GCE vs. Time" space; however, as our sample is nearly solar in [Fe/H], the low-metallicity ([Fe/H] \(<-0.2\)) space remains unexamined by this sample.
Three out of the six solar twins in this sample appear in multi-star systems: HIP 77052 is in a binary (Hirsch et al., 2021), HIP 85042 is in a triple system (Riddle et al., 2015), and HIP 102040 has three optical companions (Riddle et al., 2015). Out of the six solar twins, HIP 29432 is the only confirmed exoplanet host. HIP 29432 hosts a modestly-irradiated Neptune-mass planet with a semimajor axis of 0.55 AU (Fulton et al., 2016).
### Observations
We observed the six solar twins listed in Table 2 during the 2019A semester using the iSHELL spectrograph (Rayner et al., 2016) on the NASA Infrared Telescope Facility. Table 2 lists the technical details of these observations. We obtained spectra at R = 70,000 (4.3 km/s) and mostly-continuous coverage from 4.52-5.24 \(\mu\)m. We reduced the raw iSHELL data using the SpeXTool Data Reduction package (Cushing et al., 2004) which corrects for pixel-to-pixel variations and produces the wavelength calibration. We then compute spatial profiles and extract two one-dimensional spectra (one at each nod position). These two spectra are combined to produce a single spectrum for each star.
We correct for telluric absorption in our spectra following the approach of Crossfield et al. (2019), using the following A0V standard stars (V Magnitude): HR 7891 (4.82), HR 2133 (6.075), HR 3314 (3.90), HR 5881 (3.53), HR 6629 (3.75). Finally, we remove parts of the spectrum with obvious bad pixels and wherever S/N \(<\) unity. In practice, the choice of S/N cut-off is not especially significant since low-S/N parts of the spectrum are appropriately de-weighted when we calculate our weighted-mean line profile for each isotopologue (see Section 4).
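As a rough illustration of this cleaning step, the snippet below drops bad pixels and anything below the S/N cut. The array names are placeholders rather than the actual SpeXTool data products, and the retained per-pixel uncertainties are what later de-weight low-S/N regions in the stacking.

```python
import numpy as np

def clean_spectrum(wave, flux, flux_err, snr_cut=1.0):
    """Drop non-finite pixels and any pixel whose per-pixel S/N falls below
    snr_cut (a minimal sketch of the post-reduction cleaning described above)."""
    snr = np.abs(flux) / flux_err
    good = np.isfinite(flux) & np.isfinite(flux_err) & (snr >= snr_cut)
    return wave[good], flux[good], flux_err[good]
```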
Figure 1: A small section of the normalized spectra for the Sun from the ACE-FTS Solar Atlas (Hase et al., 2010) and our six solar twins from IRTF/iSHELL (Rayner et al., 2016). The spectra have been shifted to the same wavelength frame. The black dotted lines show the location of some \({}^{13}\)CO features used in our abundance analysis.
Table 1: Solar Twin Parameters

| Stellar ID | Spectral Type | K Mag | Radius (R\({}_{\odot}\)) | Age (Gyr) | T\({}_{eff}\) (K) | log g | Model T\({}_{eff}\) (K) | Model log g |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sun | G2V | 5.08 | 1.0 | 4.567 \(\pm\) 0.11 | 5780 | 4.44 | 5780 | 4.44 |
| HIP 29432 | G4V | 5.301 | 0.95 | 5.51 \(\pm\) 0.71 | 5758 \(\pm\) 5 | 4.44 | 5780 | 4.44 |
| HIP 42333 | G8V | 5.223 | 1.0 | 1.01 \(\pm\) 0.52 | 5848 \(\pm\) 8 | 4.50 | 5870 | 4.49 |
| HIP 77052 | G2V | 4.300 | 0.96 | 3.67 \(\pm\) 0.91 | 5683 \(\pm\) 5 | 4.48 | 5690 | 4.39 |
| HIP 79672 | G2V | 4.190 | 1.03 | 3.09 \(\pm\) 0.40 | 5814 \(\pm\) 3 | 4.45 | 5780 | 4.44 |
| HIP 85042 | G3V | 4.686 | 1.04 | 6.66 \(\pm\) 0.62 | 5694 \(\pm\) 5 | 4.41 | 5690 | 4.39 |
| HIP 102040 | G5V | 4.921 | 0.96 | 2.42 \(\pm\) 0.91 | 5838 \(\pm\) 6 | 4.48 | 5870 | 4.49 |

Note: Stellar ID from the Hipparcos Catalogue (ESA, 1997); spectral type and K-band magnitude from 2MASS (Cutri et al., 2003); stellar radius from Gaia DR2 (Brown et al., 2018); stellar age, effective temperature T\({}_{eff}\), and surface gravity (log g) from dos Santos et al. (2016); solar K magnitude from Willmer (2018); solar age from Bonanno et al. (2002) and Jacobsen et al. (2008). The final two columns give the effective temperature and surface gravity adopted for the set of PHOENIX model spectra used for each star.
Table 2: Observational Parameters

| Parameter | HIP 29432 | HIP 42333 | HIP 77052 | HIP 79672 | HIP 85042 | HIP 102040 |
| --- | --- | --- | --- | --- | --- | --- |
| Date [UT] | 2019-02-02 | 2019-03-29 | 2019-05-16 | 2019-05-16 | 2019-05-16 | 2019-05-16 |
| Time [UT] | 07:48 - 08:23 | 06:14 - 06:52 | 07:44 - 08:01 | 08:25 - 08:53 | 10:53 - 11:30 | 14:31 - 15:03 |
| iSHELL Mode | M1 | M1 | M1 | M1 | M1 | M1 |
| Slit | 0.375'' \(\times\) 15'' | 0.375'' \(\times\) 15'' | 0.375'' \(\times\) 15'' | 0.375'' \(\times\) 15'' | 0.375'' \(\times\) 15'' | 0.375'' \(\times\) 15'' |
| Integration Time [s] | 48.65 | 63.94 | 48.65 | 48.65 | 48.65 | 48.65 |
| Co-adds | 3 | 3 | 3 | 3 | 3 | 3 |
| Exposures | 40 | 34 | 20 | 32 | 42 | 32 |
| Median S/N | 16.6 | 36.2 | 32.5 | 45.3 | 36.4 | 22.0 |

Note: Observational parameters for the six solar twins observed with IRTF/iSHELL (Rayner et al., 2016).
## 4 Measuring Isotopic Abundances
### Model Spectra
To measure CO isotopologue abundances in our solar twin sample, we compare our observed spectra to synthetic spectra generated from custom solar atmosphere models derived from the PHOENIX atmosphere code (Version 16; Husser et al., 2013; Hauschildt et al., 1999). Our PHOENIX model atmospheres contain 64 vertical layers, spaced evenly in log-space on an optical depth grid from \(\tau=10^{-8}\) to \(10^{3}\), with output spectra spanning \(1.0-10^{6}\) nm. In our observed wavelength range, the models were sampled at least every 0.01 nm. In contrast to the M-dwarf synthetic spectra from our similar analysis (Crossfield et al., 2019), the solar-like models for this analysis were only run with H I, He I, and He II in NLTE to reduce computation time. For both the \({}^{13}\)CO and the C\({}^{18}\)O analysis, we generate synthetic spectra with \({}^{13}\)C and \({}^{18}\)O enrichments of 3\(\times\), 1.78\(\times\), 1\(\times\), 0.56\(\times\), 0.3\(\times\), and 0\(\times\) solar. We use a CO line list (Goorvitch, 1994) that contains lines for \({}^{12}\)C\({}^{16}\)O, \({}^{13}\)C\({}^{16}\)O, \({}^{12}\)C\({}^{18}\)O, and other CO isotopologues. This older line list is in good agreement with newer line lists of the CO fundamental band. These models are run for effective temperatures (in K) of 5690, 5780, and 5870 and log g (cgs) of 4.39, 4.44, and 4.49, respectively. Each solar twin's isotopologue abundance analysis is performed with the set of model spectra that best fits that star's effective temperature and log g (refer to Table 1).
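A minimal sketch of how a star might be matched to a model grid point is given below. The nearest-neighbour criterion and the relative weighting of T\({}_{eff}\) and log g differences are assumptions of this illustration, not a statement of the exact matching rule used; the grid values themselves come from Table 1.

```python
# PHOENIX grid points from Table 1 and the isotopologue enrichment factors
# (x Solar) for which synthetic spectra were generated (listed for reference).
MODEL_GRID = [(5690, 4.39), (5780, 4.44), (5870, 4.49)]   # (Teff [K], log g)
ENRICHMENTS = [3.0, 1.78, 1.0, 0.56, 0.3, 0.0]            # x Solar 13C or 18O

def pick_model(teff_star, logg_star):
    """Return the grid point closest to the star; Teff differences are scaled
    by 1/100 so that 100 K counts roughly like 1 dex in log g (an assumption)."""
    return min(MODEL_GRID,
               key=lambda p: abs(p[0] - teff_star) / 100.0 + abs(p[1] - logg_star))

print(pick_model(5848, 4.50))   # HIP 42333 -> (5870, 4.49), as in Table 1
```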
### Line Selection
We measured our isotopic abundances following the process described in Crossfield et al. (2019). In this study, we use relatively strong, isolated \({}^{13}\)CO and C\({}^{18}\)O lines to determine \({}^{13}\)C and \({}^{18}\)O isotopic abundances. Identifying these lines requires a focus on the highest S/N regions of the stellar spectra around 4.6-4.7 microns. Here, tellurics are relatively weak and the stellar spectra are dominated by \({}^{12}\)C\({}^{16}\)O, \({}^{13}\)C\({}^{16}\)O, and \({}^{12}\)C\({}^{18}\)O lines from the CO fundamental rovibrational band. We have compiled a list of the strongest \({}^{13}\)CO and C\({}^{18}\)O lines within our desired wavelength range using the HITRAN database (Gordon et al., 2017). The HITRAN line lists used for CO line identification are based on data from the Goorvitch (1994) line list used in our PHOENIX models. In this wavelength range, there are 41 \({}^{12}\)CO lines, 34 \({}^{13}\)CO lines, and 32 C\({}^{18}\)O lines, but not all of them are included in the abundance calculation. Some lines are obscured by strong telluric absorption lines, some fall outside the wavelength coverage of our IRTF/iSHELL spectra, and some weaker isotopologue lines are overshadowed by stronger absorption from \({}^{12}\)C\({}^{16}\)O and other molecular species. Please refer to our machine-readable line lists and the associated README file for more details on our line selection process.
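The selection logic can be sketched as a simple filter over a line list. The file name, column layout, and masked regions below are hypothetical and stand in for the machine-readable lists described above.

```python
import numpy as np

# Hypothetical line list exported from HITRAN with columns:
# wavelength [um], isotopologue label, line strength.
lines = np.genfromtxt("co_lines_hitran.txt", dtype=None, encoding="utf-8",
                      names=("wave_um", "iso", "strength"))

def select_lines(lines, iso, wave_range=(4.52, 5.24), masked_regions=()):
    """Keep lines of one isotopologue inside the spectral coverage and outside
    regions flagged for strong telluric or 12C16O blending (sketch only)."""
    keep = (lines["iso"] == iso)
    keep &= (lines["wave_um"] > wave_range[0]) & (lines["wave_um"] < wave_range[1])
    for lo, hi in masked_regions:      # user-supplied blend/telluric masks
        keep &= ~((lines["wave_um"] > lo) & (lines["wave_um"] < hi))
    return lines[keep]

co13_lines = select_lines(lines, iso="13C16O", masked_regions=[(4.66, 4.67)])
```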
### Abundance Calculation
Atomic abundances are often calculated using only a few absorption lines with relatively high S/N. In our case, however, it is important to note that most of these spectral lines belonging to the isotopologues of interest have low statistical significance and are barely visible by eye when considered individually. We thus create a single line profile for each CO isotopologue and each solar twin in our sample by taking all useable absorption lines for each isotopologue and combining them into a single, high S/N line profile. We create the single line profile by taking the weighted mean, after continuum-normalizing, of each spectral line. This produces the stacked absorption profiles shown in Fig. 2 and Fig. 3. We then create corresponding line profiles for the synthetic stellar models detailed above, using the same set of lines. The isotopic abundances for each CO isotopologue are then determined by comparing the stacked absorption lines from the observed spectrum to those of the synthetic models.
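A compact sketch of this stacking step is given below: each selected line is shifted to a common velocity grid and the spectra are combined with inverse-variance weights. Function and variable names are illustrative, and the input flux is assumed to be continuum-normalized already.

```python
import numpy as np

C_KMS = 2.998e5   # speed of light [km/s]

def stacked_profile(wave, flux, flux_err, line_centers, v_grid):
    """Weighted-mean line profile built from several individually weak lines."""
    num = np.zeros_like(v_grid, dtype=float)
    den = np.zeros_like(v_grid, dtype=float)
    for w0 in line_centers:
        v = C_KMS * (wave - w0) / w0          # velocity relative to this line center
        f = np.interp(v_grid, v, flux)        # resample onto the common velocity grid
        e = np.interp(v_grid, v, flux_err)
        w = 1.0 / e**2                        # inverse-variance weights
        num += w * f
        den += w
    return num / den                          # stacked, high-S/N profile
```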
We measure the \({}^{13}\)C/H and \({}^{18}\)O/H abundance ratios for each solar twin by a \(\chi^{2}\) analysis, which represents how well the model line profile fits the observed solar twin line profile. We calculate \(\chi^{2}\) over the velocity range \(\Delta V\) (\(6<\Delta V<12\)) km/s centered at the line profile center V = 0. Because we want to minimize \(\chi^{2}\) to identify the best fit abundance value, the set of \(\chi^{2}\) values (only including values \(\leq 175\)) is fit with a parabola and the minimum is then calculated, giving the best-fit isotopologue abundance. We infer 1\(\sigma\) confidence intervals using the region where \(\Delta\chi^{2}\leq 1\)(Avni, 1976). Fig. 4 shows an example of this approach for the Sun (further described below). The final \({}^{13}\)CO and C\({}^{18}\)O abundances are shown in Table 3.
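The parabola fit and the \(\Delta\chi^{2}=1\) interval can be written in a few lines. This is a sketch under the assumption that the fitted parabola opens upward, with the \(\chi^{2}\leq 175\) cutoff applied as described above; the function name is illustrative.

```python
import numpy as np

def best_fit_abundance(abundances, chi2, chi2_max=175.0):
    """Parabola fit to chi^2 vs. abundance; the minimum gives the best-fit
    value and Delta chi^2 <= 1 gives the 1-sigma interval (Avni 1976)."""
    x = np.asarray(abundances, dtype=float)
    y = np.asarray(chi2, dtype=float)
    keep = y <= chi2_max                       # drop poorly fitting models
    a, b, c = np.polyfit(x[keep], y[keep], 2)  # chi^2 ~ a*x^2 + b*x + c
    x_best = -b / (2.0 * a)                    # location of the parabola minimum
    sigma = np.sqrt(1.0 / a)                   # where a*(x - x_best)^2 = 1
    return x_best, sigma

# e.g. model enrichments of (3, 1.78, 1, 0.56, 0.3, 0) x Solar and their chi^2 values
```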
To test the accuracy of our \({}^{13}\)C/H and \({}^{18}\)O/H measurements within our solar twin sample and the consistency across different CO line lists, we also repeat this analysis using an infrared spectrum of the Sun from the ACE-FTS Solar Atlas (Hase et al., 2010). Ideally, if we measure the \({}^{13}\)CO and C\({}^{18}\)O abundances relative to solar _in a spectrum of the Sun_, our analysis should find an abundance of 1.0 x Solar for each CO isotopologue. This is nearly what we observe. Using this solar spectrum with uncertainties adopted from our highest S/N spectrum (HIP 79672; see Table 2), we measure \(1.10\pm 0.04\) and \(1.03\pm 0.42\) x Solar abundances for \({}^{13}\)CO and C\({}^{18}\)O respectively. The slight \({}^{13}\)CO overestimate appears in the solar twin analysis even after
Figure 2: Stacked \({}^{13}\)CO line profiles for our sample. The line profiles represent the weighted average of several M-band \({}^{13}\)CO lines for the observed stellar spectrum (in blue) and multiple PHOENIX model spectra of varying \({}^{13}\)CO abundances. We interpolate between these models using \(\chi^{2}\) minimization to determine the stellar \({}^{13}\)CO abundance relative to solar (See analysis example in Fig. 4; results shown in Table 3).
Figure 3: Stacked C\({}^{18}\)O line profiles for our sample. The line profiles represent the weighted average of several M-band C\({}^{18}\)O lines for the observed stellar spectrum (in blue) and multiple PHOENIX model spectra of varying C\({}^{18}\)O abundances. We interpolate between these models using \(\chi^{2}\) minimization to determine the stellar C\({}^{18}\)O abundance relative to solar (\(\chi^{2}\) minimization shown in Fig. 4; results shown in Table 3).
the \(\chi^{2}\leq 175\) cutoff. These results indicate that our systematic and modeling errors are \(\leq 10\%\) for both isotopologues. We discuss this issue further in Section 5.4.
## 5 Results
Our \({}^{13}\)CO, C\({}^{18}\)O, \({}^{12}\)C/\({}^{13}\)C, and \({}^{16}\)O/\({}^{18}\)O measurements are summarized in Table 3 for our six-solar-twin sample and the Sun. Similar to the analysis in Botelho et al. (2020), we calculate a weighted least-squares linear fit for our \({}^{12}\)C/\({}^{13}\)C ratios to identify tentative trends. The fit demonstrates a decrease in the \({}^{12}\)C/\({}^{13}\)C ratio over metallicity (slope = -157 \(\pm\) 82 dex\({}^{-1}\)). Similarly, we see the \({}^{12}\)C/\({}^{13}\)C ratio decrease over time with a slope of -6.65 \(\pm\) 2.15 Gyr\({}^{-1}\). A weighted linear fit of our oxygen ratios demonstrates an increasing trend over both metallicity and time: slope = +2438 \(\pm\) 713 dex\({}^{-1}\) and +75.55 \(\pm\) 56.8 Gyr\({}^{-1}\) respectively. Although the Botelho et al. (2020) data suggest a systematic increase in \({}^{12}\)C/\({}^{13}\)C with [Fe/H] but not with time, our sample does not seem to significantly favour either an increase or decrease.
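For reference, the weighted least-squares slopes quoted here can be computed with the standard closed-form expressions. The sketch below assumes simple \(1/\sigma^{2}\) weights on the isotope ratios and ignores any uncertainties on [Fe/H] or age.

```python
import numpy as np

def weighted_linear_fit(x, y, y_err):
    """Weighted least-squares fit of y = m*x + b with weights 1/sigma^2,
    returning the slope, intercept, and their 1-sigma uncertainties."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(y_err, dtype=float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    m = (S * Sxy - Sx * Sy) / delta            # slope
    b = (Sxx * Sy - Sx * Sxy) / delta          # intercept
    m_err, b_err = np.sqrt(S / delta), np.sqrt(Sxx / delta)
    return m, b, m_err, b_err
```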
We will now discuss how our new carbon and oxygen isotope ratios compare to various GCE models. We will also examine how these solar twin carbon isotope ratio measurements compare to archival measurements. A visual comparison of our measurements to several GCE models is shown in Fig. 5. The top two panels show \({}^{12}\)C/\({}^{13}\)C vs. [Fe/H] (left) and \({}^{16}\)O/\({}^{18}\)O vs. [Fe/H] (right). The bottom two panels show \({}^{12}\)C/\({}^{13}\)C vs. Time (left) and \({}^{16}\)O/\({}^{18}\)O vs. Time (right).
Figure 5 shows our solar twin \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratio measurements compared to GCE models and archival measurements (\({}^{13}\)CO only). The top four panels show the predicted evolution of \({}^{12}\)C/\({}^{13}\)C (left) and \({}^{16}\)O/\({}^{18}\)O (right) ratios over stellar metallicity using GCE models from Kobayashi et al. (2011); Prantzos et al. (2018). Similarly, the bottom two panels show the predicted evolution of \({}^{12}\)C/\({}^{13}\)C (left) and \({}^{16}\)O/\({}^{18}\)O (right) ratio over time using GCE models from Romano et al. (2017). The orange circle represents Solar values from Ayres et al. (2013) and the gray points represent the values of the Botelho et al. (2020) solar twin sample. The red points represent the solar twin abundances measured in this paper. Our measurements agree somewhat with GCE predictions for \({}^{12}\)C/\({}^{13}\)C, but are less obvious for \({}^{16}\)O/\({}^{18}\)O. We elaborate on these GCE models and the comparison below.
### Kobayashi et al. (2011) Model
This model includes nucleosynthetic yields from Type Ia supernovae (Nomoto et al., 1997), updated yields from Type II supernovae and hypernovae (Kobayashi et al., 2006), and AGB stars (Campbell and Lattanzio, 2008; Karakas, 2010). As is evident in this particular model, GCE models that neglect the yields of massive, fast-rotating stars treat \({}^{13}\)C as a purely secondary product, which results in a very high (\(\sim 10^{3}\)) \({}^{12}\)C/\({}^{13}\)C ratio at low metallicity, much higher than the solar value of \(\sim 90\) (Kobayashi et al., 2011). Because the chemical enrichment timescale of the galactic halo is longer than in the solar neighborhood, there is a higher \({}^{12}\)C/\({}^{13}\)C ratio predicted in the halo due to a significant contribution from low-mass AGB stars. In Figure 5 (top left panel), we see that this model accurately predicts the solar \({}^{12}\)C/\({}^{13}\)C ratio, but overestimates it for stars in the solar neighborhood like the solar twins from the Botelho et al. (2020) sample. The \({}^{12}\)C/\({}^{13}\)C ratios measured here for HIP 85042 (101 \(\pm\) 17) and HIP 102040 (105 \(\pm\) 48) also fit the model fairly well. The other four solar twins (HIP 29432, HIP 42333, HIP 77052, HIP 79672) demonstrate a significantly lower \({}^{12}\)C/\({}^{13}\)C than both the model predictions and the solar twins in the Botelho et al. (2020) sample. Overall, the decreasing trend in \({}^{12}\)C/\({}^{13}\)C over metallicity is consistent with this GCE model.
All six of our solar twin \({}^{16}\)O/\({}^{18}\)O measurements agree with the Kobayashi et al. (2011) model predictions within the uncertainties, even HIP 102040 and HIP 29432 for which we report the lowest S/N measurements. See Figure 5 (top right panel).
### Prantzos et al. (2018) Model
All the major isotopes of the multi-isotopic elements up to Fe (\({}^{12}\)C, \({}^{14}\)N, \({}^{16}\)O, \({}^{20}\)Ne, \({}^{28}\)Si, \({}^{32}\)S, \({}^{36}\)Ar, \({}^{40}\)Ca, \({}^{54}\)Cr, \({}^{56}\)Fe) are reproduced to better than 15% and, in most cases, to better than 10% in this set of GCE models. Prantzos et al. (2018) provides both a baseline model - which includes yields from low to intermediate mass stars, massive stars, and rotating massive stars - and a second model which considers yields from non-rotating massive and low-to-intermediate mass stars only. These models use the metallicity-dependent yields from Cristallo et al. (2015) and rotating and non-rotating stellar yields from Limongi and Chieffi (2018). The inclusion of rotating star yields in the baseline model significantly reduces the \({}^{12}\)C/\({}^{13}\)C ratio at low metallicities (\({}^{12}\)C/\({}^{13}\)C \(<\) 1000 for [Fe/H] \(>-2.0\)). Even so, the \({}^{12}\)C/\({}^{13}\)C ratios observed in solar twins with near-solar metallicities are significantly lower than the baseline model's predictions. Therefore, in Figure 5 (top/bottom left panels) we plot only the secondary model, not the baseline model. The observed solar twin \({}^{12}\)C/\({}^{13}\)C ratios match the secondary model slightly better.
Figure 4: Example of our analysis approach for the infrared solar spectrum from the ACE-FTS Solar Atlas (Hase et al., 2010). Here we show the six \(\chi^{2}\) values calculated between the solar line profile and each model of varying \({}^{13}\)CO and C\({}^{18}\)O abundances. We fit a parabola to the \(\chi^{2}\) values (\(\leq 175\)) and assign the minimum as the best-fit isotopologue abundance relative to solar. We measure \(1.10\pm 0.10\) and \(1.03\pm 0.42\) x Solar abundances for \({}^{13}\)CO and C\({}^{18}\)O respectively.
Figure 5: Comparison of our six solar twin \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios to GCE models (Kobayashi et al., 2011; Romano et al., 2017; Prantzos et al., 2018) and archival solar twin measurements (Botelho et al., 2020).
It is worth noting that the \({}^{12}\)C/\({}^{13}\)C measurements presented here show the same trend, decreasing over metallicity, as the GCE models, contrary to the Botelho et al. (2020) solar twin sample. We will explore this issue further in Section 5.4 by comparing the carbon isotope ratios in our six solar twins (HIP 29432, HIP 42333, HIP 77052, HIP 79672, HIP 85042, and HIP 102040) to the Botelho et al. (2020) \({}^{12}\)C/\({}^{13}\)C measurements of the same stars.
While these models slightly overestimate the \({}^{12}\)C/\({}^{13}\)C ratios observed in solar twins, they slightly underestimate the \({}^{16}\)O/\({}^{18}\)O ratios. In the top right and bottom right panels of Figure 5, we observe that five of the six solar twin oxygen isotope ratio measurements presented here, as well as the Sun with \({}^{16}\)O/\({}^{18}\)O \(\sim 511\) (Ayres et al., 2013), show ratios greater than the model predictions of \({}^{16}\)O/\({}^{18}\)O \(\sim 300\) for near-solar metallicity stars. It is worth noting that this particular GCE model set would still underestimate the \({}^{16}\)O/\({}^{18}\)O ratio even if we eliminate the lowest S/N measurements in our sample.
### Romano et al. (2017) Model
The four carbon isotope ratio models incorporate different combinations of stellar yields. Models 1 and 2 use the same nucleosynthetic prescription for low- to intermediate-mass stars (LIMS) and massive stars (Karakas, 2010; Nomoto et al., 2013); however, Model 2 additionally includes super-AGB star yields (Doherty et al., 2014, 2014). Model 3 keeps LIMS yields from Karakas (2010) but pulls the massive star nucleosynthesis prescription from multiple sources (Meynet and Maeder, 2002; Hirschi et al., 2005; Hirschi, 2006; Ekstrom et al., 2008). Finally, Model 4 adds super-AGB star yields (Doherty et al., 2014, 2014) to the other nucleosynthesis prescriptions from Model 3. In the bottom left panel of Figure 5, notice that Models 1 and 3 best reproduce solar data (Ayres et al., 2013), while inclusion of super-AGB star carbon synthesis (Models 2 and 4) results in an underestimate of the solar \({}^{12}\)C/\({}^{13}\)C ratio. Nonetheless, all four models predict the current \({}^{12}\)C/\({}^{13}\)C ratios in the solar neighborhood, in agreement with local ISM values, within the errors. Any discrepancies between solar abundances and the local ISM values are typically attributed to the Sun's migration to its current position from a birthplace closer to the galactic center.
If we consider only the \({}^{12}\)C/\({}^{13}\)C ratios from our six solar twin sample, we observe that Models 1 and 2, without fast-rotator yields, perform better and predict the carbon isotope ratios of the entire sample within the errors. Although Models 3 and 4 could potentially be used to describe the isotopic abundance evolution of younger solar twins (Age \(<\) 6 Gyr), particularly those in the Botelho et al. (2020) sample, they do not quite agree with the measurements made in our oldest star (HIP 85042, Age = 6.66 Gyr) nor in the youngest (HIP 42333, Age = 1.01 Gyr).
In terms of the isotopic oxygen ratio (Figure 5, bottom right panel), Models 1 and 2 also reproduce the \({}^{16}\)O/\({}^{18}\)O ratios measured in the Sun and along the Galactic disc. These two models produce the same \({}^{16}\)O/\({}^{18}\)O evolution as they only differ in the treatment of super-AGB stars, and oxygen is mainly produced in massive stars, not in AGB stars. Older stars (Age \(>\) 6 Gyr) are predicted to be more \({}^{18}\)O- poor than the Sun while younger stars (Age \(<\) 6 Gyr) are expected to have little \({}^{18}\)O enrichment relative to the Sun. The older stars in the sample, HIP 85042 and HIP 29432, appear to be enriched in \({}^{18}\)O relative to solar values with \({}^{16}\)O/\({}^{18}\)O ratios lower than the youngest star, HIP 42333, which has a near-solar ratio.
### Comparing Carbon Isotope Ratios
The slight \({}^{13}\)CO overestimate (recall we measure \(1.10\pm 0.04\) and \(1.03\pm 0.42\) x Solar abundances for \({}^{13}\)CO and C\({}^{18}\)O respectively in a solar spectrum; see Section 4.3) mentioned previously led us to compare our solar twin \({}^{12}\)C/\({}^{13}\)C ratios to archival measurements (Botelho et al., 2020). Using this archival sample, we compare our \({}^{12}\)C/\({}^{13}\)C measurements to a sample of stars 10x larger, with isotopic abundance measurements made with optical CH features rather than infrared CO. This allows us to examine \({}^{12}\)C/\({}^{13}\)C trends across different sample sizes and examine the efficiency of this isotopic abundance analysis using optical (Botelho et al., 2020) vs. infrared spectra (this paper).
In Fig. 6, all \({}^{12}\)C/\({}^{13}\)C measurements agree within the uncertainties except the one for HIP 42333. Although a mismatch in one out of six stars may not be unexpected, this discrepancy with the Botelho et al. (2020) measurement of the same star is surprising: disagreement may be expected for our lower signal-to-noise measurements such as HIP 29432 with S/N = 16.6, but not for HIP 42333 with a higher signal-to-noise ratio (S/N = 36.2) and a well-behaved line profile. After applying a \({}^{13}\)CO correction of -0.1 x Solar to our overestimated abundances, we recalculate the \({}^{12}\)C/\({}^{13}\)C ratios and plot them in Fig. 6. The correction brings our four sub-solar \({}^{12}\)C/\({}^{13}\)C ratios closer to archival values. Our two stars with super-solar \({}^{12}\)C/\({}^{13}\)C ratios move further from archival values but still agree within the uncertainties.
The smaller uncertainties for the Botelho et al. (2020) sample are likely due to the higher resolution and S/N of their solar twin spectra. Their sample of solar twins uses HARPS spectra which cover 3780-6910 A at a resolution
\(\rm R\sim 115,000\) and reach a S/N of approximately 800 per pixel. These values are significantly higher than what we obtain in the M band with IRTF/iSHELL at a resolution of \(\rm R\sim 60,000\): We achieve S/N of just \(\sim 20-40\) per pixel. It appears that the \({}^{12}\)C/\({}^{13}\)C ratio may be easier to measure, both in efficiency and precision, using CH spectral features contained in optical spectra. However, there are no ideal oxygen-bearing molecules with spectral features accessible at optical wavelengths. Therefore, measuring the \({}^{16}\)O/\({}^{18}\)O ratio remains possible only using infrared CO features.
In the Botelho et al. (2020) solar twin study, they report that the linear fit of \({}^{12}\)C/\({}^{13}\)C as a function of metallicity shows a positive slope of +56.5 \(\pm\) 7.2 dex\({}^{-1}\). The GCE models discussed above, surprisingly, follow an opposite trend with [Fe/H]. Our solar twin sample demonstrates a weak hint of negative trend for the \({}^{12}\)C/\({}^{13}\)C ratio over metallicity (slope = -157 \(\pm\) 82 dex\({}^{-1}\)) that is in accordance with the steady \({}^{13}\)C enrichment predicted by GCE models. However, this is still consistent with the trend of Botelho et al. (2020) at the 3\(\sigma\) level.
The archival sample also explores trends between the \({}^{12}\)C/\({}^{13}\)C ratio as a function of the isochrone stellar age. They show that \({}^{12}\)C/\({}^{13}\)C ratio is marginally correlated with age (slope of +0.614 \(\pm\) 0.250 Gyr\({}^{-1}\)). In our sample, we observe a positive correlation between the \({}^{12}\)C/\({}^{13}\)C ratio and isochrone age with a slope of +6.65 \(\pm\) 2.15 Gyr\({}^{-1}\) that is again consistent with that of Botelho et al. (2020) at 3\(\sigma\). Both samples are consistent with an overall decrease of the \({}^{12}\)C/\({}^{13}\)C ratio over time.
## 6 Conclusions
Our high resolution spectra (R \(\sim\) 60,000) from IRTF/iSHELL have made it possible to successfully measure the \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios in a sample of solar twin stars, the latter of which has never been measured in such stars before.
Our analysis of \({}^{13}\)CO and C\({}^{18}\)O in HIP 102040 and HIP 29432 in particular, the solar twins with the lowest S/N spectra, exemplifies the challenge of detecting isotopologues in stellar photospheres. Low S/N spectra result in uncertainties that rival the magnitude of the stellar line profiles themselves. We further observe that the CO isotopologue lines are greatly deformed in these sub-optimal spectra, and this leads to distorted stacked line profiles that do not resemble the Gaussian shape of their stellar model counterparts. These effects are visibly more pronounced in the weaker C\({}^{18}\)O lines (Fig. 3) and thus make the C\({}^{18}\)O abundance measurements significantly more difficult than the \({}^{13}\)CO measurements.
We also find that both K band and M band spectra are amenable to CO isotopologue analysis depending on the observing time available and the target star's brightness. In their pioneering analysis, Tsuji and Nakajima (2016) utilized K band spectra to measure \({}^{12}\)C/\({}^{13}\)C in a sample of M dwarf stars. Their study was notably hindered by the low spectral resolution of their K band spectra, which resulted in significant blending of the inherently weak absorption lines from the CO overtone band. Our analysis instead targeted the much stronger CO absorption lines in the fundamental rovibrational band. M band spectra are ideal because they provide access to the CO fundamental rovibrational band that produces the strongest minor isotopologue signatures. M band observations are more practical for brighter targets, however, because they require much more observing time to attain the necessary S/N than K band observations. K band observations may be the only practical choice for fainter stars, but the weaker lines in the CO overtone band make minor isotopologue detection more difficult.
Nonetheless, the solar twin isotopic ratio measurements agree relatively well with the GCE models, within the uncertainties. Since GCE models are generally tailored to solar abundances, we do not expect much deviation from GCE model predictions, at least not from a population of solar twin stars. This isotopic abundance analysis in solar twin stars is a "pilot study"
Figure 6: Comparing \({}^{12}\)C/\({}^{13}\)C ratio measurements for our solar twin sample with archival measurements (Botelho et al., 2020). The closer measurements from each study are to each other, the closer they lie to the blue dotted line. All solar twin measurements are consistent within the uncertainties except for HIP 42333 for which we report a significantly lower \({}^{12}\)C/\({}^{13}\)C ratio (\(\sim\) 58) than in the previous study (\(\sim\) 94). Refer to Section 5.4 for more details.
for similar analyses in GKM-type dwarf stars. The truly interesting isotopic abundance science lies in low metallicity stars.
### Exploring Isotope Ratios in the Low-Metallicity Regime
While there has been some progress in measuring the \({}^{12}\)C/\({}^{13}\)C ratio in near-solar metallicity dwarf stars, the more interesting results come from low-metallicity stars. Low metallicity stars provide us with a glimpse into the chemical composition of the early galaxy, and thus provide key observational constraints to GCE models. Unfortunately, there are virtually no \({}^{12}\)C/\({}^{13}\)C ratio measurements in the literature for these metal-poor dwarf stars. However, a recent measurement by Spite et al. (2021) of a sub-solar \({}^{12}\)C/\({}^{13}\)C ratio (27 \(<\) \({}^{12}\)C/\({}^{13}\)C \(<\) 45) in the metal-poor ([Fe/H] = -2.59) dwarf star HD 140283 (log g = 3.70) suggests a much higher \({}^{13}\)C abundance in the early galaxy than is predicted by GCE models. On its own, however, this single carbon measurement cannot confirm the CNO isotopic composition of the early galaxy. Measurements of the \({}^{18}\)O abundance are significantly more difficult to make in these metal-poor dwarf stars; however, a sufficiently high resolution and high S/N spectrum may reveal the elusive C\({}^{18}\)O features. Regardless of the target or method, we need a larger database of \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O ratios covering a wide range of stellar metallicities to unveil the mysteries of stellar mixing processes, stellar nucleosynthesis, and the isotopic composition of the early galaxy.
Furthermore, inspection of the GCE models in Section 5 demonstrates that the predicted evolution of major isotope abundances fits solar values quite well; however, evolution of the minor isotopes does not fit solar abundances nearly as well. Because minor isotopes are more fragile than their primary isotopes and are destroyed by typical stellar processes, it is difficult to predict minor isotope evolution over time through GCE models. The only hope, then, is to determine the correct [Fe/H] dependence of these minor isotopes. To better explore the production of secondary carbon and oxygen isotopes, we need to expand the isotopic abundance database beyond solar twin stars and measure these isotope ratios in a sample of low-metallicity stars.
### Isotope Ratios in the Low-Mass Regime
In addition to being good GCE model calibrators, carbon and oxygen isotope ratios may also prove to be good tracers of planetary formation, migration, and atmospheric evolution. Of the 5,044 exoplanets discovered to date (NASA Exoplanet Archive), only one has a \({}^{12}\)C/\({}^{13}\)C ratio measurement: TYC-8998-760-1 b, a young accreting super-Jupiter (Zhang et al., 2021). While exciting, this new sub-solar carbon isotope ratio does not have a complementary host star measurement, which makes it difficult to definitively identify a link between host star abundances, planetary abundances, formation location, and migration (Reggiani et al., 2022). It is also believed that the different formation mechanisms between brown dwarfs and super-Jupiters (e.g. gravitational collapse vs. core accretion) may produce distinct \({}^{12}\)C/\({}^{13}\)C signatures in their atmospheres (Zhang et al., 2021). Thus, in addition to stellar and planetary isotopic abundances, brown dwarf isotopic abundances introduce yet another important piece of evidence in the planet formation puzzle.
The first exoplanetary and brown dwarf \({}^{12}\)C/\({}^{13}\)C measurements (Zhang et al., 2021, 2021), are likely to be the first of many. Beyond these studies of such giant, H-dominated bodies, CO and H\({}_{2}\)O isotopologue bands may be detectable even in terrestrial exoplanet atmospheres (orbiting late-type M dwarfs like TRAPPIST-1) using JWST transit transmission spectra throughout the near infrared (1-8 um), especially at 3-4 um (Lincowski et al., 2019). JWST and next-generation ELTs may therefore be capable of detecting isotopologues in exoplanet atmospheres ranging from super-Jupiters down to terrestrial-size planets (Lincowski et al., 2019). For now, however, planetary CNO isotope measurements should focus on super-Jupiters and their host stars. Short-period super-Jupiters close to their host stars will provide isotopic abundances within the CO snowline using transmission spectroscopy while bright super-Jupiters, like TYC-8998-760-1 b, provide isotopic abundances outside the CO snowline using a combination of spectroscopy and direct imaging techniques.
### Future Prospects for Other Isotopes
Other isotopic abundance ratios for nitrogen, magnesium, silicon, and titanium have been measured previously in giant stars, but because internal processes change these ratios throughout a giant star's lifetime, they do not provide the same constraints on GCE or planetary formation mechanisms as dwarf stars do. Thus, the next important step in building an isotopic abundance database is to go beyond the CNO isotopes and measure those previously studied in giant stars, as well as those with the greatest implications for exoplanet formation and composition. Elemental nitrogen abundances are routinely measured in dwarf stars, but the \({}^{14}\)N/\({}^{15}\)N ratio remains unstudied. Nitrogen isotopes would complete the CNO isotopic abundance trifecta, but there are practically no nitrogen isotope measurements for dwarf stars in the literature despite the extensive research done by GCE modelers to predict the evolution of the \({}^{14}\)N/\({}^{15}\)N ratio (Romano et al., 2017). Optical \({}^{12}\)C\({}^{15}\)N absorption features are sensitive to the \({}^{15}\)N abundance in giant stars (Hedrosa et al., 2013) and may be useful in measuring \({}^{14}\)N/\({}^{15}\)N ratios in dwarf stars as well.
Past measurements of the (\({}^{25}\)Mg, \({}^{26}\)Mg)/\({}^{24}\)Mg ratios in cool dwarf stars show a decreasing trend with [Fe/H], in accordance with GCE models (Yong et al., 2003). Further analysis of cool thick-disk and halo stars reveals a different trend with metallicity that requires increased \({}^{25}\)Mg and \({}^{26}\)Mg production, which may be attributed to intermediate-mass asymptotic giant branch stars (Yong et al., 2003); GCE models can therefore still gain important constraints from Mg isotopes. Silicon isotopic abundance measurements in a sample of evolved M-type stars also make good probes of the evolution of metallicity in the ISM. There is a tentative correlation between the \({}^{29}\)Si/\({}^{30}\)Si ratio and the mass-loss rates of these evolved stars, which act as a proxy for stellar age (Peng et al., 2013). Mg and Si isotopes in exoplanet host stars may also provide useful constraints on exoplanet composition (Suarez-Andres et al., 2018), and so they form another key set of isotopes to target for the stellar and planetary isotopic abundance database.
Titanium is also a good target for isotopic abundance studies, with features that can be measured in GKM dwarf stars and, potentially, in exoplanet atmospheres as well. Titanium has five stable isotopes (\({}^{46-50}\)Ti), with about 25% of its abundance partitioned among the minor isotopes, much greater than the relative abundances of the minor H, C, and O isotopes (all \(\sim 2\%\)) (Serindag et al., 2021). These higher relative abundances make Ti isotope features more accessible in challenging observations than those of most other elements. Oxygen and silicon burning in massive stars is responsible for the production of \({}^{46}\)Ti and \({}^{47}\)Ti, while \({}^{48}\)Ti, \({}^{49}\)Ti, and \({}^{50}\)Ti are produced mainly in Type Ia and Type II supernovae (Hughes et al., 2008). These distinct production channels make dwarf star Ti isotopes great candidates for testing GCE models. There have been previous measurements of Ti isotope ratios in M dwarf stars of near-solar metallicity with good GCE agreement (Chavez and Lambert, 2009), but as we have expressed in this paper, the more interesting science lies in sub- and super-solar-metallicity stars. In terms of exoplanets, simulations of Ti isotope measurements in exoplanet atmospheres show that an hour of observing time on 8-meter-class telescopes is sufficient to reveal Ti isotopes in the atmospheres of wide-separation super-Jupiters (Serindag et al., 2021).
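For reference, using terrestrial isotopic fractions as an approximate illustration (these particular percentages are taken from standard isotope tables rather than from the works cited above), the minor Ti isotopes together account for roughly a quarter of the element's atoms:
\[
f_{\mathrm{minor}}(\mathrm{Ti}) \approx f(^{46}\mathrm{Ti}) + f(^{47}\mathrm{Ti}) + f(^{49}\mathrm{Ti}) + f(^{50}\mathrm{Ti}) \approx 8.3\% + 7.4\% + 5.4\% + 5.2\% \approx 26\%,
\]
compared with \(f(^{13}\mathrm{C}) \approx 1.1\%\) and \(f(^{18}\mathrm{O}) \approx 0.2\%\) for the minor CNO isotopes discussed earlier.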
### Concluding Remarks
In conclusion, this isotopic abundance analysis in solar twin stars served as a "pilot study" for similar analyses in GKM-type dwarf stars of various ages and metallicities, and even for exoplanet host stars. The agreement between the \({}^{12}\)C/\({}^{13}\)C and \({}^{16}\)O/\({}^{18}\)O measurements made in this paper and GCE model predictions demonstrates the accuracy and precision of calculating isotopic abundances in solar twin stars. The next important step is to fine-tune this process for M dwarf and low-metallicity stars, because these stars stand to benefit the most from new abundance-age relationships. Furthermore, as the predominant type of exoplanet host stars, M dwarfs provide the best testing sites for abundance-formation relations and planetary isotopic abundance measurements. Together, these stellar and planetary isotopic abundances may eventually unveil "the missing links" between exoplanet formation mechanisms and exoplanet atmospheric evolution, chemical clocks and stellar ages, and the chemical evolution of the galaxy.